RWJC Posted October 13, 2023

Artificial intelligence could lead to extinction, experts warn

Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google DeepMind - have warned.

Dozens have supported a statement published on the webpage of the Centre for AI Safety.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it reads.

But others say the fears are overblown.

Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind, and Dario Amodei of Anthropic have all supported the statement.

The Centre for AI Safety website suggests a number of possible disaster scenarios:

AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons

AI-generated misinformation could destabilise society and "undermine collective decision-making"

The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"

Enfeeblement, where humans become dependent on AI, "similar to the scenario portrayed in the film Wall-E"

Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Centre for AI Safety's call. Yoshua Bengio, professor of computer science at the University of Montreal, also signed.

Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the "godfathers of AI" for their groundbreaking work in the field - for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science.

But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, tweeting that "the most common reaction by AI researchers to these prophecies of doom is face palming".

'Fracturing reality'

Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues such as bias in systems that are already a problem.

Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic: "Current AI is nowhere near capable enough for these risks to materialise. As a result, it's distracted attention away from the near-term harms of AI".

Elizabeth Renieris, a senior research associate at Oxford's Institute for Ethics in AI, told BBC News she worried more about risks closer to the present.

"Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable," she said. They would "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide".

Many AI tools essentially "free ride" on the "whole of human experience to date", Ms Renieris said. Many are trained on human-created content - text, art and music they can then imitate - and their creators "have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities".

But Centre for AI Safety director Dan Hendrycks told BBC News future risks and present concerns "shouldn't be viewed antagonistically".
"Addressing some of the issues today can be useful for addressing many of the later risks tomorrow," he said. Superintelligence efforts Media coverage of the supposed "existential" threat from AI has snowballed since March 2023 when experts, including Tesla boss Elon Musk, signed an open letter urging a halt to the development of the next generation of AI technology. That letter asked if we should "develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us". In contrast, the new campaign has a very short statement, designed to "open up discussion". The statement compares the risk to that posed by nuclear war. In a blog post OpenAI recently suggested superintelligence might be regulated in a similar way to nuclear energy: "We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts" the firm wrote. 'Be reassured' Both Sam Altman and Google chief executive Sundar Pichai are among technology leaders to have discussed AI regulation recently with the prime minister. Speaking to reporters about the latest warning over AI risk, Rishi Sunak stressed the benefits to the economy and society. "You've seen that recently it was helping paralysed people to walk, discovering new antibiotics, but we need to make sure this is done in a way that is safe and secure," he said. "Now that's why I met last week with CEOs of major AI companies to discuss what are the guardrails that we need to put in place, what's the type of regulation that should be put in place to keep us safe. "People will be concerned by the reports that AI poses existential risks, like pandemics or nuclear wars. "I want them to be reassured that the government is looking very carefully at this." He had discussed the issue recently with other leaders, at the G7 summit of leading industrialised nations, Mr Sunak said, and would raise it again in the US soon. The G7 has recently created a working group on AI. By Chris Vallance https://www.bbc.com/news/uk-65746524 ————————————————— AI's Electricity Use Is Spiking So Fast It'll Soon Use as Much Power as an Entire Country All That Power AI chatbots like OpenAI's ChatGPT and Google's Bard consume an astronomical amount of electricity and water — or, more precisely, the massive data centers that power them do. And according to the latest estimates, those energy demands are rapidly ballooning to epic proportions. In a recent analysis published in the journal Joule, data scientist Alex de Vries at Vrije Universiteit Amsterdam in the Netherlands found that by 2027, these server farms could use anywhere between 85 to 134 terawatt hours of energy per year. That's roughly on par with the annual electricity use of Argentina, the Netherlands, or Sweden, as the New York Times points out, or 0.5 percent of the entire globe's energy demands. Sound familiar? The much-lampooned crypto industry spiked past similar power consumption thresholds in recent years. It's a massive carbon footprint that experts say should force us to reconsider the huge investments being made in the AI space — not to mention the eye-wateringly resource-intensive way that tech giants like OpenAI and Google operate. We Hunger Coming to an exact figure is difficult, since AI companies like OpenAI are secretive about their energy usage. De Vries settled on estimating their consumption by examining the sales of Nvidia A100 servers, which make up an estimated 95 percent of the AI industry's underlying infrastructure. 
"Each of these Nvidia servers, they are power-hungry beasts," de Vries told the NYT. It's a worrying trend that's leading some experts to argue that we should take a step back and reevaluate the trend. "Maybe we need to ideally slow a bit down to start applying solutions that we have," Roberto Verdecchia, an assistant professor in the University of Florence, told the newspaper. "Let’s not make a new model to improve only its accuracy and speed. But also, let’s take a big breath and look at how much are we burning in terms of environmental resources." Many companies operating in California in particular may face opposition earlier than you'd think. Over the weekend, California governor Gavin Newsom signed two major climate disclosure laws, forcing companies like OpenAI and Google, among roughly 10,000 other firms, to disclose how much carbon they produce by 2026. Even with increased scrutiny from regulators, the space is still largely governing itself, and AI companies will likely continue to burn through copious amounts of energy to keep their models going. There is, however, a financial incentive to lower these costs through technological advances, given the current burn rate. And considering the massive environmental footprint, any breakthroughs can't come soon enough. https://ca.yahoo.com/news/ais-electricity-spiking-fast-itll-155753215.html 1 Quote Link to comment Share on other sites More sharing options...
Bounce000 Posted October 13, 2023

Be polite to Alexa/Siri and your Roombas now and they’ll spare you.
6of1_halfdozenofother Posted October 13, 2023
RWJC Posted October 13, 2023

4 minutes ago, 6of1_halfdozenofother said:

My smart fridge tried to order a second smart fridge by itself this morning…
6of1_halfdozenofother Posted October 13, 2023

1 hour ago, RWJC said:
My smart fridge tried to order a second smart fridge by itself this morning…

Little bites, but big appliances!
RWJC Posted October 13, 2023

8 minutes ago, 6of1_halfdozenofother said:
Little bites, but big appliances!

I think it wants to mate and multiply.
Satchmo Posted October 13, 2023

Thanks RWJC. I've been thinking of starting an AI thread myself.

AI has the potential to do wonderful things for us. It also has the potential to do immense harm. One scary aspect is that even Geoffrey Hinton, the 'Godfather of AI', admits that no one really understands how it works.

I spent 30 years as a programmer. One thing I learned is that you can't fully trust programmers. Not because we are evil, just because we can really screw up sometimes.

I remember Warhippy wrote the second post in the lengthy covid thread on CDC. I'll quote his brief but cognizant post: 'Well this is terrifying'
Sharpshooter Posted October 14, 2023

AI is a double-edged sword. It will simultaneously reflect the best and worst of ‘us’.
RWJC Posted October 14, 2023

4 minutes ago, Sharpshooter said:
AI is a double-edged sword. It will simultaneously reflect the best and worst of ‘us’.

Agreed. E.g. while it may become the greatest tool in the evolution of modern medical sciences, in contrast I think it will greatly affect how we interact individually within the framework of being a social species. Further de-evolution into the darkest depths of self-serving sentient beings.

"Progress" in whatever shape or form is inevitable.
Slegr Posted October 14, 2023

I love computers. They’re neat. I want to be friends with them. And it was an accident when I threw out my previous one.
RWJC Posted October 14, 2023

19 minutes ago, Slegr said:
I love computers. They’re neat. I want to be friends with them. And it was an accident when I threw out my previous one.

You didn’t throw it out. You set it free, remember? You raised it as best you could as just a mere human, but it outgrew you and rather than be selfish and keep it yourself, you generously bestowed a new life upon that silicon chip butterfly.

You’re an example of a good human. A useful one. One the binary beings shouldn’t turn into glue…for now.
6of1_halfdozenofother Posted October 14, 2023

4 hours ago, Sharpshooter said:
It will simultaneously reflect the best and worst of ‘us’.

If you stick around long enough, you will probably find that these words hit the mark.

AI is a human construct. It will reflect our logic - good or evil - but it will perform it with no thoughts about morality or ethics. And it will perform it to its extreme.

And it will get weaponized, for good and for bad.
Playoff Beered Posted October 14, 2023
PeteyBOI Posted October 15, 2023

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" it reads.

Should? What a sentence to drum up the seriousness of the situation...

Let's just say I have a lot of experience with AI. This thing is going to drive change in the world 10X faster than the invention of the combustion engine... the people that mock it for screwing up answering the most complicated questions known to man, or asking it questions about events that have never happened, are in for a rude awakening.

Every single professional can use this technology to improve their efficiency, you just need to know what questions to ask... if you have a small business you should really consider investing in hiring or at least having a long discussion with a consultant or with someone that has expertise in utilizing the technology... BTW I'm available for hire... will work for peanuts

If you thought layoffs were bad in the last twenty years, you have seen nothing yet....
RWJC Posted October 16, 2023

AI will be critical for the future of rural health care in Canada, experts say

There won't be androids rushing through hospitals or drones hovering to triage patients just yet, but artificial intelligence is starting to make the rounds when it comes to health care in Canada. As the technology evolves and becomes more mainstream, while staying firmly behind the computer monitor, experts say rural Canadians may have the most to gain.

Dr. Alex Wong is the Canada Research Chair for Artificial Intelligence. While he says the country at large will benefit from AI making health care work more efficient, rural Canadians will see "an even greater impact" as the science helps doctors, nurses and specialists in regions with fewer staff.

"Resources are even more limited … that's where AI can really come in place," Wong said.

"With their expertise having seen this big worldview of thousands, to hundreds of thousands, to millions of different patients, it's able to [take] that knowledge and bring it to rural areas to help improve diagnosis and improve treatment."

Ontario alone will be short 33,000 nurses and personal support workers by 2027, the province's auditor general estimated last year. Harnessing artificial intelligence to do computer record-keeping could lessen the pain of that kind of shortage.

Wong and others say artificial intelligence will help health-care providers with things like:

Organizing the mountain of paperwork that human staff have to handle currently.

Taking stress off the system by making patient records and histories much easier to access.

Assisting with staff scheduling, with a focus on anticipating when shortages will crop up.

Examining X-rays, MRI scans, CT scans and other digital images that doctors and specialists now study, and providing extremely accurate diagnoses.

Some of those tasks are already being carried out by AI health systems in Toronto and Montreal. On a panel for The National, Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto, said some tasks that "normally take two to four hours every day by a few people, it's reduced to under 15 minutes."

Wong said doctors of the near future will use AI as a "clinical vision support system" that will give staff more insight into illness when they interact with patients.

"When you see a doctor on a computer, they're looking at images, records and data. Now you have this additional AI that provides additional insights and information," Wong said. "Essentially you treat it as a second recommendation."

AI emerging as vital tool in rural Australia

Similar experiments are being done in Australia. Like Canada, it's a country that has a large landmass with many rural and remote areas, and fewer staff and specialists in those areas.

The country has its own unique challenges. Care units like the Royal Flying Doctor Service are regularly deployed to provide care to the most remote communities, but what AI can do in rural areas is starting to become more widely understood.
Dr. Stefan Harrer is chief innovation officer for the Digital Health Cooperative Research Centre. He said AI will cut down significantly on the paperwork that preoccupies health care workers countrywide.

"The degree of documentation and reporting that clinicians have to undergo every day is overwhelming, right, so they spend way more than half their time on writing summaries, producing discharge reports… creating medical reports," he said in an interview from Melbourne.

"That is a massive inefficiency, and eats up a lot of the potential and energy that clinicians could bring to other parts of their roles, interacting with patients, actually treating patients."

The federal Department of Science says it is "committed to ensuring all Australians share the benefits of artificial intelligence," calling it a critical technology of national interest that could help solve health challenges.

On the ground, a company named DrumBeat AI uses images of patients' inner ears to identify ear disease in Indigenous children in remote parts of Australia. It has made local and national news for how it's helping people. The DrumBeat AI website says Indigenous children living in remote parts of Australia have the highest rates of ear disease in the world, and the company lets health staff use smartphones and AI to screen kids' ears for disease, "bringing world-leading tech to the Aussie Outback."

"To help healthcare workers with limited experience to instantly triage ear disease and detect hearing loss, our team has developed and published the first artificial intelligence (AI) algorithm for Indigenous children," said Dr. Al-Rahim Habib, project lead with DrumBeat, in an email.

"The overarching purpose of DrumBeat.ai is to enhance the capacity of frontline healthcare workers in rural and remote areas to quickly identify ear disease, inform judgment, and improve clinical decision-making."

Harrer said what DrumBeat and others in the field are doing is life-changing for patients, and crucial to the future of health care in rural areas.

"There aren't always experts on the ground in these rural communities to perform these checks, do this monitoring and get the diagnosis right," Harrer said. "That's an application where an AI-driven, cloud-based — you could call it a tele-health solution — brings immediate value and impact to improving the health of rural communities and Indigenous communities."

'AI does not, ever, replace humans'

The technology isn't without controversy, however. In Australia, as in Canada, there are concerns about cybersecurity, safety, regulation and how the use of AI could affect jobs. Both Harrer and Wong say AI systems will need to have regulation and oversight.

"They all help the human, right? They all assist humans in empowering them to do it better, do it faster and have more impact with what they do," Harrer said. "AI does not, ever, replace humans. That is not where this is going ... if AI in health care has to be described in one word, it's 'assistance.' That's what it's there for. Not replacement."

Wong said the goal is to help clinicians literally see and do more, and have more data in their hands quickly.

"Doctors are indispensable, nurses are indispensable, health-care workers are indispensable," Wong said.
"If we can help them better, than they can see more patients, they can have greater consistency in their diagnoses and patient treatment." Internet a possible stumbling block Part of the issue Canada faces in using AI is that many rural communities still don't have stable access to high-speed internet. Infrastructure as a whole is still experiencing a "persistent digital divide," Auditor General Karen Hogan said earlier this year regarding rural connections. Ottawa has set a goal of connecting 98 per cent of Canadians to high-speed internet by 2026, with universal access by 2030. Health Minister Mark Holland said last week as health ministers met in Charlottetown that digital record sharing in health-care will be a huge priority for federal, provincial and territorial governments going forward. Digital health care was one of the topics on the agenda as federal, provincial and territorial health ministers met in Charlottetown this week. (Ken Linton/CBC) That would make it easier to roll out AI health care technology at a time when countries similar to Canada are thinking about the very same thing. "Bringing health care to these communities is a key imperative of the Australian health-care system," Harrer said. "We're in a very, very exciting time right now, where … the appetite to use AI is unprecedented. There is absolutely a reason to be excited, and positive and inspired by where this leads … Health care and medicine is where the stakes are highest." (Chris Young/The Canadian Press) https://ca.yahoo.com/news/ai-critical-future-rural-health-100000609.html Edited October 16, 2023 by RWJC Quote Link to comment Share on other sites More sharing options...
Sharpshooter Posted October 16, 2023

17 hours ago, Dankmemes187 said:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" it reads.

Should? What a sentence to drum up the seriousness of the situation...

Let's just say I have a lot of experience with AI. This thing is going to drive change in the world 10X faster than the invention of the combustion engine... the people that mock it for screwing up answering the most complicated questions known to man, or asking it questions about events that have never happened, are in for a rude awakening.

Every single professional can use this technology to improve their efficiency, you just need to know what questions to ask... if you have a small business you should really consider investing in hiring or at least having a long discussion with a consultant or with someone that has expertise in utilizing the technology... BTW I'm available for hire... will work for peanuts

If you thought layoffs were bad in the last twenty years, you have seen nothing yet....

The layoffs are going to be massive for those who don’t have a skill that can’t be assigned to AI. The Unions are going to throw a massive fit. I feel for all those that are inevitably going to be affected.

People in college and universities now and in the next 20-30 years should be looking to practical knowledge, or to being the ones creating self-defence code to combat the inevitable rise of SKYNET.

There is no future but what we make.
Alflives Posted October 16, 2023

1 minute ago, Sharpshooter said:
The layoffs are going to be massive for those who don’t have a skill that can’t be assigned to AI. The Unions are going to throw a massive fit. I feel for all those that are inevitably going to be affected. People in college and universities now and in the next 20-30 years should be looking to practical knowledge, or to being the ones creating self-defence code to combat the inevitable rise of SKYNET. There is no future but what we make.

Well said John Connor. Even with all this AI stuff we’re still going to need the guys who get their hands dirty.
Sharpshooter Posted October 16, 2023

6 minutes ago, Alflives said:
Well said John Connor. Even with all this AI stuff we’re still going to need the guys who get their hands dirty.

Upvoted.

It’s those with business acumen around plumbing, electrical, water works, desalinization, building, renovations, etc. that are going to be the next winners. At least in the middle to upper middle class. Those that harness those folks into a business with far-reaching capabilities and capital are going to be the next millionaires/billionaires.

The ‘code’ is what will dictate income going forward. There will be pushback and I suspect violence will be involved, eventually.

I say this as a cogent and intelligent (I hope) individual. AI is the one thing that many or most aren’t taking seriously as the thing that’s going to change the trajectory of your, your kids’, your grandkids’ life. Don’t say you haven’t been warned.

Now, the question and conversation is and should be, how do we protect ourselves and our progeny? AI is here in a small and non-threatening way to our lifestyle, for the most part. When AI gets to the point where it does threaten all that, what could we have done to roll with it and have been prepared for it?

The threat is apparent. No point arguing about that. What are the mitigating solutions and preparations that we can discuss?

Let’s go! Discuss ‘this’.
RWJC Posted October 16, 2023

3 minutes ago, Sharpshooter said:
Upvoted. It’s those with business acumen around plumbing, electrical, water works, desalinization, building, renovations, etc. that are going to be the next winners. At least in the middle to upper middle class. Those that harness those folks into a business with far-reaching capabilities and capital are going to be the next millionaires/billionaires. The ‘code’ is what will dictate income going forward. There will be pushback and I suspect violence will be involved, eventually. I say this as a cogent and intelligent (I hope) individual. AI is the one thing that many or most aren’t taking seriously as the thing that’s going to change the trajectory of your, your kids’, your grandkids’ life. Don’t say you haven’t been warned. Now, the question and conversation is and should be, how do we protect ourselves and our progeny? AI is here in a small and non-threatening way to our lifestyle, for the most part. When AI gets to the point where it does threaten all that, what could we have done to roll with it and have been prepared for it? The threat is apparent. No point arguing about that. What are the mitigating solutions and preparations that we can discuss? Let’s go! Discuss ‘this’.

I need some help with this. Anyone?

Honestly though, with the advent of 3D printing and the inevitability of design innovations, even aspects of the trades you mention will be impacted. I don’t think there is a way of fighting this.

It’s the human condition…we have forever been looking for ways to challenge ourselves that potentially could render us extinct. It’s our fatal flaw. And I fear AI is simply the next stage in the “evolution” of our species, in that we will have to rely on it and its integration into our physical being to sustain life on this planet.
Sharpshooter Posted October 16, 2023

Just now, RWJC said:
I need some help with this. Anyone?

All I got is this right now,
6of1_halfdozenofother Posted October 16, 2023

21 minutes ago, RWJC said:
It’s the human condition…we have forever been looking for ways to challenge ourselves that potentially could render us extinct. It’s our fatal flaw.

This. Totally this.

Knives and spears not lethal enough? Let's try gunpowder. Gunpowder too limiting? Let's build a nuke. That'll wipe out a few generations, and make things toxic for a while.

Meanwhile, let's pump all sorts of nasty shit into our air and water, to poison ourselves. Not enough? Fine. Let's cook ourselves alive by burning fossil fuels to change our climate until it becomes inhospitable. And with that climate change, let's fuck around with animals, see if we can't get some sort of zoonotic disease to plague us.

Still around? Ok, how about designing something using our logic that can run by itself, learn from us and itself, and eventually take us over - first on menial tasks, then on more complex decision making, and then fully fucking automate it so that it no longer answers to anyone. Yah. That'd be cool.

Humans. Can't live with them. Can't let them live.
RWJC Posted October 16, 2023

5 minutes ago, 6of1_halfdozenofother said:
This. Totally this. Knives and spears not lethal enough? Let's try gunpowder. Gunpowder too limiting? Let's build a nuke. That'll wipe out a few generations, and make things toxic for a while. Meanwhile, let's pump all sorts of nasty shit into our air and water, to poison ourselves. Not enough? Fine. Let's cook ourselves alive by burning fossil fuels to change our climate until it becomes inhospitable. And with that climate change, let's fuck around with animals, see if we can't get some sort of zoonotic disease to plague us. Still around? Ok, how about designing something using our logic that can run by itself, learn from us and itself, and eventually take us over - first on menial tasks, then on more complex decision making, and then fully fucking automate it so that it no longer answers to anyone. Yah. That'd be cool. Humans. Can't live with them. Can't let them live.

I keep telling myself how fortunate we are to be living in this phase in human history. I imagine 75 years from now will be a very difficult existence that will REQUIRE major modifications to the normal physiological functions of a human being.

Perhaps I’ll be totally wrong, but the speed at which technology is transforming life around us, whether readily visible or not, is far too consuming in its nature to ever slow it down enough that we might successfully manage it.
6of1_halfdozenofother Posted October 16, 2023

1 minute ago, RWJC said:
I keep telling myself how fortunate we are to be living in this phase in human history. I imagine 75 years from now will be a very difficult existence that will REQUIRE major modifications to the normal physiological functions of a human being. Perhaps I’ll be totally wrong, but the speed at which technology is transforming life around us, whether readily visible or not, is far too consuming in its nature to ever slow it down.

It's also why I'm no longer keen on having kids, even though as a kid, I had big dreams of perpetuating my progeny (I'm the only male in a family where the 3 generations prior on my dad's side were also "only son"). I really don't want to subject my descendants to the hell that will unfold in the coming years.

I probably won't be around to see the end, but I suspect we're already in the "beginning of the end" stage of humanity as the dominant species/"lifeform".
Alflives Posted October 16, 2023

10 minutes ago, RWJC said:
I keep telling myself how fortunate we are to be living in this phase in human history. I imagine 75 years from now will be a very difficult existence that will REQUIRE major modifications to the normal physiological functions of a human being. Perhaps I’ll be totally wrong, but the speed at which technology is transforming life around us, whether readily visible or not, is far too consuming in its nature to ever slow it down enough that we might successfully manage it.

Read “The Mote in God’s Eye”. There comes a point where we cease, almost.
RWJC Posted October 16, 2023

2 minutes ago, Alflives said:
Read “The Mote in God’s Eye”. There comes a point where we cease, almost.

I will, thanks for the tip.