Forums / Discussion / Serious Debate

14,150 total conversations in 684 threads

+ New Thread


Futurology Discussion Time

Last posted Dec 30, 2017 at 08:34PM EST. Added Nov 18, 2017 at 01:08AM EST
127 posts from 12 users

Speaking of disease and gene engineering, I think we're probably going to create a new species of humans in the next fifty years.

"How?"
Well, what is being looked at very carefully is recoding the human genome. I'm going to use an analogy here; yes, I am aware the analogy has a couple of flaws in it, but it's an analogy meant to boil a complex discussion down into a brief concept for the sake of putting an idea forth. You know how much malware, how many viruses and such there are for Windows operating systems? If Microsoft redid most of the coding, most old viruses wouldn't work anymore. The reason they don't do it is that while the result would be functionally the same, it would take a lot of effort.

"What does this have to do with gene engineering?"
Well, if you redid a lot of the human genome, simply put, most contagions wouldn't know what the fuck they were working with. Not to mention you could use artificial DNA molecules; granted, we've only made one additional base pair so far, but you get the point. Using unnatural bases can drastically reduce how long a chromosome has to be, because you can encode more types of amino acids; the good thing about making a chromosome shorter is that it reduces the chance of replication errors. Those artificial base pairs don't do anything yet, though, because they're new.
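To put a rough number on the "more types of amino acids" point: the size of the codon space is just (number of bases) raised to the codon length. A quick sketch (the six-letter alphabet is hypothetical, e.g. the four natural bases plus one synthetic pair):

```python
# Codon space for a genetic alphabet: with three-letter codons,
# the number of distinct codons is bases ** 3.
def codon_count(num_bases: int, codon_len: int = 3) -> int:
    return num_bases ** codon_len

natural = codon_count(4)   # A, C, G, T -> 64 possible codons
expanded = codon_count(6)  # add one synthetic base pair -> 216 possible codons
print(natural, expanded)   # 64 216
```

With over three times the codons available, each codon can carry more information, which is the sense in which an expanded alphabet could let sequences be shorter.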

The downside of this is that if we do this then people who do have their genome recoded will have drastic and massive genetic differences. They'll look the same, have the same organs and everything, but their genome would bear very little resemblance to modern humans.

Basically, if we stop all disease forever for humans, we might in turn create a new species of humans.

In all this we see technology as the measure of change. Looking back 50 years to 1967 I see little actual change in much of anything. Yes, we communicate with more people over a larger distance via a wider range of connections and with more speed. But we still have the same range of ideas about politics, religion, society, and economics. And if we have the same ideas in conflict with each other nothing will really change. So. My vision of the next 50 years is:

1) If it can be done the rich will do it somewhere.
2) If the rich do it the less rich will want to do it to look like they are the rich.
3) If the less rich find a way to do it the merely financially stable will want to do it so that they look like the less rich.
4) If the merely financially stable find a way to do it, the barely financially stable will want to do it so they look like the financially stable.
5) If the barely financially stable find a way to do it, the financially unstable will want to do it so they look at least barely financially stable.

Since the number of people in each group is larger than the preceding group by a factor of 2, in order to maximize profits corporations will find a way to get each group, in turn, to afford it. Thus, anything which can be done will be done by everybody.
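The numbered list above describes a geometric cascade. A toy calculation (the tier labels and the base size of 1 are made up purely for illustration) shows why the lower tiers end up being most of the market:

```python
# Five wealth tiers, each twice the size of the one above it.
tiers = ["rich", "less rich", "stable", "barely stable", "unstable"]
sizes = [2 ** i for i in range(len(tiers))]  # 1, 2, 4, 8, 16

total = sum(sizes)  # 31
for name, size in zip(tiers, sizes):
    print(f"{name}: {size}/{total} of potential customers")

# The bottom two tiers alone are 24 of the 31 potential customers (~77%),
# which is why a profit-maximizing corporation works its way down the ladder.
```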

Given this is the scenario of the past and of the future we can assume that in 50 years the technologies developed in the next 30 years will be in the hands of everybody (cell phones are the current example as you find them in all parts of the world, even the poorest).

Given that, as technologies develop to replace workers, those workers at the bottom of the ability or skill range will not be employed -- except as necessary consumers. Thus, the money to pay the lowest level of consumer will come from the upper classes via taxes, but the quality of the products purchased by the lowest level of consumer will be quite low, and the gap between the quality of top-tier products and bottom-tier products will increase. At the top, a person will purchase a device which links him or her to everything, anticipates what he or she needs at the moment, supplies it "just in time," and automates their entire life and work, with that person's particular abilities "linked in" to the global system to create new and interesting products. At the bottom, the same device will link the person to the global system and track their debt level until he or she is so far in debt that they can no longer be connected. Legally they will lose their vote and their home, and become the disconnected.

The population will stabilize around 12 billion with 80% living in "low rent" areas and the rest in physically isolated (read "gated") communities. Since transportation will be automated public transit will be for the "poor" and private transportation will be a privilege of the rich.

The gap between the 20% and the 80% will be, at first, along intelligence and creativity lines, but eventually the rich will simply ignore any measures of intelligence as they shove their sons and daughters into the jobs available to the connected, and then cover up their sons' and daughters' poor performance by purchasing whatever they can from whatever source they can. There will be a thriving trade in "black market IQ" performance-enhancing drugs, or even "stand-in" persons who were born in the 80% but have the talents to function in the 20% even if they don't have access.

These things will be mostly in the developed world as the rest of the world will be pretty much ignored and eventually cut off. Immigration will be a thing of the past except for the movement of the top 20% of the world into Europe, the US, Japan and perhaps a couple of other countries, maybe Australia and New Zealand.

China will be lost again. Why? Because the middle class of China will continue to grow artificially as China continues to thumb its nose at basic economic realities and tries to continue its rapid economic growth. One of the most basic and fundamental rules of history is that economics always wins in the end. Any government which artificially changes the value of anything for too long will have to pay the penalty for its attempts. And that penalty, in China's case, will be a massive war that ruins them. As they continue to stumble (they have already started) and their economy begins to play the wild pendulum game (as they try to stabilize it without reforming it), the swings will mean vast numbers of those newly minted middle-class consumers will be returned to poverty, and with that the leaders of China will find it necessary to find some way to avoid the massive revolt about to explode. To do that they will manufacture some foreign (external) threat and go to war. My guess is it will be against Russia, as Russia has the vast resources needed to fuel the Chinese economy and keep it going for at least 20-30 years.

But since the war will not reform their economy, it will destabilize it even more as they borrow from the US and other countries. If the war lasts more than a year, the Chinese economy will never recover. If it lasts less than a year, the Chinese economy will not reform and it will not recover.

So the US, Europe and Japan will thrive under a vast socialist/capitalist hybrid with 20% doing all the work and the rest of us receiving "subsistence" allotments, while the rest of the world will be pretty much the same as always, a back-water, relatively primitive area of little hope or development. And China will join them.

AJ

@ajqrtz
interesting. What do you think the fate of democracy will be, though? There are a lot of theories suggesting democracies exist because investing in people is profitable: by giving people the vote, rulers run less overall risk of losing power, since mass revolt won't happen. This explains why countries that have oil and can't make as much profit off their people tend to be undemocratic, as their primary resource isn't human labor. But if the masses are no longer profitable because they cannot work, and the government now has the weapons to put down any revolt, can democracy even survive those conditions?

@ajqrtz
The sad thing is we're already seeing China slowly starting to swing that way. It was in the news a while back (though it's pretty obvious that shills were trying to bury it): recently the Chinese government started defaulting on some of its loans. That's a drastic difference from how the USA and other countries carry debt; if the USA started defaulting on its debts, people would be shitting themselves.

@documents1
More than likely, computers will just run the government. Recently SAM, obviously an AI, was developed as a politician that polls, gets feedback from people, and engages in dialogue with everyone in order to form an opinion. If you can engage with voters constantly, 24/7, and gather their opinions on matters all the time, then your government can form bills that, while not perfect, are capable of appeasing most people.

Do I think this'll make the world more democratic? No. It'll just make it so that governments' policies and actions aren't controversial and are, if anything, wildly loved. Dictatorships would still do shady shit; it's just that they would also pump out legislation meant to reduce conflict and increase happiness.

Tldr; the future of governments is farming for happiness like an MMO player farming for XP.

"Tldr; the future of governments is farming for happiness like an MMO player farming for XP."

Where do you think this would cause the world to shift ideologically? Will it be more left-wing or right-wing? Will it be permanently "center" for efficiency?

NO! wrote:

"Tldr; the future of governments is farming for happiness like an MMO player farming for XP."

Where do you think this would cause the world to shift ideologically? Will it be more left-wing or right-wing? Will it be permanently "center" for efficiency?

Permanently center, for efficiency. It's easier to farm XP if your goal is to appease as much of the entire political spectrum as possible.

Last edited Nov 27, 2017 at 11:32PM EST

NO! wrote:

"Tldr; the future of governments is farming for happiness like an MMO player farming for XP."

Where do you think this would cause the world to shift ideologically? Will it be more left-wing or right-wing? Will it be permanently "center" for efficiency?

It's also possible that there's no ideal ideology that would emerge from that, and that it would essentially feedback-loop any ideology into the one that people want. Essentially, there would be no ideological drifting; an ideology would be picked and that would become the standard.

documents1 wrote:

It's also possible that there's no ideal ideology that would emerge from that, and that it would essentially feedback-loop any ideology into the one that people want. Essentially, there would be no ideological drifting; an ideology would be picked and that would become the standard.

Possibly. Such a system wouldn't be meant to create a better system or anything, but rather to appease as many people as possible.

documents1 wrote:

@ajqrtz
interesting. What do you think the fate of democracy will be, though? There are a lot of theories suggesting democracies exist because investing in people is profitable: by giving people the vote, rulers run less overall risk of losing power, since mass revolt won't happen. This explains why countries that have oil and can't make as much profit off their people tend to be undemocratic, as their primary resource isn't human labor. But if the masses are no longer profitable because they cannot work, and the government now has the weapons to put down any revolt, can democracy even survive those conditions?

Democracy is doomed in every part of the world where the equality of the individual is not recognized as a cultural value. You cannot export democracy onto soil not prepared to accept the idea that God "created all men equal" and that you thus need to treat them as such -- meaning respecting their views and will as expressed at the ballot box. In addition, you cannot export capitalism, for much the same reason. For capitalism to work, the purchaser and the seller must be viewed as equals, or the seller will take advantage of the buyer any way he or she can. Significantly, the success of the Western democracies depends on the continuation of the Reformation emphasis on individual choice (or "individual conscience," as Luther and Calvin termed it). The Reformation put forth the idea that each person could choose his or her path and that we are predominantly equal to the task of doing so. It has taken centuries for this idea to play out, and it is still not 100%, but we've at least gotten to the point where we recognize when we fall short and are willing to admit where we have done so in the past. Most cultures never get to that point.

In the end democracy will not spread without the spreading of the foundation of democracy as expressed in the Judeo-Christian emphasis on the value of the individual.

AJ

Government may be defined as "the congregation of will." By this we mean that the will of the people is gathered, formally or informally, and a smaller subset of wills makes choices for the larger group. Democracy is the attempt to ensure that all wills have a voice by giving each exactly the same power: i.e., a single vote. If the system of government, whatever it is, wants to continue in power, it must find a way to keep itself in charge of the congregation of will. The form of the government is unimportant except as an expression of how this is done.

To ensure the government continues, as Thomas Jefferson noted, you have to provide two things: a sense that what you have will not be taken from you unjustly (security), and a sense that you can get more of whatever it is you value (opportunity). As long as these two are felt by the vast majority of the population there will be a stable government, meaning those who make the choices on behalf of everyone else will continue to do so.

Given this, the only thing to determine is whether the world will change enough to destabilize the current Western powers since at this point they form the bedrock of the current global system.

The crash of China will not do it. While their economy is the second largest, it is already overtaxed by corruption. Capitalism and democracy will not work there (see above comments on exporting these things). Thus, they will either have totalitarianism or chaos. Probably chaos in my opinion.

When China goes, so will most of SE Asia. India will not be far behind, for the same reasons as China; corruption is rampant in India too.

Eventually only the West will survive, and perhaps its immediate neighbors (Mexico is a BIG if).

I just finished a book, "The Accidental Superpower" by Peter Zeihan. In it the author lays out the foundations of power from a geographic and ideological basis and argues that the US will remain the sole superpower for the next 100 years. It's worth a read, though I think he pays too little attention to the general lack of corruption in our institutions (believe me, the level of corruption of our officials is unbelievably low compared to most of the world).

In the end then, the US and Europe will continue to develop and others will simply start to fall apart until most of the world will be fighting for the scraps of prosperity. And we will do it because those places have rested their governments on the value of the individual, who, in turn, respects the rights of others to speak and vote as he or she pleases.

AJ

I'm getting a bit into personal opinion, but there's a very good reason why I think the worry about AI is drastically overblown: there are physical limitations on how much performance you can squeeze out of any computer.

The first is the law of diminishing returns: yes, having more cores means you can carry out more tasks, but after splitting the work between about 50 cores the benefit starts dropping off. That's why we don't use a GPU as our CPU; while that would help with computer games, it's shit for everything else.

Another limitation is physical distance. Something like Skynet would be dumb as a rock, because while cloud computing does help overall, it's shit efficiency-wise, which also ties into the law of diminishing returns.

To put that simply: a 16-core CPU with each core running at 2 GHz performs better than 32 computers each with a single-core 1 GHz CPU.
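The diminishing-returns point has a standard formalization in Amdahl's law: if only a fraction p of a workload can run in parallel, the speedup on n cores is 1 / ((1 - p) + p/n), capped at 1/(1 - p) no matter how many cores you add. A minimal sketch (the 90% parallel fraction is an illustrative assumption, not a real measurement):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores when fraction p of the work parallelizes (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, piling on cores stops paying off fast:
for n in (2, 8, 50, 1000):
    print(n, round(amdahl_speedup(0.9, n), 2))
# Speedup approaches but never exceeds the ceiling of 1 / (1 - 0.9) = 10x.
```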

Another limitation is that there is only so much performance you can squeeze out of an architecture. We don't know how much you can physically squeeze out yet, because we haven't gotten close to it. However, with the AI chips that companies are working on, it appears you can squeeze out at least 1000x more performance than with traditional CPUs and such. With deep learning you can squeeze out a shit-ton more performance on top of that, but we don't yet know how much.

Basically, when we do make sapient AI, they'll probably live in specialized computers. The only reason for them to save their information on the internet would be to make a backup of themselves.

tldr;
If Skynet was real, cutting power to its central database would probably knock a good 70 IQ points off its intelligence.

Last edited Nov 30, 2017 at 02:21PM EST

First, it has been noted before that when we finally get to AI, we will find the Turing Test to be of little use. In that test you are trying to fool humans into believing that the computer with which they are interacting is actually a human. But one of the ways we know humans are human is by the mistakes they make. And any system that makes mistakes does not fulfill the goal of using AI in the first place -- to eliminate human mistakes.

Second, if by AI you ARE considering that it be humanized to the degree that a robot can take the place of a human without anyone being the wiser, well, consider the following.

1) Humans function in the future.
2) Because we do so, an AI system would have to anticipate rather than react.
3) AI systems can anticipate, but only on the smallest of scales, and not even close to the level of a human.
4) The anticipation would have to be in four dimensions of experience: spatial, social, temporal, and linguistic. These are the four dimensions of experience humans anticipate and form feedback loops in, to adjust their words and behaviors to achieve stasis.

By "humans function in the future" I mean that we anticipate what our environment will be about half a second before we arrive at that moment. This sense of our environment means we don't scan that environment except to note those things in it which cause dissonance.

AI systems, in their current iterations, must scan the entirety of their experience to find the dissonances. They do so in a linear fashion, and this takes too much time to process. The advantage is, of course, a higher degree of accuracy in finding the unexpected -- often reducible to what is not as it was the last time it was scanned. In the end an AI system takes too much time because it cannot "sense" its environment holistically. It must scan each part of its environment to determine which things are out of expected parameters, and then react. Humans experience their environment in a more holistic manner and, from that impression, attend only to those parts which are strikingly different from what was expected. AI systems can do this, but only if the area scanned is small enough to keep up with the rate of change.

Humans anticipate the future by expecting it to be a certain way in a number of respects. They expect the spatial relationships of things to be as anticipated, the temporal rate of change to be as anticipated, the linguistic structures created and received to be within grammatical and syntactical norms, and their relationships with those around them to continue in prescribed and appropriate channels. In all this, humans live about half a second in the future.

This future half-second is therefore what AI would need to anticipate, in all its complexity, in order to act like a human. But AI systems are functionally incapable of this on any sort of large scale because they are limited to scan-and-compare methods of interpretation. They are not intuitive and cannot be so, because their core processes are linear and systematic. And no amount of computing power will get them past this, since the limits of speed per flop are pretty close to as high as they can theoretically go.

This does not mean that AI will not ever occur under a very limited range of experience. The amount of computing power it takes to make a decent chess player, for instance, can be had, even if it is enormous. But to create a computer which can interact socially, spatially, temporally, and linguistically half a second into the future is, at this point, pretty much outside the reach of humankind.

AJ

@smith
Speaking of which, a better test to tell if an AI is sapient is slowly being worked on. Rather, it's fourteen tests wrapped up into one that could determine whether an AI has capacities similar to a human's.

Right now some AIs' comprehension of speech is close to a lot of people's; in many other fields, however, they are obviously lacking.

Regarding the whole AI thing, this tweet shows that the rise of AI could be the fall of the credence of pretty much all pictorial and video media. The legal system might need to be massively updated to fight the sudden accessibility of high-quality forgeries, not to mention the possible capabilities for censorship and unperson-ing. With this, and with how AI is going to suddenly shift the entire job market to require far more tech-related jobs, Ludditism is going to get much more popular in the future.

@UltimusDraco
"Ludditism is going to get much more popular in the future."
In a lot of countries in the world neo-ludditism is VERY popular to the point that a lot of countries refuse to use combine harvesters for harvesting wheat. Obviously a lot of countries can't afford that kind of machinery, but some countries use sickles to harvest because it's been their tradition for hundreds of years.

In the USA, on the other hand, neo-ludditism is highly unpopular.
"No, it's legit popular! See these millions of IP addresses totally not from other countries?"
In the USA even the Amish are starting to use smartphones.

The state of the luddite movement in the USA summarized in one image:

The neo-luddite and luddite movement in the USA is more or less on its last few heartbeats.

Last edited Dec 05, 2017 at 08:09PM EST

YourHigherBrainFunctions wrote:

@UltimusDraco
"Ludditism is going to get much more popular in the future."
In a lot of countries in the world neo-ludditism is VERY popular to the point that a lot of countries refuse to use combine harvesters for harvesting wheat. Obviously a lot of countries can't afford that kind of machinery, but some countries use sickles to harvest because it's been their tradition for hundreds of years.

In the USA, on the other hand, neo-ludditism is highly unpopular.
"No, it's legit popular! See these millions of IP addresses totally not from other countries?"
In the USA even the Amish are starting to use smartphones.

The state of the luddite movement in the USA summarized in one image:

The neo-luddite and luddite movement in the USA is more or less on its last few heartbeats.

Didn't know about all that, actually; thank you. But on the other subject, with AIs now making perfect forgeries of photos and videos and so on, there are many worrying implications. When this kind of technology becomes more widespread and common, people won't be able to look at a simple photo of the outdoors without thinking "Is this fake?" The way governments will respond to this will be… interesting, to say the least, as this kind of technology is basically a Stalinist's dream come true.

Just another idea about AI and the future of the world. As the cost of living rises, it is necessary for wages to also rise. As the production of AI systems increases, they become a commodity, to the point where the cost of an AI system and its programming becomes cheaper than using people. As one person noted, some governments will continue to use people, but leading the world in any industry is a measure of productivity, and AI will out-produce humans once the AI becomes capable at whatever the human does or was doing.

Now, since it is necessary to raise wages with the cost of living, the degree to which we raise them will predict how fast the lower rungs of jobs are replaced by AI systems. If AI then replaces most of the workers at the bottom, it will be necessary for the skill set of the average worker to be higher than it was before. Those who have the right skills will have jobs; those who do not will be on whatever system of welfare might be in place, or out on the streets.

All this means is that as wages are raised, AI will increasingly match the skills of those paid those wages, so that they need not be paid at all. And since the intelligence of the average person stays at one point, there will be an increase in the "unemployable." Eventually there will be more unemployed than employed, and the taxes on those employed will need to be astronomical. At that point it would seem natural for there to be a move to restrict the rights of the unemployable so that they can be relegated to some restricted part of society (i.e., either economically restricted or geographically so). This will result in half the population having the right to vote and the other half subsisting on the minimum needed to keep them alive (and this is the optimistic view, btw).
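The "astronomical taxes" claim is, at bottom, a dependency-ratio calculation: the per-worker tax burden scales with the ratio of unemployed to employed. A toy sketch (every number here is invented for illustration):

```python
# Per-worker tax needed to fund subsistence payments for the unemployed.
def tax_per_worker(employed: int, unemployed: int, subsistence: float) -> float:
    return subsistence * unemployed / employed

# 100M workers supporting 50M people at $12k/yr each...
print(tax_per_worker(100_000_000, 50_000_000, 12_000))   # 6000.0 per worker
# ...versus the inverted ratio once there are more unemployed than employed:
print(tax_per_worker(50_000_000, 100_000_000, 12_000))   # 24000.0 per worker
```

Flipping the ratio quadruples the per-worker burden, which is the arithmetic behind the pressure to cut the unemployable out of the system.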

There are two great impulses in Western politics which will speed this up. The "living wage" movement, and population control. I'll take them in order.

First, it is patently obvious, if you think about it, that raising the minimum wage by x percent will raise the entire wage scale by the same. Say a worker currently getting the "minimum wage" in the US ($7.85/hr in some places, as high as $15.00/hr in some cities, since local prevailing-wage standards take precedence over Federal standards) gets a $2.00 increase. The worker who, at the same place, is getting $8.85/hr is getting that $8.85/hr because he or she earned the increase by having better skills (or more longevity, at least). Since that second worker has already been recognized as being worth more than the "minimum wage" worker, do you think they should not continue to receive that "improved skills" bonus? Did they suddenly get fewer skills? No, they will want the same $2/hr increase. And the one currently making $2/hr more than that will want a wage adjusted to still be $2/hr over the new level. In other words, any increase in a worker's wages was given because of either increased skill or longevity, and whatever you raise the lowest rung by, the others must follow in step. A 30% raise in the minimum wage is a 30% raise in the entire wage scale.
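The ripple effect described above can be sketched numerically: a bump at the bottom propagates up the ladder whether workers defend a dollar gap or a percentage gap. The wage ladder below is invented; only the $7.85 and $2.00 figures come from the post. Working in cents keeps the arithmetic exact:

```python
# Hypothetical wage ladder in cents/hr: $7.85, $8.85, $10.00, $12.50.
wages = [785, 885, 1000, 1250]
bump = 200  # a $2.00/hr minimum-wage increase

# If each worker defends their dollar gap, the whole ladder shifts up $2:
flat = [w + bump for w in wages]
print(flat)  # [985, 1085, 1200, 1450]

# If they defend their percentage gap instead, the ~25% raise at the
# bottom (200/785) becomes a ~25% raise for the entire scale:
pct = bump / wages[0]
scaled = [w * (1 + pct) for w in wages]
print([round(w) for w in scaled])
```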

If you raise wages by 30%, you decrease the relative cost of developing an AI system to take the place of any worker whose old wage was BELOW the new minimum wage. If a company could develop an AI system to replace all the workers whose old wages were below the new minimum wage, they would not need to raise anybody's wages -- a HUGE benefit over their competitors. Or, if they could reduce the number of workers through automation enough to offset the increased salaries, that too would give them a huge benefit over their competition. Raising the minimum wage only results in a net loss of jobs in the long run, as the skill set of the lowest-tier workers can be matched by machines.

Second, if you have population control, you reduce the number of people available to work at that bottom level. It may appear counter-intuitive to think that more people would mean less AI, but if you let market forces go, the more people you have, the more competition for the lowest-paid jobs there would be -- thus keeping down, or even driving down, the cost of wages at the bottom. This would, in turn, keep AI at bay for the most part, and is a major reason some countries have not automated -- they have the population to do the work manually, even if it means that a good portion of their people live at near-starvation wages.

If the world remains relatively stable for another 50 years and the population stagnates, we will have two classes: the wealthy and the poor. The wealthy will feed the poor at subsistence levels, and the poor will have no say in the matter and will do nothing, as there will be nothing for them to do. AI will be doing everything it can.

AJ

I'm going to make a prediction:
I think a hundred years from now there is a 100% chance that there will be different subspecies of humans. The question is how many. Will there just be a couple of subspecies or a lot?

YourHigherBrainFunctions wrote:

I'm going to make a prediction:
I think a hundred years from now there is a 100% chance that there will be different subspecies of humans. The question is how many. Will there just be a couple of subspecies or a lot?

This will get VERY speculative, but I would say there would be as many as there need to be. For example, dogs: they have different breeds focused on the different abilities and uses a dog could have. Applying the same to humans, there would be a great focus on "intelligence" in its different forms. Overall I would say there would eventually be five species running around instead of hundreds, based on different types of mental talents: one with higher charisma and/or some kind of hivemind/connection to the web socially; a species that combines AI and human creativity and has genes related to higher artistry; the typical logical, intelligent transhuman archetype; a species that can think fast and move fast; and of course the jack of all trades, which would be the closest to modern humans. That is, of course, if we keep transhumanism egalitarian; if not, then it would be like GATTACA, where there is an oppressive caste system, but much less stable, because there would be constant rebellion like in Deus Ex.

YourHigherBrainFunctions wrote:

I'm going to make a prediction:
I think a hundred years from now there is a 100% chance that there will be different subspecies of humans. The question is how many. Will there just be a couple of subspecies or a lot?

Wait, aren't there already many different subspecies of humans? From Wikipedia:

"When geographically separate populations of a species exhibit recognizable phenotypic differences, biologists may identify these as separate subspecies"

"geographically separate populations of a species exhibiting recognizable phenotypic differences", you wouldn't argue that this doesn't apply to modern humans, right?

@memciki memosiki

Current scientific consensus is that there is one human subspecies, Homo sapiens sapiens. Subspecies can often be very arbitrary in biology, though; for example, lions, tigers, rhinos, elephants, etc. are given so many subspecies not because they are particularly diverse animals, but because more subspecies means each can be marketed for saving from extinction, thus saving more animals than otherwise. However, subspecies are supposed to be groups that tend to share certain genetic differences as well; we just don't have most animals sequenced, so it can be argued that subspecies in other animals are more arbitrary because we haven't sequenced their genomes and local population tendencies.

In addition, humans have never really been geographically isolated. In general you don't see a population spread into a new region and then all the humans that lived between their origin and the new region die off; humans stayed around in every place they moved to. There is some evidence the Americas were isolated, but the people there didn't diverge too far genetically, and there's evidence the Inuit and Chukchi in Alaska and Siberia have had contact since ancient times.

The reason humans are considered one subspecies is that the divisions in genetics don't split along group lines. If there were subspecies that tended to breed mostly with themselves, you'd expect most genes to diverge from the other subspecies through genetic drift. Instead, we find that genes split east-west, north-south, and along all sorts of other divisions. In addition, most of these divisions aren't clear-cut; they blur into each other as a genetic cline ( https://en.wikipedia.org/wiki/Cline_(biology) ). Thus, while humans do have different genetics in different regions, the human tendency to classify into groups just isn't useful in biology for human classification.

Last edited Dec 11, 2017 at 02:33PM EST

I don't know; I looked more into it, and it always comes down to "geography + phenotype + the ability to interbreed freely," and this is perfectly attributable to humans. Many sources point out that, in contrast to species, the designation of subspecies is always uncertain and subjective. The way I see it, the only reason different subspecies of humans are not recognized is politics.

NO! wrote:

This will get VERY speculative, but I would say there would be as many as there need to be. Take dogs, for example: they have different breeds focused on the different abilities and uses a dog could have. Applying the same to humans, there would be a great focus on "intelligence" in its different forms. Overall I would say there would eventually be five species running around instead of hundreds, based on different types of mental talents: higher charisma and/or some kind of hivemind/connection to the web socially; a species that combines AI and human creativity and has genes related to higher artistry; the typical logical, intelligent transhuman archetype; a species that can think fast and move fast; and of course the jack of all trades, which would be the closest to modern humans. That is of course if we keep transhumanism egalitarian; if not, it would be like GATTACA, where there is an oppressive caste system, but much less stable, because there would be constant rebellion like in Deus Ex.

Pretty much what I was thinking.

Chances are a lot of people would prefer to only get gene engineering or prosthetics if they need to, such as if they get cancer. Those people will probably look completely human and will probably share a lot of our genes.

Like you said, there probably will be a group of people who love cybernetics and the like. Some of them might go so far as to become a brain in a box, but chances are most would prefer to look humanoid.

There will probably be a subspecies of people who love gene engineering. Some of them will go nuts with it and do crazy shit, like the people who want to be furries.

There probably will be "artificial persons". When we develop intelligent ai chances are some of them will try and be more human like.

There will probably be AIs that don't care about trying to be more humanlike, and there probably will be some people who want to become full robot.

I do think someone will still "accidentally" make animal-human hybrids. Why? Some countries are massive assholes that don't even think their own people are human and treat them like animals. Asshole: "I made a super intelligent dog with thumbs to be used for labor, since you guys kept bitching at us for having extremely poor working conditions for human workers." Rest of the world: "WTF?!BBQ HOLY FUCKING SHIT, THAT IS IMMORAL!" Asshole: "I don't see what's so wrong about it. Next you're going to tell me that peasants aren't subhuman trash to be used for slave labor and starved to death." Rest of the world: "What the fuck is the matter with you?!"

Last edited Dec 11, 2017 at 03:34PM EST

Hey guys! I'd like to say that I think we humans have quite an exciting future ahead of us, though I think we should keep our expectations calm. I can see how innovations in genetic engineering will continue to revolutionize agriculture and medicine, not to mention the introduction of synthetic meats into the market. I do not see a dystopia with humanity fractured between different modified peoples. The biggest change for humanity, I think, would be the technological singularity plus generalized AI. I don't think mind uploading will be possible, but I agree that advancements in prosthetics to the point of cyborg-esque augmentations are probable. I sincerely believe the singularity would revolutionize this planet economically, commercially, politically, socially, and environmentally. If anyone cares, I'd be glad to go into that more.
Also, @memosiki, it is true that humanity can be split into thousands of genetically distinct populations. But these are called "demes," and there are so, so, SO many more of them than the 3-5 races/subspecies model that the race realists try to push (in biology, "race" and "subspecies" are sometimes, but not always, used interchangeably; this is why we say "one race, the human race," because we are all Homo sapiens sapiens). In population genetics we see that all demes, the true "races," are very impermanent, each having existed for only a few centuries. This is because populations constantly change due to genetic drift, emigration, and immigration. I have to stop here; I could go on forever, and I don't want to derail the thread any further. PM me if you have anything to say, please.

@MrBTheAdventurer I have always found the singularity fascinating (if terrifying), so how do you think the world would end up? More democratic, or a technocracy? A socialist utopia, or a post-cyberpunk capitalist world? I am guessing a good portion of the population will live permanently on the internet, maybe through VR or uploading. I am also guessing governments will try to integrate basic income with varying success, and that will be one of the main issues between the left and the right in the following years. I am kind of for basic income in the future, as in they should try it in a controlled way and see the results; social experiments can be important, and it is a potential solution.

boky0102 wrote:

Hey guys, if someone is interested in our future and has some time to watch a TV series, I would suggest you check out Black Mirror. That series shows the dark side of man's future. It makes you wonder!

You do know that show is satire right?
