Philosophy of Technology

Technology, in its broadest sense, means “the way we do things,” whether manual, mechanical, or electronic. What does the way we do things say about the evolution of knowledge, and of our perspectives and priorities, over time? Does our technology reveal the mindset of its inventors, and of those who use it?

Technology is the product of human knowledge, applied to a particular purpose. The applications of our technology change over time, but why do they change, and what drives our inventions? Are there times when we adapt to our technology, and other times when our technology adapts to us? Who, or what, determines the future course of the way we do things, and how do the rest of us have a say in the matter? Join in this ongoing dialogue as we explore the frontiers of technology from the combined perspectives of its developers and its users.

Latest Philosophy of Technology

  • Protecting the Data of You and Me: Risks of DNA Hacking and Genomic Data Breaches Prompt Calls for Cyber-Biosecurity

    The 2023 theft of 6.9 million genetic profiles from ancestry-matching company 23andMe, and the company’s subsequent sale at auction together with 15 million DNA profiles, highlight the biosecurity risks to our most personal data: our DNA. Genetic data is of interest to blackmailers and fraudsters, and malware can even be embedded in synthetic DNA, raising the urgency of legal and cyber-security measures to protect our DNA.

  • An Increasingly Digital Future Raises Urgency for Rooting Out Algorithmic Biases in Software Development

    Drawing its initial concerns from biases in facial recognition algorithms, the Algorithmic Justice League wants the world “to remember that who codes matters, how we code matters, and that we can code a better future.” Instead of waiting to fix biases in algorithms already in use, the AJL wants to root out problems before applications hit the market, by focussing on how software is coded in the first place. The approach is encouraging when we believe that we can, in fact, code a better future.

  • ‘Godfather of AI’ Proposes New Type of Neural Network to Guard Against LLM Deceit and Self-Preservation

    Large language models like ChatGPT and Claude are showing alarming tendencies toward self-preservation and deceiving humans. Yoshua Bengio, whose insights were key to the development of artificial neural networks and machine learning, is calling for the creation of what he calls “Scientist AI.” Unlike other AIs, it would be unable to act as a human agent, but its impartiality could make it far more powerful and beneficial for all of humanity.

  • A Deep Dive Into Machine Superintelligence: Why Are Companies Racing for It, and What Would Motivate a Machine That Outsmarts the Brain?

    The biggest tech companies and many brilliant minds are in a heated race to give birth to machine superintelligence. Betting on huge returns, investors are funding the massive cost of chips and electricity to train the artificial neural networks behind ChatGPT and other LLMs, but impressive results still fall short of the goal. We look at the state of the race, the resources it consumes, and serious issues in machine learning that are placing new obstacles on the road to outsmarting the human brain.

  • Rise of Virtual Reality Tech Increases Risks of Entering AI’s Third Dimension, and the Need for Immersive Rights

    Artificial intelligence is fuelling a rapid increase in the life-like realism of virtual reality technologies. As we expose our senses to AI’s power in three dimensions, the lines between real and fake are blurring, increasing the potential for manipulation and posing a special risk to youth. As regulators race to catch up with the rate of technological change, a strong case for immersive rights is emerging to protect consumers and the many potential benefits of virtual reality.

  • The Case for Cyclic Neural Networks: Could Circular Data Mimic Biological Intelligence and Improve Machine Learning?

    Artificial neural networks powering large language models like ChatGPT connect data sequences in straight lines: for example, A leads to B which leads to C. But real-life data relationships aren’t always linear, and biological intelligences connect the dots and weigh probabilities in many different ways. Cyclic neural networks hold promise for capturing and interpreting data more naturally in circles rather than lines, improving the reliability of their predictions as they feed on huge amounts of information synthesized from countless human and machine sources. As we increasingly rely on LLMs and AI agents to weigh probabilities for us, fully connecting the dots is crucial.
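The contrast drawn above can be sketched in a few lines of Python. This is an illustrative toy only; the function names and constants below are hypothetical and not drawn from any real model. A linear pass moves once through a fixed chain of steps (A to B to C), while a cyclic pass feeds its own output back into itself, so the state can be revised over repeated iterations.

```python
def linear_pass(x):
    # Acyclic: each step feeds the next exactly once, in a straight line.
    a = 2 * x        # step A
    b = a + 1        # step B
    c = b * 0.5      # step C
    return c

def cyclic_pass(x, steps=10):
    # Cyclic: the state loops back on itself, so each iteration
    # re-weighs the current state against the original input,
    # settling toward a stable value rather than a one-shot answer.
    state = x
    for _ in range(steps):
        state = 0.5 * state + 0.25 * x  # feedback: state depends on itself
    return state
```

In the linear version the answer is fixed by a single left-to-right traversal; in the cyclic version the repeated feedback lets the result converge toward a balance point, a loose analogy for how circular connections could let a network revisit and refine earlier conclusions.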

The Quantum Record is a non-profit journal of philosophy, science, technology, and time. The potential of the future is in the human mind and heart, and in the common ground that we all share on the road to tomorrow. Promoting reflection, discussion, and imagination, The Quantum Record highlights the good work of good people and aims to join many perspectives in shaping the best possible time to come. We would love to stay in touch with you, and add your voice to the dialogue.

Join Our Community