Author: Evan Fife

  • Stratechery on Why the US Investment in Intel is a Good Idea

    U.S. Intel – Stratechery

    Coming from a non-technical background, I really don’t like the idea of the US government investing in Intel. Thompson frames the decision differently, arguing that for the sake of national security Intel needs to survive. The company has plenty of problems and has been mismanaged and misdirected, but giving up on it won’t open the space for a new competitor to arrive. If anything, letting the market take its course would just relegate the US to chip (and therefore geopolitical) inferiority. I highly recommend reading this.

  • The Effect of AI on Entry Level Workers

    Full Report from Stanford
    WSJ Article


    This feels a little concerning as someone who very soon will be looking for an entry level job, but the findings reinforce my view that projects and exploiting opportunities to gain experience are as important as classroom learning.

  • How to Make Doing Hard Things Easier Than Scrolling YouTube

    Video Link – Newel of Knowledge

    Fairly interesting video about the psychology of doing hard things. His central point concerned the maximum amount of dopamine our brains can produce: opting for activities that produce dopamine quickly (unhealthy foods, TV, YouTube, skimming articles, etc.) reduces our ability to do activities that produce slow dopamine (exercise, deep work, real connection, etc.).

  • What if AI doesn’t get much better than this?

    https://calnewport.com/what-if-ai-doesnt-get-much-better-than-this/#more-16650

    Cal Newport is a great writer, and his take on why AI could be plateauing for the time being is insightful. AI is good at being general, but it is crowdsourcing its knowledge, so until we figure out a way to teach AI to reason and judge whether the knowledge it gets is true (i.e. until we figure out a new way to train AI), progress is likely to stagnate.

  • How to Use AI Deep Research

    https://substack.com/@torstenw/p-160819332

    An interesting read on best practices when dealing with AI research, treating AI more like a tool or an intern rather than a professional that implicitly knows everything. Learning how to develop good prompts is key to leveraging AI.

  • David Brooks on Audacity, AI, and the American Psyche

    https://conversationswithtyler.com/episodes/david-brooks-2/

    Super interesting read on the importance of literature, on what AI can and can’t do, and on mentorship and career growth in journalism.

  • The Road

    Cormac McCarthy’s The Road is the story of a father and his son as they struggle to survive in a post-apocalyptic wasteland. In reading it, I struggled to see the purpose in their struggle. The father and the son come close to death many times, and yet, after each escape they seem no closer to living. Their journey to salvation is a disappointment, ending the way things must: in the end. The father succeeds in protecting his son for as long as he lives, but no further than that.

    The story’s commentary on hope was the most compelling to me. Despite living in a world where a violent death was the only certainty, the father refused to give in to the dark and gloomy reality and future that surrounded him. He chose to pass on to his son the idea that goodness still existed, that life was worth living, and that hope, even if it was for the sake of hope, meant something. The world of the father and his son differs from our own only in duration. A hope that initially seems irrational becomes a type of the hope that we ourselves should have.

    I really enjoyed this book, especially the style. The prose was often dry and spartan, but every once in a while McCarthy would flex his symbolic muscles and write a passage that was strikingly beautiful. In the style came the biggest expression of hope: while through most of the book the prose bores and disheartens, occasionally a ray of hope would shine and brighten the pages that followed with meaning.

  • The Word Made Lifeless

    Link to Article on the Hedgehog Review

    This article gave a super interesting take on the dangers of AI and how, even though it seems to be mastering words, it is mastering the wrong kind. The author delves into a discussion of the Platonic idea of the “logos” and describes how the word in both ancient Greek and biblical sources tends to carry both a practical and a teleological meaning: practically, the logos can be used as rhetoric to reach an end; teleologically, we can use language to discover what the logos means.

    Throughout the piece the author describes how the act of choosing words allows us to give meaning to our thoughts. In the struggle to find the right words, we determine what our thoughts actually are and come into contact with the truth. Outsourcing our word choice to AI robs us of this experience, giving us words with no meaning and no emotion. In effect, the purpose and feeling we get through the Word is made lifeless.

  • Superintelligence and AI

    Mark Zuckerberg Says ‘Superintelligence’ Is Imminent. What Is It?

    Mark and many others have correctly identified this new era as just a continuation of history. But they often miss an important detail: history shows that newfound productivity is often a by-product of progress, not a driver of it. Progress is driven by the same thing that has always pushed us forward: curiosity.

    Every new era has been ushered into being by curious people looking for secrets that open our understanding of the world. When we get those answers, we ask more questions. It is safe to say that the most powerful use of AI will always be to expand curiosity. And the curious will inherit the world, because they always have.

    Superintelligence is an interesting concept, but I agree with this position. Creativity and curiosity are the forces that drive innovation, and the jury is still out on whether AI can truly be creative.

    The perspectives in this article are varied and interesting. A very good read.

  • Being Effectually Minded

    Goal Induced Blindness from Farnam Street

    Uncertainty is one of our least favorite conditions: we hate the feeling of not knowing what to do, and we dislike the frustration of having nothing concrete to work on. It leads us to believe that people succeed because they had the right goals from the beginning and developed the skills necessary to execute perfectly on those goals. A successful entrepreneur, in this view, succeeded because their idea was so good and their drive was so intense that nothing could stop them from getting the success they wanted. The truth is that the key to success is adaptability, or what Oliver Burkeman calls being “effectually minded”.

    Essentially, an effectually minded person has two main characteristics: they care about finding a goal that matches their abilities, and they practice positive catastrophizing. Rather than fixing on a goal and doing everything they can to become the person, or build the relationships, needed to reach it, they take the abilities and knowledge they already have and find a goal or idea that can be achieved with those. Rather than fixating on (or ignoring) what they lack in the face of a lofty goal, they start from what they have and find problems it can solve.

    The other half is what the Stoics call positive catastrophizing, which is a lot sunnier than it sounds. While much of the time we might think about the end goal and the rewards that will come with it, an effectually minded person will instead choose to think about the cost. Instead of asking themselves how big the payoff will be at every step, they ask themselves the cost of failure for each action, which lets them realize that often the cost of failure is much less than catastrophic.