How are we using AI at the Courier, and what are the future plans?

[Image: a diagram of a microchip with the letters "AI" in the center]

By Larry Felton Johnson

Artificial Intelligence is both controversial and fast-developing.

In my role as editor and publisher at the Cobb County Courier, I have to develop a clear set of policies on when AI should be used and when it should be avoided.

One thing is certain: artificial intelligence in business, industry, and media is here to stay. Denying its place in the future is like standing alongside an automobile in the 1890s and shouting, "Get a horse!" (Whether early motorists should have gotten a horse, or a bicycle, is a subject for another article.)

We can shout all we want, but AI isn’t going away. Businesses (including the Cobb County Courier) must learn about it, and we’d be foolish not to use it where it makes sense.

I’ve so far limited my use to things where I know I haven’t crossed ethical lines or created crappy content. I’ve primarily been using ChatGPT from OpenAI. It’s the dominant tool in AI at the moment. But I’ve also begun experimenting with Google’s NotebookLM.

Here are a few things I use artificial intelligence for daily:

  • Brainstorming headlines: I feed ChatGPT an article and ask for 20 proposed headlines. If one seems close but not quite usable, I rewrite it. This saves me a significant amount of time each week, and the headlines have been better.
  • Reformatting the lists of counties the National Weather Service includes with its alerts: alphabetizing the county names when the NWS doesn't, and separating them with commas rather than the dashes the NWS uses.
  • Turning technical definitions into plain language: another good example comes from the National Weather Service, whose website definitions are sometimes highly technical. I instruct ChatGPT, "Put this in plain English," then carefully compare the two versions to make sure they mean the same thing.
  • Brainstorming hashtags to use on social media.
  • Shortening paragraphs to use as article descriptions.
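For technically inclined readers: the county-list cleanup is simple enough that it could also be scripted rather than handed to a chatbot. Here's a minimal Python sketch of that idea (the sample county string is a hypothetical example in the NWS style, not an actual alert):

```python
def clean_county_list(raw: str) -> str:
    """Split a dash-separated county string, alphabetize it,
    and rejoin the names with commas."""
    counties = [c.strip() for c in raw.split("-") if c.strip()]
    return ", ".join(sorted(counties))

# Hypothetical example input:
print(clean_county_list("Fulton-Cobb-Cherokee-Bartow"))
# Bartow, Cherokee, Cobb, Fulton
```

A script like this handles the routine case; the appeal of an AI tool is that it copes with messier variations without any programming.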

Now, I’m ready to branch out and make heavier (but still careful) use of AI in my daily work. This doesn’t mean I’ll replace human reporters with AI. The reporters are the backbone of our coverage here, and always will be.

This regular column will highlight some of the experiments I'm running. Just because an experiment appears here doesn't mean I'll make it a regular practice; this is simply where I'll show readers what we're trying with AI.

Below are two examples.

A short podcast

This is a short AI-generated podcast created with Google’s NotebookLM.

All I did was feed it the article we ran on free COVID and flu vaccines at the Switzer Library, and here is the short podcast it created:

You can compare it to the information in our original article by following this link.

Overall, I think it did a good job. If I regularly incorporate this feature, I'll probably use it inside longer podcasts with short real-world interviews, and give the AI characters clever names that make it clear the voices are AI-generated.

Asking ChatGPT to write an explanatory article

I mentioned above that I’m exploring the ethical issues involved in the use of AI.

So I decided to ask an AI tool itself to write a 700-word essay on the ethical issues in the use of AI.

Here’s what ChatGPT created (the remainder of this article, including the Conclusion section, is the text created by ChatGPT):

The Ethical Issues and Controversies Surrounding Artificial Intelligence

Artificial intelligence (AI) is transforming industries, from healthcare and education to transportation and entertainment, bringing immense benefits. However, this rapid advancement raises complex ethical issues and controversies that society must address to ensure AI is developed and used responsibly. From concerns about bias and privacy to questions about accountability and job displacement, the ethical implications of AI technology demand careful consideration.

1. Algorithmic Bias and Discrimination

One of the most pressing ethical issues with AI is algorithmic bias. AI systems learn from data, and if that data reflects societal biases, the AI can perpetuate or even amplify them. For example, facial recognition software has been found to perform less accurately on individuals with darker skin tones, raising concerns about fairness and equity. Similarly, AI algorithms used in hiring processes have inadvertently disadvantaged certain groups based on gender or ethnicity.

The root cause often lies in the data used to train AI systems, which can reflect historical inequalities or biased societal structures. This has led to calls for transparency in AI development, rigorous testing for bias, and the inclusion of diverse datasets. Organizations must also adopt ethical guidelines to ensure fairness in AI applications.


2. Privacy Concerns

AI’s ability to analyze vast amounts of data raises significant privacy concerns. From smart assistants to social media platforms, AI systems often rely on personal data to function effectively. While this can enhance user experience, it also creates opportunities for misuse. For example, AI-powered surveillance systems can monitor individuals’ movements, raising fears of government overreach and erosion of civil liberties.

The rise of deep learning models capable of reconstructing personal details from anonymized data has further intensified privacy debates. Policymakers and technologists must strike a balance between leveraging AI for societal benefits and protecting individuals’ rights to privacy.


3. Lack of Accountability

Another ethical challenge is determining accountability for AI decisions, particularly in high-stakes situations. In areas like autonomous driving or medical diagnostics, the consequences of AI errors can be severe. However, attributing responsibility is often unclear. Should blame lie with the developers, the users, or the AI itself?

The concept of “black box” AI—where the decision-making process is so complex that it becomes opaque even to its creators—compounds this problem. Efforts to enhance explainability and transparency in AI are crucial, enabling users and regulators to understand how decisions are made and who is accountable when things go wrong.


4. Job Displacement and Economic Inequality

The automation enabled by AI poses significant risks to employment. Industries such as manufacturing, retail, and transportation are already experiencing shifts as machines take over repetitive or labor-intensive tasks. While AI can create new job opportunities, the transition is not always equitable, and workers in vulnerable sectors may face prolonged unemployment.

Furthermore, the benefits of AI tend to concentrate among a small group of companies and individuals, exacerbating economic inequality. Governments and businesses must explore strategies such as upskilling programs, universal basic income, or rethinking tax policies to address the societal impacts of AI-driven job displacement.


5. Ethical Use in Warfare

AI’s role in warfare presents one of the most contentious ethical debates. Autonomous weapons, often referred to as “killer robots,” raise profound moral questions. These systems can make life-and-death decisions without human intervention, potentially reducing accountability in warfare and lowering the threshold for conflict.

Opponents argue that AI-driven weapons systems lack the moral judgment required in combat and could lead to devastating consequences if misused or hacked. Efforts to regulate AI in military applications through international agreements have faced resistance, leaving a critical gap in governance.


6. Deepfakes and Misinformation

The ability of AI to generate hyper-realistic images, videos, and audio—commonly referred to as deepfakes—has sparked concerns about misinformation and social manipulation. Deepfakes can be used to create false narratives, influence public opinion, or undermine trust in legitimate media.

Governments, technology companies, and researchers are working to develop detection tools and establish regulations to combat the misuse of this technology. However, the rapid advancement of AI capabilities makes this an ongoing challenge.


7. Ethical Decision-Making in AI

AI systems are increasingly tasked with making ethical decisions, such as prioritizing patients in healthcare or making split-second choices in autonomous vehicles. This raises the question: whose moral framework should guide AI? Different cultures and societies have varying ethical standards, making it difficult to develop universally accepted AI norms.

Some argue that embedding ethical frameworks into AI requires diverse, multidisciplinary collaboration that includes ethicists, sociologists, and technologists. Ensuring inclusivity in AI design can help address these challenges and align AI systems with societal values.


8. Regulatory and Governance Challenges

The rapid pace of AI innovation often outstrips the development of regulatory frameworks, creating a gap in governance. While some countries have introduced AI ethics guidelines, global standards remain fragmented. The absence of comprehensive regulation increases the risk of misuse and unintended consequences.

International cooperation is critical to establish shared principles for ethical AI development. Frameworks like the European Union’s AI Act aim to address issues such as bias, transparency, and accountability, providing a model for others to follow.


Conclusion

AI’s potential to improve lives is immense, but its ethical challenges cannot be ignored. Addressing issues such as bias, privacy, accountability, and economic inequality requires a concerted effort from governments, businesses, and civil society. By prioritizing ethical considerations and fostering global collaboration, society can harness the transformative power of AI while minimizing its risks and ensuring its benefits are equitably distributed.
