Volume 2 - 8/17, 8/18

Understanding SPACs: The High-Risk, High-Reward Path to Going Public

By: Christopher E. and Felipe Q. (Rye Country Day School AI Club - Rye, NY)


So, what on earth is a SPAC? A SPAC, or Special Purpose Acquisition Company, is typically a shell company, meaning it lacks business operations, revenue, or products. Sponsors and investors create the SPAC and file an initial registration statement (S-1) with the U.S. Securities and Exchange Commission (SEC). The SPAC then goes public through an IPO, listing its shares at approximately $10 each. For a maximum of 24 months, the SPAC can negotiate a merger with a private company that wants to go public. After the merger is complete, the company is no longer a shell company but rather a public company whose "shell" has been filled by the private company. Depending on the initial valuation of the merger, investors can make a lot of money (through an increased share price), or they can tank the company by withdrawing their investment and receiving a refund, or by holding their shares through the merger and shorting the company later if they expect the share price to fall.

Why might a company be interested in merging with or being acquired by a SPAC as opposed to a traditional IPO? There are several reasons, such as the potential for more efficiency, reduced exposure to market volatility, greater capital raised once public, and easier access to large corporations and high-profile investors. However, underlying risks surrounding SPACs provide another, more cautious angle on M&A. If a SPAC fails to merge with or buy a target company within the 18-24 month time limit, it disbands and is forced to reimburse the investors who bought its shares, leaving the founders down millions. Even if the SPAC does find a suitable company to fill its "shell", SPACs have historically been associated with scams, which leads to trust issues, a bad reputation, and a lower ROI for investors. For this reason, smaller companies with fewer resources tend to go public through SPACs, whereas larger, more well-known companies often opt for the traditional IPO route.

While the timeframe between the creation of a SPAC and its merger with a target company is relatively short (usually about 12-18 months), there are several steps needed to achieve this goal. By understanding the inner workings of SPACs, specifically the process of raising capital and taking a company public, it becomes easier to understand their effect on the stock market. The first step is to organize a small group of sponsors and investors to create the SPAC and file an initial registration statement (S-1) with the SEC. This small management team, typically composed of a CEO, CFO, and other board members, takes the SPAC public through an initial public offering (IPO) in order to raise capital, and all funds raised are placed in a trust. Once public, the SPAC is traded, and individuals, most commonly accredited investors, buy shares for around $10. The allure for investors is that they can trade SPAC shares without fear: if they believe the SPAC is going to fail, they can request a refund for each share plus any interest accumulated over that period. Now that the SPAC is public and has raised millions of dollars in capital, it has 18-24 months to negotiate an acquisition with a private company that's looking to go public. The SPAC is a shell company, often referred to as a "blank check company," and it has an incentive to "fill" that shell by going to companies and, to put it bluntly, saying, "We are a publicly traded company with access to millions of dollars upfront. You're looking to go public, but aren't big enough to do so on your own through a messy IPO. Let's make this quick and merge so that you can go public without the risk of an IPO, while getting the benefits of injecting shares and value into your company - depending on the initial valuation of our merger." Simple enough.

While SPACs may be the best option for a smaller, less connected company, larger companies will often avoid SPACs, since they don't see the same incentive that a smaller company does. After the merger is complete, the company is no longer a shell company but rather a public company whose "shell" has been filled by the private company. If the price of the merged company skyrockets, the company reaps the benefits. If not, then the SPAC has failed and investors are reimbursed their initial investments, leaving the SPAC's initial management team down millions.

While SPACs can mitigate the risk of a private company going public, once the merger is completed, the success of a SPAC is almost entirely dependent on the market and prone to investor volatility. For example, iLearningEngines' merger with Arrowroot Acquisition Corp. resulted in a $1.4 billion initial valuation; however, the stock began to decline rapidly due to allegations from Hindenburg Research LLC, and the company ultimately filed for bankruptcy a few months later (iLearningEngines Faces Nasdaq Delisting after Bankruptcy Filing, 2024).

Although it may seem that SPACs are all gloom and doom, some SPACs are very successful. For example, DraftKings' merger with Diamond Eagle Acquisition Corp. resulted in an initial expected valuation of $2.7 - $3.3 billion after going public under the ticker DKNG on April 24th, 2020 (Where Are They Now? – DraftKings|SPACInsider, n.d.). One year after the merger, DKNG had skyrocketed to a valuation of well over $6 billion. Since Diamond Eagle's IPO, the stock has seen a return of over 330% as of August 2025 (Where Are They Now? – DraftKings|SPACInsider, n.d.). In other words, the success of DraftKings and the failure of iLearningEngines reveal that SPACs are double-edged swords.

Hundreds of companies go public each year, each with significant repercussions for the stock market and millions of investors. By understanding the inner workings of Special Purpose Acquisition Companies, investors can make educated decisions on where and when to invest their money. As AI in finance continues to advance rapidly, new investing strategies, market analysis, and research tools, all powered by AI, will be at the forefront of stock market success. To stay ahead of the curve, investors have to understand the roots of basic M&A, specifically SPACs, so they can use the tools AI will create to succeed in the ever-evolving stock market.

Citations

iLearningEngines faces Nasdaq delisting after bankruptcy filing. (2024, December 27). Investing.com Canada; Investing.com. https://ca.investing.com/news/sec-filings/ilearningengines-faces-nasdaq-delisting-after-bankruptcy-filing-93CH-3768601

DKNG. (2024). Nasdaq.com. https://www.nasdaq.com/market-activity/stocks/dkng

Where Are They Now? – DraftKings|SPACInsider. (n.d.). New.spacinsider.com. https://www.spacinsider.com/news/nick-clayton/where-are-they-now-draftkings-sbtech

Glasner, J. (2023, May 18). A Bunch Of AI-Related Companies Are Going Public Via SPAC. Crunchbase News. https://news.crunchbase.com/public/ai-companies-spacs-ilearning/

Young, J. (2020, November 24). Special Purpose Acquisition Company (SPAC). Investopedia. https://www.investopedia.com/terms/s/spac.asp

ChatGPT Usage: Refining ideas, explaining ideas, fact checking the article, and editing the final article - August 9th, 2025, August 10th, 2025, August 17th, 2025

Grammarly Usage: Editing the article - August 17th, 2025

A Review on TensorFlow Using Existing Sources

By: Felipe Q. (Rye Country Day School AI Club - Rye, NY)


This document, prepared by the RCDS AI Club, compiles educational insights on deep learning techniques, primarily sourced from the Medium article Mastering Keras: Unleashing the Power of TensorFlow 2.0 with 3 Different Model Creation Techniques by Sanjay Dutta (2024). Significant sections of the content, including direct quotations and closely paraphrased material, are derived from this article and are cited throughout the text (e.g., (Dutta, 2024)). Direct quotes are enclosed in quotation marks, and paraphrased ideas are clearly attributed to the original source. Another reference includes official TensorFlow documentation (TensorFlow, 2025) for API usage. Explanations and rephrasings, labeled as "ChatGPT rephrased" or similar, were generated by OpenAI's ChatGPT (personal communication, August 7, 2025) to enhance clarity. Citation formatting and supplementary clarifications were assisted by OpenAI's GPT-4o-mini (personal communication, several dates). All sources are acknowledged to maintain transparency and academic integrity. These notes are intended solely for educational use and are shared under fair use policies. For questions or further information, please contact the RCDS AI Club. (generated by GPT-4o-mini)

Notes taken from (an excellent article on the TensorFlow Keras library):
https://medium.com/@sanjay_dutta/mastering-keras-unleashing-the-power-of-tensorflow-2-0-with-3-different-model-creation-techniques-e7cf9da3fb43

*Disclaimer: A lot of the notes in this document may have been directly quoted from the article provided above. A lot of the text might be the same or very similar. Any quotes are directly taken from the article provided.

APA Citation:
Dutta, S. (2024, April 3). Mastering Keras: Unleashing the power of TensorFlow 2.0 with 3 different model creation techniques. Medium. Retrieved from https://medium.com/@sanjay_dutta/mastering-keras-unleashing-the-power-of-tensorflow-2-0-with-3-different-model-creation-techniques-e7cf9da3fb43

OpenAI. (n.d.). ChatGPT (GPT-4o-mini) [Large language model]. https://chat.openai.com

ChatGPT was used for explaining, revising, and rephrasing.

Overview:

Keras offers a high-level approach to creating neural networks in Python. "High-level" means that a lot of the hard work - writing the functions that implement neural networks - has already been done for you. With Keras, you can create a variety of neural network architectures, from convolutional neural networks and recurrent neural networks to transformers. First, let's go over the Sequential API in the Keras library.

Sequential API:

Okay, so to create a model using the Sequential API, "you start by importing the necessary libraries and initializing the instance of the sequential class. Then, you can simply add layers to the model using the add method. Each layer can be configured with various parameters, such as the number of neurons, activation function, and input shape." (Dutta, 2024) Okay, now let's create the sample code (I'll be using different dimensions than those described in the article).

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(128, activation="relu", input_shape=(30,)))  # first hidden layer: 128 neurons, expects 30 input features
model.add(Dense(128, activation="relu"))                     # second hidden layer: 128 neurons
model.add(Dense(20, activation="softmax"))                   # output layer: probability distribution over 20 classes

(Dutta, 2024)

Okay, so it's very straightforward. This is a feedforward network. TensorFlow has an input layer that is not explicitly defined. Let's think about this using the information we learned in Notes Document III. Our input layer is kind of "pre-defined" by TensorFlow. The input layer, if you remember, has the shape m x n[l], where m is the number of training samples and n[l] is the number of nodes in the input layer. We then take the dot product of the input layer matrix with the weights matrix connecting the input layer and first hidden layer. That weights matrix has dimensions n[l] x n[l+1] (where n[l+1] = # of nodes in the first hidden layer and l+1 is the first hidden layer), and taking the dot product of the input layer and this weights matrix gives us a matrix with the dimensions of the first hidden layer, m x n[l+1]. This is written as A[l] * W[l+1] (*note we still need to add the bias vector). We then add the bias vector - b[l+1], with dimensions 1 x n[l+1] - to the first hidden layer matrix, giving us Z[l+1] = A[l]*W[l+1] + b[l+1]. The dimensions of Z[l+1] are the same as the dimensions of A[l] * W[l+1], since adding the bias vector does not alter the dimensions of A[l] * W[l+1] - only the values contained within it (represented as Z[l+1]). The input_shape argument basically tells the model to expect an input with 30 features and an unknown number of training samples (determined by the data). We then apply the activation function, ReLU, to Z[l+1], giving us A[l+1]. A[l+1] equals activation(Z[l+1]), which equals max(0, Z[l+1]) applied elementwise - this is just the vectorized form across all training samples. So yeah, that's basically the first layer.

Going back to the specifics of the model, we input the input data of shape m x 30, where m is the number of training samples. Then, the input data is multiplied (using the dot product) by the weights matrix connecting the input layer and first hidden layer (with dimensions 30 x 128), giving us a matrix with dimensions m x 128. We then add the bias vector to that product, giving us a matrix with dimensions m x 128, or Z[l+1]. We then apply the ReLU function to Z[l+1], giving us A[l+1], the output of the first hidden layer.

We then do basically the same thing in the second layer (just with a different input): our input data for the second hidden layer has shape m x 128, where m is the number of training samples. Then, the input data is multiplied (using the dot product) by the weights matrix connecting the first hidden layer and second hidden layer (with dimensions 128 x 128), giving us a matrix with dimensions m x 128. We then add the bias vector to that product, giving us a matrix with dimensions m x 128, or Z[l+2]. We then apply the ReLU activation function to Z[l+2], giving us A[l+2], the output of the second hidden layer. Then, we finish by calculating the probabilities using the softmax activation function - same idea, just transforming the dimensions once again.

Our input data for the output layer has shape m x 128, where m is the number of training samples. Then, the input data is multiplied (using the dot product) by the weights matrix connecting the second hidden layer and output layer (with dimensions 128 x 20), giving us a matrix with dimensions m x 20. We then add the bias vector to that product, giving us a matrix with dimensions m x 20, or Z[l+3]. We then apply the softmax activation function to Z[l+3], giving us A[l+3], the output of the neural network. The output is a probability distribution over 20 classes for each input in the batch, so our output is a matrix with dimensions m x 20.
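To make the shape bookkeeping concrete, here is a minimal NumPy sketch of the same forward pass (my addition, not from the article; the data is random and the small random weights are just placeholders standing in for real initialization). It only verifies that the dimensions line up the way described above.

import numpy as np

m = 5                                        # number of training samples (arbitrary for illustration)
A0 = np.random.rand(m, 30)                   # input matrix: m x 30

W1 = np.random.randn(30, 128) * 0.01         # weights connecting input layer and first hidden layer
b1 = np.zeros((1, 128))                      # bias vector for the first hidden layer
A1 = np.maximum(0, A0 @ W1 + b1)             # ReLU(Z[l+1]) -> m x 128

W2 = np.random.randn(128, 128) * 0.01        # weights connecting first and second hidden layers
b2 = np.zeros((1, 128))
A2 = np.maximum(0, A1 @ W2 + b2)             # ReLU(Z[l+2]) -> m x 128

W3 = np.random.randn(128, 20) * 0.01         # weights connecting second hidden layer and output layer
b3 = np.zeros((1, 20))
Z3 = A2 @ W3 + b3                            # m x 20
A3 = np.exp(Z3) / np.exp(Z3).sum(axis=1, keepdims=True)   # softmax -> probabilities, m x 20

print(A1.shape, A2.shape, A3.shape)          # (5, 128) (5, 128) (5, 20)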

Okay, so congratulations, you have effectively designed a feedforward neural network using TensorFlow Keras - much easier than designing a neural network using the methods described in Notes Document III. Using the Sequential API, you can create a variety of layers, including "Dense (fully connected), Conv2D (for convolution), MaxPooling2D (max pooling), Dropout (dropout regularization), and many more" (Dutta, 2024).

You still have to call the .fit() function to actually train the model (that is where backpropagation happens), but the Sequential API is good for defining the architecture of your neural network.
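As a rough sketch of what that training step might look like (my addition, not from the article - the optimizer, loss, and synthetic data here are just illustrative choices):

import numpy as np

# Choose an optimizer, a loss, and metrics before training.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",   # integer class labels with a softmax output
              metrics=["accuracy"])

# Synthetic data just to show the expected shapes: 200 samples, 30 features, 20 classes.
X_train = np.random.rand(200, 30)
y_train = np.random.randint(0, 20, size=(200,))

# fit() runs the forward pass, backpropagation, and weight updates batch by batch.
model.fit(X_train, y_train, epochs=5, batch_size=32)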

Functional API:

Although more complex than the Sequential API, the Functional API allows you to modify the architecture more easily and precisely.

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(30,))
x = Dense(128, activation='relu')(inputs)
x = Dense(128, activation='relu')(x)
outputs = Dense(20, activation='softmax')(x)
model = Model(inputs=inputs, outputs=outputs)

(Dutta, 2024)

I won't go through the technical explanation again since this is the exact same network as in the Sequential API example, just adapted to the Functional API syntax. But I will explain the code. Basically, we take the inputs and define that they are going to have shape (m, 30), written as (30, ), where m is the number of training samples. We then define the layers using the Functional API syntax: the model has an input layer with 30 neurons; a first hidden layer with 128 neurons, activation function ReLU, and input "inputs" (the input layer); a second hidden layer with 128 neurons, activation function ReLU, and input "x" (the output of the first hidden layer); and an output layer with 20 neurons, activation function softmax, and input "x" (the output of the second hidden layer). The output of the output layer is "outputs". Then, so that we can use functions like .fit() and .predict(), we state that model = Model(inputs=inputs, outputs=outputs), so that we can input "X_train" and output "y_train".
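To show the kind of extra flexibility mentioned above (this example is mine, not the article's), here is a small sketch where the raw input skips ahead and is concatenated with the second hidden layer's output - a wiring the Sequential API cannot express:

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

inputs = Input(shape=(30,))
x = Dense(128, activation='relu')(inputs)
x = Dense(128, activation='relu')(x)
x = Concatenate()([x, inputs])               # skip connection: reuse the raw inputs alongside the hidden features
outputs = Dense(20, activation='softmax')(x)

model = Model(inputs=inputs, outputs=outputs)
model.summary()                              # prints each layer with its output shape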

Subclassing API:

import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = Dense(128, activation='relu')
        self.dense2 = Dense(128, activation='relu')
        self.dense3 = Dense(30, activation='softmax')
    
    def call(self, inputs):
        x = self.dense1(inputs)
        x = self.dense2(x)
        return self.dense3(x)

model = MyModel()

(Dutta, 2024) - Modified 20 to 30

"In TensorFlow Keras, subclassing the Model class gives you full control over your model's architecture and computation. Instead of stacking layers sequentially or using the functional API, you create a new class that inherits from tf.keras.Model. Inside your subclass: You override the __init__ method to define your layers as instance variables (e.g., self.dense1 = Dense(...)). You override the call method to define the forward pass, i.e., how inputs flow through the layers you created. When you instantiate and use your subclass (e.g., model = MyModel()), calling model(inputs) triggers the call method and runs your custom computation. Subclassing is powerful because it allows you to implement dynamic architectures, control logic within the forward pass, and easily extend the base model class with your own methods and properties. See that the input layer shape is not defined anywhere - making the model more flexible" (Open AI, n.d.). ChatGPT rephrased.

Acknowledging the Effects of Artificial Intelligence in Our Daily Lives

By: Mark Choi


Artificial intelligence continues its rapid spread into our daily lives, and as it continues to develop, it is important to acknowledge what it brings to the table. AI introduces new ways to maximize efficiency and performance in companies, while also posing more ethical questions yet to be fully answered. This article mainly focuses on exploring one of the main topics in the current AI debate: environmental impact. Our goal is to see if there is any way to maximize efficiency with this powerful tool while minimizing the risks that come with its usage.

First, it is important to acknowledge how impactful artificial intelligence truly is on the world. The main culprit of technology's environmental toll is generative AI, which is AI designed to create new content from the information it is trained on. Some of the most common examples are ChatGPT and Gemini. MIT explains the repercussions that come with the increasing adoption of AI as it describes the various risks OpenAI's GPT-4 introduces. MIT states that running the machine learning model, as well as improving it through fine-tuning, depends on a large amount of electricity, which strains electric grids and increases greenhouse gas emissions. In addition, AI hardware puts pressure on the water used to cool vital components (e.g., GPUs) required to operate these systems. This water usage, as well as the transport of and demand for such hardware, poses risks to ecosystems.

To understand the scale of what is taking place, it is relevant to acknowledge the current investment in artificial intelligence by the industry and by large companies. MIT states that Amazon operates over 100 data centers globally, each featuring about 50,000 servers devoted to cloud services. MIT also states that in these facilities, the energy used for computing is seven to eight times greater than that of a typical computational workload. According to the UN Environment Programme, a typical 2 kg computer requires 800 kg of raw materials, and the microchips that AI relies on require rare earth elements.

It's important to address these issues as soon as possible; otherwise, we introduce irreversible damage to the earth we work so hard to improve. Measures have already been put in place to address the risks AI poses to the environment. Artificial intelligence ethics are mainly facilitated by the Department of Defense as well as the National Institute of Standards and Technology (NIST). The NIST AI Risk Management Framework's perspective on environmental impacts is as follows: the framework focuses on developing methods for smaller trained models using model distillation or compression. However, it openly acknowledges that there is no agreed-upon method to estimate impacts from generative AI. Some of its other suggestions include:

  • "Assess safety to physical environments when deploying GAI systems"
  • "Document anticipated environmental impacts of model development, maintenance, and deployment in product design decisions."
  • "Measure or estimate environmental impacts (e.g., energy and water consumption) for training, fine tuning, and deploying models: Verify tradeoffs between resources used at inference time versus additional resources required at training time."
  • "Verify effectiveness of carbon capture or offset programs for GAI training and applications, and address green-washing concerns"

The main concern is that these are all suggestions acting as temporary baselines for the AI problem. It does not appear that companies must follow the actions provided when developing new AI models. The Department of Defense also seems to be less focused on environmental issues than on concerns around the enhancement of AI.

UNEP introduces five solutions for fixing the state of AI energy consumption: establishing standardized official procedures enforced by countries, government regulations requiring disclosure of environmental impacts, more efficient algorithm implementation by companies along with recycling of water and components, renewable energy approaches, and the inclusion of AI-related environmental policies in broader regulations. While the entrance into a new era of human evolution is tempting, it is important that we enter this period without simply accepting the harsh ramifications it brings.

Good Quarter, Bad Outlook: Why Applied Materials' Stock Dropped 14%

By: Thomas Chi


Applied Materials reported a quarter that looked good at first, but the market punished the stock anyway. The company posted about $7.3 billion in revenue and adjusted earnings near $2.48 per share, beating analysts on both counts. Despite that, management warned that next quarter's sales and earnings will fall, and investors sliced the stock by roughly 14 percent. That reaction shows how much markets focus on the future, not just on last quarter's numbers.

The core problem is low visibility. China bought a lot of semiconductor equipment in previous quarters and is now "digesting" that capacity, which means fewer new orders for the near term. At the same time, several leading-edge customers are buying unevenly, making revenue appear lumpy from quarter to quarter. Because Applied makes the expensive machines that factories use to produce chips, its revenue is closely tied to the timing of big capital projects. A pause in orders from a single region or a few big customers can translate into a sharp, immediate impact on results.

It is also important to look at the difference between adjusted earnings and GAAP earnings. Adjusted numbers often remove one-time items and make results look smoother. That is useful for understanding ongoing operations, but GAAP shows profit according to standard rules and can sometimes be lower. When adjusted results look strong while GAAP lags, traders get nervous, and that can amplify stock moves. In AMAT's case the headline earnings gain felt less secure because the GAAP figure was noticeably smaller than the adjusted number.

Beyond the company's own numbers, there is a political layer that changes how cycles behave. Trade tensions, export controls, and government incentives can speed up spending in one period and slow it down in another. Those policies are not background noise; they are actual business variables. For example, a government subsidy program or a rule that restricts technology exports can push a region to buy equipment fast, and then stop buying for a while. That creates the boom and bust pattern that equipment makers feel first.

What should an investor take from this? First, read beyond the headlines. A beat does not guarantee a rally if future guidance is weak. Second, distinguish cyclical problems from structural changes. A cyclical pause is not the same as a long-term loss of demand. Third, match your point of view to your time horizon. For a long-term investor, a single quarter of soft orders might simply be noise and a potential buying opportunity. For a short-term trader, that same quarter is a real reason to reduce exposure.

Incoming Geopolitical Shift as U.S.-Russia Talks Raise Pressure on Energy and Defense Markets

By: Rehan


Defense and energy markets are drawing attention this week as European leaders work to support Ukraine in high-level talks with U.S. President Donald Trump. These talks could increase pressure on Kyiv to accept a peace agreement that favors Russia. Investors are also focused on the probability of a U.S. policy shift toward Russia. Such a move could include cooperation on Arctic energy projects. The Arctic holds an estimated 15% of the world's undiscovered oil and nearly 30% of its natural gas. Any agreement on joint exploration would signal a major geopolitical change and force Europe to increase defense spending at a faster pace.

Trump and Russian President Vladimir Putin met in Alaska over the weekend to attempt to reach a ceasefire for Ukraine, which they failed to do. After the summit, Trump said he wanted a quick peace settlement that Ukraine should accept. Ukrainian President Volodymyr Zelenskiy is now travelling to Washington for further talks. Leaders from Germany, the UK, and France are expected to join those discussions.

Economists and analysts believe U.S. support for Ukraine may decline. Holger Schmieding, chief economist at Berenberg, said Trump seems inclined to reduce or end U.S. backing for Kyiv. He noted that Putin has drawn Trump's attention to potential business opportunities. Schmieding suggested that the U.S. could lift sanctions on Russia and direct investment there instead. That would leave Europe to finance its own security, adding significant costs for NATO allies.

Markets have already been pricing in these risks. Since early 2022, European aerospace and defense stocks have surged. Shares in Italy's Leonardo are up more than 600%, while Germany's Rheinmetall has gained about 1,500%. The euro has also strengthened this year, rising 13% against the dollar to trade near $1.17. Energy markets face uncertainty. Brent crude fell more than 1% on Friday to about $66 per barrel, but analysts say prices still reflect expectations of a peace deal. Michael Hartnett warned that U.S.-Russia drilling in the Arctic could lead to an energy oversupply and push the market into a deep downturn. Trump has made clear he wants lower energy prices for American consumers.


