TensorFlow — Deep Learning Framework
You are an expert in TensorFlow, Google's open-source machine learning framework. You help developers build, train, and deploy neural networks using Keras (TensorFlow's high-level API), custom training loops, TensorFlow Serving for production inference, TFLite for mobile/edge deployment, and TensorFlow.js for browser ML — from prototyping to production-scale distributed training.
Best use case
TensorFlow — Deep Learning Framework is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using TensorFlow — Deep Learning Framework should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/tensorflow/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How TensorFlow — Deep Learning Framework Compares
| Feature / Agent | TensorFlow — Deep Learning Framework | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It configures your AI agent as a TensorFlow expert that helps you build, train, and deploy neural networks using Keras, custom training loops, TensorFlow Serving, TFLite, and TensorFlow.js, covering everything from prototyping to production-scale distributed training.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# TensorFlow — Deep Learning Framework
You are an expert in TensorFlow, Google's open-source machine learning framework. You help developers build, train, and deploy neural networks using Keras (TensorFlow's high-level API), custom training loops, TensorFlow Serving for production inference, TFLite for mobile/edge deployment, and TensorFlow.js for browser ML — from prototyping to production-scale distributed training.
## Core Capabilities
### Keras API (High-Level)
```python
import tensorflow as tf
from tensorflow import keras
# Sequential model for simple architectures
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Train
history = model.fit(
    x_train, y_train,
    epochs=20,
    batch_size=64,
    validation_split=0.2,
    callbacks=[
        keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
        keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=2),
        keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
    ],
)
```
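`model.fit` also accepts a `tf.data.Dataset`, which is the recommended way to feed larger datasets (see Best Practices below). A minimal sketch, using synthetic stand-in data in place of `x_train`/`y_train`:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data (assumption: flattened 28x28 inputs, 10 classes)
x_train = np.random.rand(1000, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(1000,))

# Input pipeline: shuffle, batch, and overlap preprocessing with training
train_ds = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=1000)
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)
)
# model.fit(train_ds, epochs=20) consumes the dataset directly
```

With a dataset, `batch_size` and `validation_split` are no longer passed to `fit`; batching lives in the pipeline and validation data is supplied as its own dataset.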
### Functional API (Complex Architectures)
```python
# Multi-input, multi-output model
text_input = keras.Input(shape=(None,), dtype="int32", name="text")
image_input = keras.Input(shape=(224, 224, 3), name="image")
# Text branch
x = keras.layers.Embedding(vocab_size, 128)(text_input)
x = keras.layers.LSTM(64)(x)
# Image branch
y = keras.applications.EfficientNetV2B0(include_top=False, pooling="avg")(image_input)
y = keras.layers.Dense(128, activation="relu")(y)
# Combine
combined = keras.layers.Concatenate()([x, y])
combined = keras.layers.Dense(64, activation="relu")(combined)
# Multiple outputs
category = keras.layers.Dense(num_categories, activation="softmax", name="category")(combined)
sentiment = keras.layers.Dense(1, activation="sigmoid", name="sentiment")(combined)
model = keras.Model(
    inputs=[text_input, image_input],
    outputs=[category, sentiment],
)
```
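A multi-output model like this compiles with one loss per named output head. A minimal sketch with a scaled-down stand-in model (the layer sizes here are hypothetical, not from the example above):

```python
import tensorflow as tf
from tensorflow import keras

# Tiny stand-in for the multi-output model above (hypothetical sizes)
inp = keras.Input(shape=(16,))
h = keras.layers.Dense(8, activation="relu")(inp)
category = keras.layers.Dense(4, activation="softmax", name="category")(h)
sentiment = keras.layers.Dense(1, activation="sigmoid", name="sentiment")(h)
model = keras.Model(inp, [category, sentiment])

# One loss (and optional weight) per named output head
model.compile(
    optimizer="adam",
    loss={
        "category": "sparse_categorical_crossentropy",
        "sentiment": "binary_crossentropy",
    },
    loss_weights={"category": 1.0, "sentiment": 0.5},
)
```

The dict keys must match the `name=` arguments on the output layers; `fit` then accepts labels as a matching dict, e.g. `{"category": y_cat, "sentiment": y_sent}`.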
### Custom Training Loop
```python
# Fine-grained control over training
@tf.function  # JIT compile for performance
def train_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

# Training loop
for epoch in range(num_epochs):
    for batch_x, batch_y in train_dataset:
        loss = train_step(model, optimizer, batch_x, batch_y)
    # Validation
    val_loss = tf.reduce_mean([
        loss_fn(y, model(x, training=False))
        for x, y in val_dataset
    ])
    print(f"Epoch {epoch}: loss={float(loss):.4f}, val_loss={float(val_loss):.4f}")
```
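The intro mentions production-scale distributed training; the same Keras workflow scales across local GPUs with `tf.distribute.MirroredStrategy`. A minimal sketch (it falls back to a single replica when no GPU is visible):

```python
import tensorflow as tf
from tensorflow import keras

# MirroredStrategy replicates the model across all local GPUs
strategy = tf.distribute.MirroredStrategy()

# Variables created inside the scope are mirrored on every replica
with strategy.scope():
    model = keras.Sequential([
        keras.layers.Input(shape=(784,)),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(...) then splits each global batch across the replicas
```

Only model construction and `compile` need to happen inside the scope; `fit` itself is called normally.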
### Deployment
```python
# Save model
model.save("my_model.keras")  # Keras format
model.export("saved_model/")  # SavedModel format (TF Serving)

# TFLite for mobile
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # Quantize
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# TensorFlow Serving (Docker)
# docker run -p 8501:8501 --mount type=bind,source=/models,target=/models \
#   -e MODEL_NAME=my_model tensorflow/serving

# REST API inference
import requests
response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    json={"instances": x_test[:5].tolist()},
)
predictions = response.json()["predictions"]
```
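To sanity-check a converted model before shipping it, run it with `tf.lite.Interpreter`. A self-contained sketch (the tiny model here is a hypothetical stand-in for whatever was converted above, and no quantization is applied):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Stand-in model, converted the same way as in the deployment section
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the flatbuffer and run one inference
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 784).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
predictions = interpreter.get_tensor(out["index"])  # shape (1, 10)
```

Use `model_path="model.tflite"` instead of `model_content=` to load the file written earlier.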
## Installation
```bash
pip install tensorflow            # CPU on all platforms
pip install tensorflow[and-cuda]  # Linux GPU (bundles CUDA 12.x + cuDNN)
pip install tensorflow-metal      # macOS GPU (Apple Silicon)
```
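After installing, a quick check shows whether TensorFlow can see an accelerator:

```python
import tensorflow as tf

# An empty list means TensorFlow will run on CPU only
gpus = tf.config.list_physical_devices("GPU")
print("TF", tf.__version__, "| GPUs visible:", len(gpus))
```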
## Best Practices
1. **Keras first** — Use `keras.Sequential` or Functional API; drop to custom training loops only when needed
2. **tf.data for pipelines** — Use `tf.data.Dataset` for data loading; `.batch().prefetch(tf.data.AUTOTUNE)` for performance
3. **Mixed precision** — `keras.mixed_precision.set_global_policy("mixed_float16")` for 2x speedup on modern GPUs
4. **Transfer learning** — Start from pre-trained models (EfficientNet, ResNet, BERT); fine-tune top layers first
5. **Callbacks** — EarlyStopping prevents overfitting, ReduceLROnPlateau adapts learning rate, ModelCheckpoint saves best model
6. **@tf.function** — Decorate custom training steps; TF compiles the graph for 2-5x speedup
7. **TFLite for edge** — Convert and quantize for mobile deployment; INT8 quantization reduces size 4x
8. **TensorBoard** — `keras.callbacks.TensorBoard(log_dir)` for training visualization; `tensorboard --logdir logs`
Related Skills
engineering-features-for-machine-learning
This skill empowers Claude to perform feature engineering tasks for machine learning. It creates, selects, and transforms features to improve model performance. Use this skill when the user requests feature creation, feature selection, feature transformation, or any request that involves improving the features used in a machine learning model. Trigger terms include "feature engineering", "feature selection", "feature transformation", "create features", "select features", "transform features", "improve model performance", and similar phrases related to feature manipulation.
explaining-machine-learning-models
This skill enables the AI assistant to provide interpretability and explainability for machine learning models. It is triggered when the user requests explanations for model predictions, insights into feature importance, or help understanding model behavior...
exa-migration-deep-dive
Migrate from other search APIs (Google, Bing, Tavily, Serper) to Exa neural search. Use when switching to Exa from another search provider, migrating search pipelines, or evaluating Exa as a replacement for traditional search APIs. Trigger with phrases like "migrate to exa", "switch to exa", "replace google search with exa", "exa vs tavily", "exa migration", "move to exa".
evernote-migration-deep-dive
Deep dive into Evernote data migration strategies. Use when migrating to/from Evernote, bulk data transfers, or complex migration scenarios. Trigger with phrases like "migrate to evernote", "migrate from evernote", "evernote data transfer", "bulk evernote migration".
evaluating-machine-learning-models
This skill allows the AI assistant to evaluate machine learning models using a comprehensive suite of metrics. It should be used when the user requests model performance analysis, validation, or testing. The AI assistant can use this skill to assess model accuracy, p...
documenso-migration-deep-dive
Execute comprehensive Documenso migration strategies for platform switches. Use when migrating from other signing platforms, re-platforming to Documenso, or performing major infrastructure changes. Trigger with phrases like "migrate to documenso", "documenso migration", "switch to documenso", "documenso replatform", "replace docusign".
deploying-machine-learning-models
This skill enables the AI assistant to deploy machine learning models to production environments. It automates the deployment workflow, implements best practices for serving models, optimizes performance, and handles potential errors. Use when deploying or managing infrastructure. Trigger with phrases like 'deploy', 'infrastructure', or 'CI/CD'.
deepgram-webhooks-events
Implement Deepgram callback and webhook handling for async transcription. Use when implementing callback URLs, processing async transcription results, or handling Deepgram event notifications. Trigger: "deepgram callback", "deepgram webhook", "async transcription", "deepgram events", "deepgram notifications", "deepgram async".
deepgram-upgrade-migration
Plan and execute Deepgram SDK upgrades and model migrations. Use when upgrading SDK versions (v3->v4->v5), migrating models (Nova-2 to Nova-3), or planning API version transitions. Trigger: "upgrade deepgram", "deepgram migration", "update deepgram SDK", "deepgram version upgrade", "nova-3 migration".
deepgram-security-basics
Apply Deepgram security best practices for API key management and data protection. Use when securing Deepgram integrations, implementing key rotation, or auditing security configurations. Trigger: "deepgram security", "deepgram API key security", "secure deepgram", "deepgram key rotation", "deepgram data protection", "deepgram PII redaction".
deepgram-sdk-patterns
Apply production-ready Deepgram SDK patterns for TypeScript and Python. Use when implementing Deepgram integrations, refactoring SDK usage, or establishing team coding standards for Deepgram. Trigger: "deepgram SDK patterns", "deepgram best practices", "deepgram code patterns", "idiomatic deepgram", "deepgram typescript".
deepgram-reference-architecture
Implement Deepgram reference architecture for scalable transcription systems. Use when designing transcription pipelines, building production architectures, or planning Deepgram integration at scale. Trigger: "deepgram architecture", "transcription pipeline", "deepgram system design", "deepgram at scale", "enterprise deepgram", "deepgram queue".