feat(skills): add 7 CS academic writing and resume building skills#1488
f3r21 wants to merge 4 commits into affaan-m:main
Seven skills from the "CS academic writing and resume building" batch drifted from each other in output format, had large amounts of duplicated content between sibling validators, leaked third-party platform branding into descriptions, and were missing evals/schemas needed to regression-test them. This pass aligns them to the repo pattern documented in docs/SKILL-DEVELOPMENT-GUIDE.md.

CS paper pipeline (paper-structure-cs, abstract-methods-results-cs, sentence-clarity-cs, academic-final-review-cs):

- academic-final-review-cs converted from pipe-delimited to JSONL so all four CS skills share a parseable output format.
- Each SKILL.md now shows a concrete example object with every required field; duplicated sections trimmed (notably sentence clarity's repeated output block and paper-structure's heading-convention prose that duplicated the JSONL categories).
- Removed the underspecified `depends_on` field from paper-structure output; issues are ordered instead.
- Added cross-links so each skill points to its place in the structure -> sections -> sentences -> final-review pipeline.

Resume pipeline (harvard-resume-validator, kickass-resume-validator, resume-job-alignment):

- Removed all "Cowork" platform references from descriptions and bodies; generic resume skills no longer name a specific third-party tool.
- Dropped non-standard `compatibility:` frontmatter field; added `tags:` per the documented schema.
- Extracted shared content (action-verb bank, LaTeX ATS pitfalls, one-page formatting rules, weak-to-strong phrase map) into skills/_shared/resume-common.md; both validators now link there instead of inlining ~60% duplicated prose.
- Each validator keeps only what is specific to it and surfaces a "Differs from <sibling>" callout so the GPA-threshold and section-ordering disagreements are visible rather than buried.
- resume-job-alignment trimmed from 618 lines by linking validators rather than restating their rules; alignment-specific content (keyword extraction, scenarios, before/after) preserved.

Schemas and evals added to every skill:

- schema/output.schema.json for all 7 skills (JSON Schema draft-07, ajv-validated; doc examples validated against their schemas).
- evals/evals.json added for academic-final-review-cs (rewritten for JSONL), harvard-resume-validator, kickass-resume-validator, and resume-job-alignment; the three skills that already had evals are retained unchanged where possible.

Verification:

- All 14 JSON files parse; all 7 schemas compile under ajv.
- Every SKILL.md example object validates against its schema.
- `npx markdownlint-cli` passes on all 7 skills plus _shared under the repo's .markdownlint.json (fixed a pre-existing MD025 in abstract-methods-results-cs).
- Frontmatter on every skill contains only documented keys (name, description, origin, tags).

https://claude.ai/code/session_0189fb3hKKS8jmCwCMp4EFsX
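The shared JSONL contract the four CS skills now emit can be sketched with a minimal parser check. This is illustrative only: the field names below are hypothetical stand-ins, not the skills' actual schema fields, and the repo's real validation runs through ajv.

```javascript
// Sketch of the JSONL contract: every output line must parse as JSON and
// carry a fixed set of keys. Field names here are illustrative.
const sampleOutput = [
  '{"section":"abstract","issue":"vague objective","fix":"state the contribution in sentence 1"}',
  '{"section":"methods","issue":"missing detail","fix":"report hyperparameters"}',
].join('\n');

function validateJsonl(text, requiredKeys) {
  return text.split('\n').filter(Boolean).map((line) => {
    const obj = JSON.parse(line); // throws on any non-JSON line
    const missing = requiredKeys.filter((k) => !(k in obj));
    if (missing.length > 0) throw new Error(`missing keys: ${missing.join(', ')}`);
    return obj;
  });
}

const rows = validateJsonl(sampleOutput, ['section', 'issue', 'fix']);
console.log(rows.length); // 2
```

A pipe-delimited line (the old academic-final-review-cs format) fails the `JSON.parse` step immediately, which is what makes the converged format regression-testable.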
…nd CI

Adds two install modules (academic-writing, resume-toolkit) to the manifest so ./install.sh --profile full picks up the 7 CS academic and resume skills added in 48338e5. Also wires academic-writing into the research profile and adds matching components.

- Extend npm publish files allowlist with the 7 skill paths and skills/_shared/
- Teach validate-skills.js to skip underscore-prefixed dirs so skills/_shared/ is not required to ship a SKILL.md
- Sync README/AGENTS skill counts (183 to 190) in English and zh-CN
ECC bundle files are already tracked in this repository. Skipping generation of another bundle PR.
📝 Walkthrough

This PR expands the plugin's skill collection from 183 to 190 by introducing two new skill modules.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Greptile Summary

Adds 7 new skills in two clusters: 4 CS academic writing helpers and 3 resume validators.

Confidence Score: 5/5

Safe to merge; all findings are P2 style/naming suggestions with no correctness or schema-validity impact. No P0 or P1 findings. The three P2 comments are: (1) a naming inconsistency in top_3_fixes schema constraints, (2) an ambiguous LaTeX-check table in academic-final-review-cs SKILL.md, and (3) a missing inline comment in validate-skills.js. None affect runtime behavior, schema validation, or installation correctness.

Files to pay attention to: skills/academic-final-review-cs/SKILL.md (LaTeX table ambiguity) and skills/harvard-resume-validator/schema/output.schema.json + skills/kickass-resume-validator/schema/output.schema.json (top_3_fixes naming vs. minItems constraint)
Reviews (1): Last reviewed commit: "feat(install): register CS academic and ..."
If the paper is written in LaTeX, additionally check:

| Item | What to verify |
|------|----------------|
| No overfull/underfull boxes | Compile with `pdflatex` and check `.log` for warnings; fix line breaks |
| No widow/orphan lines | Last line of paragraph appears alone; first line appears alone at page break |
| Float placement | Figures/tables appear reasonably close to first citation (same page or next) |
| Bibliography style matches | `.bst` file (plain, acm, ieeetr, etc.) matches venue requirements |
| Package conflicts | No conflicting packages (e.g., both `geometry` and `fullpage`); check console |
| PDF metadata | PDF title, author, and keywords match document content |
LaTeX-specific items not representable in schema output
The LaTeX-specific checks table (lines 134–145) lists 6 items (No overfull/underfull boxes, No widow/orphan lines, Float placement, Bibliography style matches, Package conflicts, PDF metadata) in the same table format as the canonical checklist, but none of them appear in the item enum of schema/output.schema.json. The "Output Requirements" section further says "Do NOT add custom items; stick to the canonical list."
A future maintainer or a model reading this section in isolation could reasonably conclude these items should appear as output objects — but any attempt to emit them would produce schema-invalid JSONL. A comment or explicit note (e.g., "Use these to inform guidance on the Font and margins and Headings consistent canonical items — do not emit them as separate JSONL lines") would prevent the ambiguity.
```json
"type": "array",
"items": { "type": "string" },
"minItems": 1,
"maxItems": 3
```
top_3_fixes name implies three items but schema allows 1–3
The field is named top_3_fixes, but "minItems": 1, "maxItems": 3 means it can legally contain just one or two items. For a resume with very few issues, an LLM might return a single-item array and be fully schema-valid, which clashes with the name and the SKILL.md narrative ("Three top fixes"). The same constraint exists in kickass-resume-validator/schema/output.schema.json. Consider renaming to top_fixes (and updating the SKILL.md prose) or locking it to "minItems": 3, "maxItems": 3.
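The gap between the field name and the constraint is easy to see with a hand-rolled bounds check. This is a sketch of the draft-07 `minItems`/`maxItems` semantics only, not ajv itself:

```javascript
// Why a one-element top_3_fixes is schema-valid today: minItems: 1 only
// requires the array to be non-empty.
function checkArrayBounds(arr, { minItems, maxItems }) {
  return Array.isArray(arr) && arr.length >= minItems && arr.length <= maxItems;
}

const loose = { minItems: 1, maxItems: 3 };  // current constraint
const strict = { minItems: 3, maxItems: 3 }; // proposed lock

console.log(checkArrayBounds(['tighten summary'], loose));  // true — passes today
console.log(checkArrayBounds(['tighten summary'], strict)); // false — rejected if locked
```

Renaming to top_fixes keeps the loose constraint honest; locking to exactly 3 keeps the name honest. Either resolves the mismatch.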
```js
let validCount = 0;

for (const dir of dirs) {
  if (dir.startsWith('_')) continue;
```
Underscore-skip is a global convention with no documentation
```diff
-  if (dir.startsWith('_')) continue;
+  if (dir.startsWith('_')) continue; // skip shared/helper dirs (not skills themselves)
```
The single-line addition silently extends the _ prefix to mean "not a skill, skip validation" for any future directory. Adding an inline comment makes the convention explicit so future contributors don't accidentally name a real skill _my-skill/ and wonder why CI never validates it.
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
Actionable comments posted: 5
🧹 Nitpick comments (3)
scripts/ci/validate-skills.js (1)

**24-26: Keep `_shared` exempt from `SKILL.md`, but still validate its contents.**

This blanket skip means `skills/_shared/resume-common.md` can be deleted or emptied while dependent skills still pass CI. Add a lightweight shared-resource check before `continue`, or restrict the skip to known shared directories and verify expected files are readable/non-empty.

🧪 Proposed validation guard for shared skill resources

```diff
 for (const dir of dirs) {
-  if (dir.startsWith('_')) continue;
+  if (dir.startsWith('_')) {
+    const sharedDir = path.join(SKILLS_DIR, dir);
+    const markdownFiles = fs
+      .readdirSync(sharedDir, { withFileTypes: true })
+      .filter(e => e.isFile() && e.name.endsWith('.md'));
+
+    if (markdownFiles.length === 0) {
+      console.error(`ERROR: ${dir}/ - Shared directory has no markdown resources`);
+      hasErrors = true;
+      continue;
+    }
+
+    for (const file of markdownFiles) {
+      const filePath = path.join(sharedDir, file.name);
+      const content = fs.readFileSync(filePath, 'utf-8');
+      if (content.trim().length === 0) {
+        console.error(`ERROR: ${dir}/${file.name} - Empty file`);
+        hasErrors = true;
+      }
+    }
+    continue;
+  }
   const skillMd = path.join(SKILLS_DIR, dir, 'SKILL.md');
```

Based on learnings, place curated skills in the `skills/` directory; generated/imported skills go under `~/.claude/skills/`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@scripts/ci/validate-skills.js` around lines 24-26: the loop that skips directories starting with '_' currently allows skills/_shared to be ignored entirely. Before the blanket "continue", special-case the "_shared" folder: when dir === '_shared', perform a lightweight validation (e.g., check that expected shared files such as "resume-common.md" exist, are readable, and non-empty under path.join(SKILLS_DIR, '_shared', ...)); if those checks fail, throw or mark CI failure, otherwise skip SKILL.md validation for _shared. Other underscore-prefixed dirs should continue to be skipped as before.

skills/kickass-resume-validator/schema/output.schema.json (1)
**19-27: Harden required string fields with `minLength: 1`.**

`file_analyzed`, `alignment_to_kickass`, `ats_compliance`, and `summary` are required but can still be empty strings.

Proposed hardening

```diff
-    "file_analyzed": { "type": "string" },
+    "file_analyzed": { "type": "string", "minLength": 1 },
 @@
-    "alignment_to_kickass": { "type": "string" },
-    "ats_compliance": { "type": "string" },
-    "summary": { "type": "string" },
+    "alignment_to_kickass": { "type": "string", "minLength": 1 },
+    "ats_compliance": { "type": "string", "minLength": 1 },
+    "summary": { "type": "string", "minLength": 1 },
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/kickass-resume-validator/schema/output.schema.json` around lines 19-27: the schema allows required string fields to be empty; add "minLength": 1 to the string definitions for file_analyzed, alignment_to_kickass, ats_compliance, and summary so these fields cannot be empty strings, while leaving existing types/enums intact and ensuring the overall JSON Schema remains valid.

skills/harvard-resume-validator/schema/output.schema.json (1)
**10-12: Add `minLength: 1` to required text fields for parity and quality.**

`file_analyzed`, `alignment_to_harvard`, and `summary` should not validate as empty strings.

Proposed hardening

```diff
-    "file_analyzed": { "type": "string" },
-    "alignment_to_harvard": { "type": "string" },
-    "summary": { "type": "string" },
+    "file_analyzed": { "type": "string", "minLength": 1 },
+    "alignment_to_harvard": { "type": "string", "minLength": 1 },
+    "summary": { "type": "string", "minLength": 1 },
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/harvard-resume-validator/schema/output.schema.json` around lines 10-12: the schema currently allows empty strings for the text fields; add "minLength": 1 alongside "type": "string" to "file_analyzed", "alignment_to_harvard", and "summary" so they fail validation for empty strings.
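The empty-string loophole both schema nitpicks describe can be demonstrated with a minimal checker. This is a hand-rolled sketch of the `minLength` semantics, not ajv itself; the field names follow the harvard validator's schema:

```javascript
// Without minLength, a required string field only has to exist and be a
// string — "" satisfies both. minLength: 1 closes that gap.
function validateStrings(obj, fields, { minLength = 0 } = {}) {
  return fields.every((f) => typeof obj[f] === 'string' && obj[f].length >= minLength);
}

const vacuous = { file_analyzed: '', alignment_to_harvard: '', summary: '' };
const fields = ['file_analyzed', 'alignment_to_harvard', 'summary'];

console.log(validateStrings(vacuous, fields));                   // true — passes today
console.log(validateStrings(vacuous, fields, { minLength: 1 })); // false — rejected once hardened
```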
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 1331: The Skills table row currently shows Codex with "10 (native
format)" which is inconsistent with the Codex section above reporting 30 skills;
update the table cell in the row string "| **Skills** | 190 | Shared | 10
(native format) | 37 |" to read "30 (native format)" so the Codex skills count
matches the earlier "Codex" section.
In `@skills/abstract-methods-results-cs/schema/output.schema.json`:
- Around line 18-32: The schema currently allows any problem_type enum
regardless of section; add JSON Schema conditional rules using if/then that
check the document's section (e.g., if: { properties: { section: { const:
"introduction" } }, required: ["section"] }) and in each then: restrict the
problem_type enum to the allowed subset for that section (reference the existing
"problem_type" enum values like "vague_objective","missing_context", etc.),
repeating one if/then per section to enforce the documented mappings; ensure the
top-level schema keeps the original "problem_type" definition but narrow it
inside the respective then blocks so mismatched section/category combinations
are rejected.
In `@skills/academic-final-review-cs/SKILL.md`:
- Around line 250-256: Update the "**Binary thinking**" bullet to remove the
claim that items are strictly binary and instead explain the three allowed
evaluator states (PASS/FAIL/WARN) and how they map to presence/quality (e.g.,
PASS = present and acceptable, WARN = present but needs improvement, FAIL =
missing or blocking). Locate the "**Binary thinking**" bullet in SKILL.md and
replace its text with a concise rule that prevents evaluator drift by defining
the three states and giving one-line guidance on when to use each.
In `@skills/resume-job-alignment/schema/output.schema.json`:
- Around line 19-82: The schema currently allows empty strings/arrays (e.g.,
"job_title", "resume_analyzed", "overall_alignment", the properties inside
"alignment_breakdown", and array fields "key_matches", "gaps",
"tailoring_suggestions", "priority_roadmap"), which permits vacuous outputs;
update those required string properties (job_title, resume_analyzed,
overall_alignment, alignment_breakdown.required_qualifications, nice_to_have,
technical_skills, soft_skills, domain_knowledge, and all string props inside gap
items and tailoring_suggestions items) to include "minLength": 1, and add
"minItems": 1 to arrays that must return at least one element (key_matches,
gaps, tailoring_suggestions, priority_roadmap) so the validator rejects
empty/blank results while keeping existing "additionalProperties": false and the
"priority" enum unchanged.
In `@skills/sentence-clarity-cs/evals/evals.json`:
- Around line 6-7: The expected_output entry mistakenly asserts "47+ words" for
the first sentence length; update the "expected_output" value in the evals.json
block so it no longer contains the inaccurate numeric count—either replace "47+
words → shortened" with a non-numeric descriptor like "long sentence →
shortened" or compute and insert the correct word count, and ensure the string
still references the other checks (weak_verb, ambiguous_pronoun) so the keys
"prompt" and "expected_output" remain consistent.
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a043fa95-f601-469d-9f95-c02d41b56bb3
📒 Files selected for processing (32)

- AGENTS.md
- README.md
- README.zh-CN.md
- docs/zh-CN/AGENTS.md
- docs/zh-CN/README.md
- manifests/install-components.json
- manifests/install-modules.json
- manifests/install-profiles.json
- package.json
- scripts/ci/validate-skills.js
- skills/_shared/resume-common.md
- skills/abstract-methods-results-cs/SKILL.md
- skills/abstract-methods-results-cs/evals/evals.json
- skills/abstract-methods-results-cs/schema/output.schema.json
- skills/academic-final-review-cs/SKILL.md
- skills/academic-final-review-cs/evals/evals.json
- skills/academic-final-review-cs/schema/output.schema.json
- skills/harvard-resume-validator/SKILL.md
- skills/harvard-resume-validator/evals/evals.json
- skills/harvard-resume-validator/schema/output.schema.json
- skills/kickass-resume-validator/SKILL.md
- skills/kickass-resume-validator/evals/evals.json
- skills/kickass-resume-validator/schema/output.schema.json
- skills/paper-structure-cs/SKILL.md
- skills/paper-structure-cs/evals/evals.json
- skills/paper-structure-cs/schema/output.schema.json
- skills/resume-job-alignment/SKILL.md
- skills/resume-job-alignment/evals/evals.json
- skills/resume-job-alignment/schema/output.schema.json
- skills/sentence-clarity-cs/SKILL.md
- skills/sentence-clarity-cs/evals/evals.json
- skills/sentence-clarity-cs/schema/output.schema.json
```diff
 | **Agents** | 48 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
 | **Commands** | 79 | Shared | Instruction-based | 31 |
-| **Skills** | 183 | Shared | 10 (native format) | 37 |
+| **Skills** | 190 | Shared | 10 (native format) | 37 |
```
Keep the Codex skills count consistent in this row.
Line 1331 still says Codex has 10 (native format) skills, but the Codex section above reports 30 skills at Line 1135. Since this row is already being updated, align the Codex count here too.
📝 Proposed docs fix

```diff
-| **Skills** | 190 | Shared | 10 (native format) | 37 |
+| **Skills** | 190 | Shared | 30 (native format) | 37 |
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```markdown
| **Skills** | 190 | Shared | 30 (native format) | 37 |
```
```json
"problem_type": {
  "type": "string",
  "enum": [
    "vague_objective",
    "missing_context",
    "unsupported_claim",
    "passive_overuse",
    "incomplete_description",
    "reproducibility_gap",
    "missing_detail",
    "undefined_terms",
    "premature_conclusion",
    "incomplete_data"
  ]
},
```
Bind problem_type to section with conditional schema rules.
Right now, section/category mismatches are schema-valid. Add if/then constraints so each section only permits its documented categories.
Proposed schema tightening

```diff
   "properties": {
 @@
     "problem_type": {
       "type": "string",
       "enum": [
         "vague_objective",
         "missing_context",
         "unsupported_claim",
         "passive_overuse",
         "incomplete_description",
         "reproducibility_gap",
         "missing_detail",
         "undefined_terms",
         "premature_conclusion",
         "incomplete_data"
       ]
     },
 @@
-  }
+  },
+  "allOf": [
+    {
+      "if": { "properties": { "section": { "const": "abstract" } } },
+      "then": {
+        "properties": {
+          "problem_type": {
+            "enum": ["vague_objective", "missing_context", "unsupported_claim", "passive_overuse"]
+          }
+        }
+      }
+    },
+    {
+      "if": { "properties": { "section": { "const": "methods" } } },
+      "then": {
+        "properties": {
+          "problem_type": {
+            "enum": ["incomplete_description", "reproducibility_gap", "passive_overuse", "missing_detail", "undefined_terms"]
+          }
+        }
+      }
+    },
+    {
+      "if": { "properties": { "section": { "const": "results" } } },
+      "then": {
+        "properties": {
+          "problem_type": {
+            "enum": ["premature_conclusion", "unsupported_claim", "passive_overuse", "missing_context", "incomplete_data"]
+          }
+        }
+      }
+    }
+  ]
 }
```
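The effect of binding problem_type to section can be illustrated with a hand-rolled lookup that mirrors the suggested allOf branches. This is illustrative only; the real enforcement would live in the schema's if/then rules and run under ajv:

```javascript
// Mirror of the proposed per-section allowlists: a finding is valid only
// when its problem_type belongs to its section's documented subset.
const allowed = {
  abstract: ['vague_objective', 'missing_context', 'unsupported_claim', 'passive_overuse'],
  methods: ['incomplete_description', 'reproducibility_gap', 'passive_overuse', 'missing_detail', 'undefined_terms'],
  results: ['premature_conclusion', 'unsupported_claim', 'passive_overuse', 'missing_context', 'incomplete_data'],
};

function sectionAllowsProblem(issue) {
  const subset = allowed[issue.section];
  return Boolean(subset) && subset.includes(issue.problem_type);
}

console.log(sectionAllowsProblem({ section: 'abstract', problem_type: 'vague_objective' }));     // true
console.log(sectionAllowsProblem({ section: 'abstract', problem_type: 'reproducibility_gap' })); // false — the mismatch the allOf rules would reject
```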
- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.
- **Actionability**: Guidance must tell author exactly what to fix (line numbers, specific changes)
- **Completeness**: Perform the full checklist; don't skip items
- **Pre-submission focus**: This is the final check before sending to conference/journal; be thorough
- **Venue awareness**: Ask about target venue if not obvious; tailor checks (ACM vs. IEEE vs. arXiv)
- **Severity reporting**: Flag FAIL for blocking issues (missing sections, formatting violations); WARN for improvements (vague captions, minor inconsistencies)
Resolve checklist-state contradiction (binary vs PASS/FAIL/WARN).
This section says each item is binary, but the contract explicitly allows three states. Please reword this to avoid evaluator drift.
Suggested wording update

```diff
-- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.
+- **Deterministic statusing**: Assign exactly one status per item (`PASS`, `FAIL`, or `WARN`) based on clear evidence; avoid ambiguous judgments.
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```markdown
- **Deterministic statusing**: Assign exactly one status per item (`PASS`, `FAIL`, or `WARN`) based on clear evidence; avoid ambiguous judgments.
- **Actionability**: Guidance must tell author exactly what to fix (line numbers, specific changes)
- **Completeness**: Perform the full checklist; don't skip items
- **Pre-submission focus**: This is the final check before sending to conference/journal; be thorough
- **Venue awareness**: Ask about target venue if not obvious; tailor checks (ACM vs. IEEE vs. arXiv)
- **Severity reporting**: Flag FAIL for blocking issues (missing sections, formatting violations); WARN for improvements (vague captions, minor inconsistencies)
```
```json
"job_title": { "type": "string" },
"resume_analyzed": { "type": "string" },
"overall_alignment": { "type": "string" },
"alignment_breakdown": {
  "type": "object",
  "required": [
    "required_qualifications",
    "nice_to_have",
    "technical_skills",
    "soft_skills",
    "domain_knowledge"
  ],
  "additionalProperties": false,
  "properties": {
    "required_qualifications": { "type": "string" },
    "nice_to_have": { "type": "string" },
    "technical_skills": { "type": "string" },
    "soft_skills": { "type": "string" },
    "domain_knowledge": { "type": "string" }
  }
},
"key_matches": {
  "type": "array",
  "items": { "type": "string" }
},
"gaps": {
  "type": "array",
  "items": {
    "type": "object",
    "required": ["requirement", "your_status", "priority", "suggestion"],
    "additionalProperties": false,
    "properties": {
      "requirement": { "type": "string" },
      "your_status": { "type": "string" },
      "priority": { "type": "string", "enum": ["high", "medium", "low"] },
      "suggestion": { "type": "string" }
    }
  }
},
"tailoring_suggestions": {
  "type": "array",
  "items": {
    "type": "object",
    "required": [
      "section",
      "current_bullet",
      "job_focus",
      "suggested_rewrite",
      "why_better"
    ],
    "additionalProperties": false,
    "properties": {
      "section": { "type": "string" },
      "current_bullet": { "type": "string" },
      "job_focus": { "type": "string" },
      "suggested_rewrite": { "type": "string" },
      "why_better": { "type": "string" }
    }
  }
},
"priority_roadmap": {
  "type": "array",
  "items": { "type": "string" }
}
```
Reject vacuous schema-valid outputs.
Most required strings and arrays can still be empty, so { "job_title": "", "key_matches": [], ... } can pass shape validation while carrying no alignment signal. Consider adding minLength: 1 to required strings and minItems: 1 where the skill should always emit at least one finding/suggestion.
🛡️ Proposed schema hardening pattern

```diff
-    "job_title": { "type": "string" },
-    "resume_analyzed": { "type": "string" },
-    "overall_alignment": { "type": "string" },
+    "job_title": { "type": "string", "minLength": 1 },
+    "resume_analyzed": { "type": "string", "minLength": 1 },
+    "overall_alignment": { "type": "string", "minLength": 1 },
 ...
     "key_matches": {
       "type": "array",
+      "minItems": 1,
       "items": { "type": "string" }
     },
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
"job_title": { "type": "string", "minLength": 1 },
"resume_analyzed": { "type": "string", "minLength": 1 },
"overall_alignment": { "type": "string", "minLength": 1 },
"alignment_breakdown": {
  "type": "object",
  "required": [
    "required_qualifications",
    "nice_to_have",
    "technical_skills",
    "soft_skills",
    "domain_knowledge"
  ],
  "additionalProperties": false,
  "properties": {
    "required_qualifications": { "type": "string" },
    "nice_to_have": { "type": "string" },
    "technical_skills": { "type": "string" },
    "soft_skills": { "type": "string" },
    "domain_knowledge": { "type": "string" }
  }
},
"key_matches": {
  "type": "array",
  "minItems": 1,
  "items": { "type": "string" }
},
"gaps": {
  "type": "array",
  "items": {
    "type": "object",
    "required": ["requirement", "your_status", "priority", "suggestion"],
    "additionalProperties": false,
    "properties": {
      "requirement": { "type": "string" },
      "your_status": { "type": "string" },
      "priority": { "type": "string", "enum": ["high", "medium", "low"] },
      "suggestion": { "type": "string" }
    }
  }
},
"tailoring_suggestions": {
  "type": "array",
  "items": {
    "type": "object",
    "required": [
      "section",
      "current_bullet",
      "job_focus",
      "suggested_rewrite",
      "why_better"
    ],
    "additionalProperties": false,
    "properties": {
      "section": { "type": "string" },
      "current_bullet": { "type": "string" },
      "job_focus": { "type": "string" },
      "suggested_rewrite": { "type": "string" },
      "why_better": { "type": "string" }
    }
  }
},
"priority_roadmap": {
  "type": "array",
  "items": { "type": "string" }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/resume-job-alignment/schema/output.schema.json` around lines 19 - 82,
The schema currently allows empty strings/arrays (e.g., "job_title",
"resume_analyzed", "overall_alignment", the properties inside
"alignment_breakdown", and array fields "key_matches", "gaps",
"tailoring_suggestions", "priority_roadmap"), which permits vacuous outputs;
update those required string properties (job_title, resume_analyzed,
overall_alignment, alignment_breakdown.required_qualifications, nice_to_have,
technical_skills, soft_skills, domain_knowledge, and all string props inside gap
items and tailoring_suggestions items) to include "minLength": 1, and add
"minItems": 1 to arrays that must return at least one element (key_matches,
gaps, tailoring_suggestions, priority_roadmap) so the validator rejects
empty/blank results while keeping existing "additionalProperties": false and the
"priority" enum unchanged.
| "prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.", | ||
| "expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')" |
Fix the inaccurate word-count expectation.
The first sentence is not 47+ words; keeping that number in the expected output can make the eval reward a false finding. Use a non-numeric “long sentence” expectation or correct the count.
🧪 Proposed eval text fix

```diff
- "expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
+ "expected_output": "JSONL with sentence improvements: sentence_length (long sentence → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.", | |
| "expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')" | |
| "prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.", | |
| "expected_output": "JSONL with sentence improvements: sentence_length (long sentence → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')" |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/sentence-clarity-cs/evals/evals.json` around lines 6 - 7, The
expected_output entry mistakenly asserts "47+ words" for the first sentence
length; update the "expected_output" value in the evals.json block so it no
longer contains the inaccurate numeric count—either replace "47+ words →
shortened" with a non-numeric descriptor like "long sentence → shortened" or
compute and insert the correct word count, and ensure the string still
references the other checks (weak_verb, ambiguous_pronoun) so the keys "prompt"
and "expected_output" remain consistent.
7 issues found across 32 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="skills/academic-final-review-cs/SKILL.md">
<violation number="1" location="skills/academic-final-review-cs/SKILL.md:135">
P1: Conflicting output instructions can cause non-enum LaTeX checklist items to be emitted, breaking strict schema validation.</violation>
<violation number="2" location="skills/academic-final-review-cs/SKILL.md:250">
P3: This instruction conflicts with the output contract: checklist status is tri-state (`PASS`, `FAIL`, `WARN`), not binary. Reword it to avoid inconsistent status generation.</violation>
</file>
<file name="skills/sentence-clarity-cs/evals/evals.json">
<violation number="1" location="skills/sentence-clarity-cs/evals/evals.json:7">
P2: Expected-output rubric has an incorrect word-count claim ("47+ words") for sentence 1, creating inaccurate eval/scoring guidance.</violation>
</file>
<file name="skills/paper-structure-cs/SKILL.md">
<violation number="1" location="skills/paper-structure-cs/SKILL.md:31">
P2: The skill asks for checks (flow/transitions, section balance) that cannot be represented by the schema’s closed `type` enum, leading to dropped or misclassified outputs.</violation>
<violation number="2" location="skills/paper-structure-cs/SKILL.md:60">
P2: The required-section definition is inconsistent: `missing_section` omits Introduction while other sections treat Introduction as required.</violation>
<violation number="3" location="skills/paper-structure-cs/SKILL.md:71">
P2: The spec inconsistently treats Discussion as both required and optional, which can cause false positive structural violations for venue-valid papers.</violation>
</file>
<file name="skills/resume-job-alignment/schema/output.schema.json">
<violation number="1" location="skills/resume-job-alignment/schema/output.schema.json:21">
P2: `overall_alignment` is modeled as an unconstrained string even though project docs/evals treat it as a percentage score, allowing invalid non-score text to pass schema validation.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```markdown
## LaTeX-Specific Checks (If Applicable)

If the paper is written in LaTeX, additionally check:
```
P1: Conflicting output instructions can cause non-enum LaTeX checklist items to be emitted, breaking strict schema validation.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/academic-final-review-cs/SKILL.md, line 135:
<comment>Conflicting output instructions can cause non-enum LaTeX checklist items to be emitted, breaking strict schema validation.</comment>
<file context>
@@ -0,0 +1,268 @@
+
+## LaTeX-Specific Checks (If Applicable)
+
+If the paper is written in LaTeX, additionally check:
+
+| Item | What to verify |
</file context>
```json
{
  "id": 1,
  "prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.",
  "expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
```
P2: Expected-output rubric has an incorrect word-count claim ("47+ words") for sentence 1, creating inaccurate eval/scoring guidance.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/sentence-clarity-cs/evals/evals.json, line 7:
<comment>Expected-output rubric has an incorrect word-count claim ("47+ words") for sentence 1, creating inaccurate eval/scoring guidance.</comment>
<file context>
@@ -0,0 +1,20 @@
+ {
+ "id": 1,
+ "prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.",
+ "expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
+ },
+ {
</file context>
```markdown
## Issue Categories

- `missing_section` - Required section (Abstract, Methods, Results, Discussion, Conclusion) not found
```
P2: The required-section definition is inconsistent: missing_section omits Introduction while other sections treat Introduction as required.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/paper-structure-cs/SKILL.md, line 60:
<comment>The required-section definition is inconsistent: `missing_section` omits Introduction while other sections treat Introduction as required.</comment>
<file context>
@@ -0,0 +1,581 @@
+
+## Issue Categories
+
+- `missing_section` - Required section (Abstract, Methods, Results, Discussion, Conclusion) not found
+- `section_order` - Sections in wrong sequence (e.g., Results before Methods)
+- `heading_skip` - Heading hierarchy violates standard (# → ## → ###, no skips to ###)
</file context>
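One way to keep the two required-section lists from drifting again is a single canonical constant that both the category docs and any eval harness check against; a hedged sketch (the constant name and the markdown-heading heuristic are assumptions, not repo code):

```python
import re

# Single source of truth; the SKILL.md category list and the
# "How to Evaluate" steps would both be checked against this.
REQUIRED_SECTIONS = ["Abstract", "Introduction", "Methods",
                     "Results", "Discussion", "Conclusion"]

def missing_sections(markdown: str) -> list[str]:
    """Report required sections with no matching top-level heading."""
    headings = {m.group(1).strip()
                for m in re.finditer(r"^#{1,3}\s+(.+)$", markdown, re.M)}
    return [s for s in REQUIRED_SECTIONS if s not in headings]

paper = "# Abstract\n## Introduction\n## Methods\n## Results\n## Conclusion\n"
print(missing_sections(paper))  # → ['Discussion']
```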
```markdown
- Heading levels follow proper hierarchy (no # → ### skips)
- Bibliography present and complete
- Table of Contents (if present) matches actual sections
- Section transitions and flow are logical
```
P2: The skill asks for checks (flow/transitions, section balance) that cannot be represented by the schema’s closed type enum, leading to dropped or misclassified outputs.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/paper-structure-cs/SKILL.md, line 31:
<comment>The skill asks for checks (flow/transitions, section balance) that cannot be represented by the schema’s closed `type` enum, leading to dropped or misclassified outputs.</comment>
<file context>
@@ -0,0 +1,581 @@
+- Heading levels follow proper hierarchy (no # → ### skips)
+- Bibliography present and complete
+- Table of Contents (if present) matches actual sections
+- Section transitions and flow are logical
+- Heading naming conventions are consistent
+- Section balance and proportionality (no single 20-page Methods section, etc.)
</file context>
```markdown
## How to Evaluate

1. **Required sections**: Check for Abstract, Introduction, Methods, Results, Discussion, Conclusion
```
P2: The spec inconsistently treats Discussion as both required and optional, which can cause false positive structural violations for venue-valid papers.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/paper-structure-cs/SKILL.md, line 71:
<comment>The spec inconsistently treats Discussion as both required and optional, which can cause false positive structural violations for venue-valid papers.</comment>
<file context>
@@ -0,0 +1,581 @@
+
+## How to Evaluate
+
+1. **Required sections**: Check for Abstract, Introduction, Methods, Results, Discussion, Conclusion
+2. **Order**: Verify logical flow (Abstract → Intro → Methods → Results → Discussion → Conclusion → Bibliography)
+3. **Headings**: Ensure proper nesting (no jumps; consistent levels for parallel sections)
</file context>
| "properties": { | ||
| "job_title": { "type": "string" }, | ||
| "resume_analyzed": { "type": "string" }, | ||
| "overall_alignment": { "type": "string" }, |
P2: overall_alignment is modeled as an unconstrained string even though project docs/evals treat it as a percentage score, allowing invalid non-score text to pass schema validation.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/resume-job-alignment/schema/output.schema.json, line 21:
<comment>`overall_alignment` is modeled as an unconstrained string even though project docs/evals treat it as a percentage score, allowing invalid non-score text to pass schema validation.</comment>
<file context>
@@ -0,0 +1,84 @@
+ "properties": {
+ "job_title": { "type": "string" },
+ "resume_analyzed": { "type": "string" },
+ "overall_alignment": { "type": "string" },
+ "alignment_breakdown": {
+ "type": "object",
</file context>
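If `overall_alignment` really is a percentage score, a JSON Schema `pattern` would close this gap; a sketch, assuming a `"78%"`-style format (the PR never pins the format down):

```python
import re

# The same regex could back a JSON Schema constraint, e.g.
#   "overall_alignment": { "type": "string", "pattern": "^(100|[1-9]?[0-9])%$" }
SCORE_RE = re.compile(r"^(100|[1-9]?[0-9])%$")

for value in ["78%", "100%", "0%", "moderate fit", "120%", ""]:
    print(f"{value!r}: {bool(SCORE_RE.fullmatch(value))}")
```

Integer percentages 0-100 pass; free text, out-of-range values, and blanks are rejected at schema-validation time rather than in a rubric spot-check.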
```markdown
## Guidelines

- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.
```
P3: This instruction conflicts with the output contract: checklist status is tri-state (PASS, FAIL, WARN), not binary. Reword it to avoid inconsistent status generation.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/academic-final-review-cs/SKILL.md, line 250:
<comment>This instruction conflicts with the output contract: checklist status is tri-state (`PASS`, `FAIL`, `WARN`), not binary. Reword it to avoid inconsistent status generation.</comment>
<file context>
@@ -0,0 +1,268 @@
+
+## Guidelines
+
+- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.
+- **Actionability**: Guidance must tell author exactly what to fix (line numbers, specific changes)
+- **Completeness**: Perform the full checklist; don't skip items
</file context>
What Changed
Adds 7 new skills in two themed clusters, plus the manifest + CI wiring to make them installable via `./install.sh --profile full`.
CS academic writing (4 skills — JSONL output, ajv-validated)
Resume building (3 skills — single JSON object output)
Shared helper
Per-skill infrastructure (every skill ships)
Install manifest + CI (this commit only)
Why This Change
Two gaps in the current ECC surface:
Every skill ships with a strict JSON schema and 3 evals so outputs are machine-checkable and regression-tested.
Testing Done
Eval results (pass@3, live skill invocations)
Schema validation (ajv strict):
21 trials · 99 objects · 100% schema-valid · pass^3 = 7/7 skills · rubric spot-check 7/7
Install verification
`./install.sh --profile full` from this branch now copies all 7 skills + `_shared/` to `~/.claude/skills/`. Prior to this PR, the manifest-driven installer silently skipped them.
Type of Change
Security & Quality Checklist
Documentation
Commits on this branch:
Summary by cubic
Adds 7 new skills for CS academic writing and resume building, each with strict schemas and evals, and wires them into the installer so
`./install.sh --profile full` includes them (academic-writing also in research). Increases the skill count from 183 to 190.

New Features

- CS academic writing: `sentence-clarity-cs`, `abstract-methods-results-cs`, `paper-structure-cs`, `academic-final-review-cs`.
- Resume building: `harvard-resume-validator`, `kickass-resume-validator`, `resume-job-alignment`.
- Shared helper: `skills/_shared/resume-common.md`.

Refactors

- Strict schemas (`additionalProperties: false`, closed enums) for all skills; examples and 21 evals validate under `ajv`.
- Profiles `academic-writing`, `resume-toolkit` and components `capability:academic-writing`, `capability:resume-toolkit`; enabled in `full` (both) and `research` (academic-writing) profiles.
- `package.json` `files` allowlist; `scripts/ci/validate-skills.js` now skips underscore-prefixed dirs; README/AGENTS counts updated to 190 skills (EN + zh-CN).
Summary by CodeRabbit
Release Notes
New Features
Documentation