Still Developing Software: 10+ Years Since My Last Post

April 5th, 2021

To find most of my work, look at my GitHub and GitHub Gist accounts below; I'm ilikenwf there too.

ilikenwf GitHub

ilikenwf Gists

Meanwhile, my non-programming site was formerly based on Joomla, and is thus insecure. I've got a migration path but haven't gotten around to carrying it out or putting together a complete template for it.

My possibly blank personal site – I’ll be updating this eventually.

Categories: Uncategorized, Useful Scripts

Free C# Class – iliPing – Threadable, Fast, No External Classes Required

August 23rd, 2010

Before I give you another free C# class, I thought I might pass along a bit of a ProTip: it seems to me that the easiest money I make is from TextLinkAds. TLA capitalizes on the fact that spiders will at least visit your site, even if real humans don't. Join and give it a try.

This is a good class, and it works well if you thread it. It doesn't depend on any external XML-RPC classes, and the only sites it fails on for me are ones that use nonstandard XML-RPC parameters, like autopinger. I usually set it up to walk a list of sites/blogs with their titles and a list of XML-RPC ping service URIs, and I thread it so it can ping quite quickly; a sketch of a threaded caller follows the class below. It should be compatible with both the .NET Framework and my personal favorite, Mono (check out MonoDevelop sometime!).

Feel free to use this however you like and modify it…if you make any changes, please pay it forward and share them so everyone else can use the improvements too, since this is open source.
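For reference, the request body the class builds is just a standard weblogUpdates.ping XML-RPC call. On the wire it looks roughly like this (the blog name and URL are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<methodCall>
  <methodName>weblogUpdates.ping</methodName>
  <params>
    <param><value>My Example Blog</value></param>
    <param><value>http://www.example.com/</value></param>
  </params>
</methodCall>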


/* iliPing - (C)2010 by ilikenwf, http://www.ilikenwf.com
* Feel free to use and/or modify this for any purpose...
* Please pay it forward and share any improvements you make
* Based on the XML-RPC pinging in C# class from
* http://mboffin.com/post.aspx?id=1613 modified and broken up */


using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;
using System.Xml;
using System.Xml.XPath;
using System.Xml.Serialization;
using System.Net;

namespace pinger
{
    /// <summary>
    /// Sends an XML-RPC ping for the given blog name, blog URL, and ping service URL.
    /// </summary>
    public class pingRequestor : IDisposable
    {
        public string pingRequest(string blogname, string blogurl, string pingurl,
            bool verbosity, int httpTO)
        {
            string output = string.Empty;
            try
            {
                HttpWebRequest webreqPing = (HttpWebRequest)WebRequest.Create(pingurl);
                webreqPing.UserAgent = "WordPress 3.0.1";
                webreqPing.Timeout = httpTO;
                webreqPing.Method = "POST";
                webreqPing.ContentType = "text/xml";

                // Get the stream for the web request
                Stream streamPingRequest = webreqPing.GetRequestStream();

                // Create an XML text writer that writes to the web request's stream
                XmlTextWriter xmlPing = new XmlTextWriter(streamPingRequest, Encoding.UTF8);

                // Build the ping, using the blog name and blog URL
                xmlPing.WriteStartDocument();
                xmlPing.WriteStartElement("methodCall");
                xmlPing.WriteElementString("methodName", "weblogUpdates.ping");
                xmlPing.WriteStartElement("params");
                xmlPing.WriteStartElement("param");
                xmlPing.WriteElementString("value", blogname);
                xmlPing.WriteEndElement();
                xmlPing.WriteStartElement("param");
                xmlPing.WriteElementString("value", blogurl);
                xmlPing.WriteEndElement();
                xmlPing.WriteEndElement();
                xmlPing.WriteEndElement();

                // Close the XML text writer, flushing the XML to the stream
                xmlPing.Close();

                // Send the request and store the response, then get the response's stream
                HttpWebResponse webrespPing = (HttpWebResponse)webreqPing.GetResponse();
                StreamReader streamPingResponse = new StreamReader(
                    webrespPing.GetResponseStream());
                XmlDocument response = new XmlDocument();

                // Store the result in an XmlDocument for parsing if verbosity is off;
                // otherwise pass the raw response through
                if (verbosity == false)
                {
                    response.LoadXml(streamPingResponse.ReadToEnd());
                }
                else
                {
                    output = streamPingResponse.ReadToEnd();
                }

                // Close the response stream and the response itself
                streamPingResponse.Close();
                webrespPing.Close();

                if (verbosity == false)
                {
                    // Check the flerror boolean in the response: "0" or "false"
                    // means the ping service reported no error
                    XmlElement flerror = (XmlElement)response.SelectSingleNode("//boolean");
                    switch (flerror.InnerText)
                    {
                        case "0":
                        case "false":
                            output = "[Success] ";
                            break;
                        default:
                            output = "[Failure] ";
                            break;
                    }
                }
            }
            catch
            {
                // Any failure (including an actual timeout) lands here,
                // so this label is somewhat generic and nonspecific
                output = "[Timeout] ";
            }
            // Return the result
            return output;
        }

        public void Dispose()
        {
            // Nothing unmanaged to clean up; present to satisfy IDisposable
        }
    }
}
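Here's a minimal sketch of the threaded setup described above. The blog details and ping service URLs are placeholders; swap in your own lists:

using System;
using System.Collections.Generic;
using System.Threading;
using pinger;

class PingRunner
{
    static void Main()
    {
        // Placeholder blog details and ping services; substitute your own
        string blogName = "My Example Blog";
        string blogUrl = "http://www.example.com/";
        string[] pingUrls = {
            "http://rpc.pingomatic.com/",
            "http://blogsearch.google.com/ping/RPC2"
        };

        List<Thread> threads = new List<Thread>();
        foreach (string pingUrl in pingUrls)
        {
            string url = pingUrl; // capture a copy for this thread's closure
            Thread t = new Thread(delegate()
            {
                using (pingRequestor requestor = new pingRequestor())
                {
                    // verbosity off, 10-second HTTP timeout
                    Console.WriteLine(
                        requestor.pingRequest(blogName, blogUrl, url, false, 10000) + url);
                }
            });
            t.Start();
            threads.Add(t);
        }

        // Wait for every ping to finish before exiting
        foreach (Thread t in threads)
        {
            t.Join();
        }
    }
}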

iWeb Correction Script – A Shell Script for OS X

August 7th, 2010

I wrote this for a client who uses iWeb. It does a number of things, the most important being that it makes the site live in the domain root (or whatever folder you dump it into) instead of in a Website_Files folder that gets redirected to.

It also renames the front page (Welcome.html, Home.html, or whatever yours is called) to index.html, lowercases all page file names, and updates feed.xml and every HTML file to reflect the changes.

Finally, it adds meta keywords and descriptions (which you need to edit into the script for your site).

This is my first time dealing with OS X in this respect, so if you want to improve it, go ahead…just let me know if you do, please. I don't own a Mac, but I wrote this on one so that the client could use it. It works, even if it's not entirely elegant. (I'm a Linux guy; if it were GNU bash it would be a lot nicer code!)

To use it, publish your iWeb site into a folder, copy corrector.sh into said folder, and DO NOT EXECUTE THE SCRIPT FROM FINDER. Instead, open a terminal, cd into the folder holding the iWeb files and corrector.sh, and run sh corrector.sh from the command line; a sample session follows the script below. From there it does all the work, and you just upload everything to your host.


#!/bin/sh

echo "iWeb corrector - put the site in the domain root!"
echo "Fully free and opensource software by ilikenwf, (C) 2010"
echo "http://www.ilikenwf.com"
echo "FOR USE ON THE OSX O/S ONLY!"
echo "-------------------------------------------------------"
echo " "
echo "Please type the name of the site folder (case sensitive): "

read sitefolder

echo " "
echo "Please enter the name of your front/main page, like Welcome or Home (case sensitive): "

read index

# Remove iWeb's redirect page and assets from the site root
rm ./index.html
rm ./assets*

# Move the published site out of its subfolder into the root
mv ./"$sitefolder"/* ./
rm -rf ./"$sitefolder"

# Remove any stray index.html that came along, then promote the front page
rm ./index.html

mv ./"$index".html ./index.html

# Lowercase every page's file name, then fix references to it
# in the other pages and in feed.xml
for i in *.html
do
    lower=`echo "$i" | tr '[:upper:]' '[:lower:]'`
    mv "$i" "$lower"
    sed -i "" -e s/"$i"/"$lower"/g *.html
    sed -i "" -e s/"$i"/"$lower"/g feed.xml
done

sed -i "" -e s/"$index"\.html/index\.html/g *.html
sed -i "" -e s/"$index"\.html/index\.html/g feed.xml

# Metatags - edit the text in between the "" for description
# and keywords, respectively.
# Using perl here because OSX's sed command
# Doesn't play nice when inserting newlines
# When you do this, make sure to escape special characters like commas
# eg "keyword1\, keyword2\,"
perl -pi -w -e 's/UTF\-8"\ \/\>/UTF\-8"\ \/\>\n<meta name="description" content="YOUR DESCRIPTION HERE" \/>\n<meta name="keywords" content="keyword1\, keyword2" \/>/g;' *.html
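
For example, assuming the site was published into a folder called MySite on the Desktop and corrector.sh was saved to Downloads (both paths are placeholders), the session would look roughly like this:

cd ~/Desktop/MySite
cp ~/Downloads/corrector.sh .
sh corrector.sh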
Categories: Useful Scripts

Free C# Web Request Class – LazyHTTP – cURL-like Functionality

December 29th, 2009

Before I give you the free C# class, I thought I might pass along a bit of a ProTip: it seems to me that the laziest source of income I have is TextLinkAds. This is just a little protip, but you should really join TLA and slap their plugin onto all of your accepted blogs. You will make money…without doing anything. I'm sick and tired of hearing people complain that their sites are indexed but not making money. TLA capitalizes on the fact that spiders will at least visit your site, even if real humans don't. Join and give it a try.

See the class below. If you can't understand how to use it, you may want to brush up on C# … or get a book or something; there's also a short usage sketch after the class. Here is LazyHTTP (special thanks to nuls on the Syndk8 Blackhat SEO Forums for cleaning this up):


/* LazyHTTP Lazy Request Class. (c) 2009 by ilikenwf http://www.ilikenwf.com
* Special thanks to nuls of the syndk8 forums at http://forum.syndk8.net
* feel free to copy, modify, share, and distribute as you wish,
* and to include this in any projects you may have, commercial or not.
* please just leave the credits line up above.
* In other words, consider this licensed under the GNU General Public License */


using System;
using System.Net;
using System.IO;
using System.Text;

namespace LazyHTTP
{
    public class LazyClient : WebClient, IDisposable
    {
        // Fetches a URL and returns the response body,
        // or an empty string if anything goes wrong
        public new string DownloadString(string address)
        {
            string output = String.Empty;

            try
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(address);
                request.UserAgent = @"YOUR USERAGENT STRING HERE";
                request.KeepAlive = false;
                request.Timeout = 15 * 1000; // 15 seconds

                HttpWebResponse res = (HttpWebResponse)request.GetResponse();

                using (StreamReader streamReader = new StreamReader(res.GetResponseStream()))
                {
                    output = streamReader.ReadToEnd();
                }
            }
            catch
            {
                // Error Hiding - Weeee!
            }
            return output;
        }

        // Sends a form-encoded POST and discards the response
        public void PostAction(string address, string postString)
        {
            try
            {
                HttpWebRequest req = (HttpWebRequest)WebRequest.Create(address);
                req.Timeout = 3000;
                req.Method = "POST";
                req.ContentType = "application/x-www-form-urlencoded";
                req.ServicePoint.Expect100Continue = false;
                byte[] postBytes = new UTF8Encoding().GetBytes(postString);
                req.ContentLength = postBytes.Length;
                using (Stream stream = req.GetRequestStream())
                {
                    stream.Write(postBytes, 0, postBytes.Length);
                }
                req.GetResponse().Close();
            }
            catch
            {
                // Error Hiding - Weeee!
            }
        }

        public new void Dispose()
        {
            // WebClient (via Component) already implements IDisposable;
            // defer to the base implementation
            base.Dispose();
        }
    }
}
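If it helps, here is a minimal usage sketch; the URLs and form fields are placeholders, not real endpoints:

using System;
using LazyHTTP;

class Example
{
    static void Main()
    {
        using (LazyClient web = new LazyClient())
        {
            // GET: returns the page body, or "" if the request failed
            string html = web.DownloadString("http://www.example.com/");
            Console.WriteLine("Fetched {0} characters", html.Length);

            // POST: fire-and-forget form submission
            web.PostAction("http://www.example.com/submit.php",
                "field1=value1&field2=value2");
        }
    }
}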

Wow, SEO Is Time Intensive Sometimes

September 5th, 2009

Echoed from my other blog… in actuality, I've come to the conclusion that most things are time-consuming.

I've been working on a large array of projects lately, and all of them eat up time. I'm back in school, but that really doesn't hinder me much beyond the hours I spend in class…and I make good use of the 30 megabit down / 7 megabit up internet connection.

WordPress core and plugin updates were first on the docket. That took plenty of time across my sites, and then I proceeded to play with some new plugins like CosHTMLCache (which broke RSS on a couple of the sites, so I reverted to WP-Super-Cache). I've also started using twitterposter to promote my posts, which has brought a good increase in traffic and AdSense clicks.

After that, I've just been tinkering and considering whether I should put up more sites or start working on my all-in-one blackhat SEO command center, in an effort to automate as much as I can: site creation, promotion, maintenance and tracking, and ad placement and management (probably integrating OpenX in some way for that last one…).

All in all, I feel I'm on the upswing, though I still stink at getting CPA conversions…I just don't understand what I'm doing wrong. The techniques I hear about are often so old that they're saturated or dead, so I guess I need to find a place that isn't full of whining n00bs and has enough activity to actually warrant joining and discussing techniques that work.

In the end, it’s all more of a playground than anything…I still have tons of unused domains to do something with…I’ll never run out of things to do.

Autopligg 2.5.1 Released

February 4th, 2009

A new version has been released with tons of fixes and 100-thread capability. It also has a slicker interface, and it really runs well.

The coupon code here still applies; use it when you buy and save $40! See toolshed.syndk8.net for more details. Buying also gets you a free super-secret tool that posts links to other Digg clones. Furthermore, the private forums feature gigantic lists of Pligg sites to add to your Autopligg database for even more linking power. Use the code below and get them, with free updates for life!

PRIVATE40OFF

Categories: Uncategorized

$40 Off Coupon Code for Autopligg

January 29th, 2009

Just enter PRIVATE40OFF at checkout. Make sure to click apply!

Get Autopligg here.

Categories: Uncategorized