Secret backdoor to many websites!! Browse for freeee!!

Dec 3, 2011
IF YOU UNDERSTAND THIS THING, DON'T HESITATE TO HELP THE REST OF US!!!
Folks, there's a post that I think all of you who haven't read it yet will love.
The problem is that I've failed to understand it properly, which is why I've decided to dump it on this forum of experts so I can get a clear picture of how to get into sites that have restrictions! Here's the thing itself!

Secret Backdoor To Many Websites
Ever experienced this? You ask Google to look something up; the engine returns a number of hits, but when you try to open the ones with the most promising content, you are confronted with a registration page instead, and the material you were looking for will not be revealed to you unless you agree to a credit card transaction first....
The lesson you should have learned here is: Obviously Google can go where you can't.

Can we solve this problem? Yes, we can. We merely have to convince the site we want to enter that WE ARE GOOGLE.

In fact, many sites that force users to register or even pay in order to search and use their content, leave a backdoor open for the Googlebot, because a prominent presence in Google searches is known to generate sales leads, site hits and exposure.
Examples of such sites are Windows Magazine, .Net Magazine, Nature, and many, many newspapers around the globe.

How, then, can you disguise yourself as a Googlebot? Quite simple: by changing your browser's User Agent. Copy the following code segment, paste it into a fresh Notepad file, save it as Useragent.reg, and merge it into your registry.

CODE
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent]
@="Googlebot/2.1"
"Compatible"="+http://www.googlebot.com/bot.html"

Voila! You're done!
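The registry tweak above only changes the header that Internet Explorer sends; the same idea can be tried from any HTTP client by setting the User-Agent header directly. Here is a minimal sketch in Python's standard library (the URL is a placeholder, and whether any given site actually honors a Googlebot user agent is not guaranteed):

```python
import urllib.request

# The user-agent string the Googlebot crawler announces itself with.
GOOGLEBOT_UA = "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"

def fetch_as_googlebot(url: str) -> bytes:
    # Build a request whose User-Agent header claims to be Googlebot,
    # instead of the browser's normal Mozilla/... string.
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example (hypothetical URL): fetch_as_googlebot("http://example.com/article")
```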

You may always change it back again.... I know of only one site that uses your User Agent to establish your eligibility to use its services, and that's the Windows Update site...
To restore the IE6 User Agent, save the following code as NormalAgent.reg and merge it into your registry:

CODE
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent]
@="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

PS:
Opera allows on-the-fly switching of User Agents through its "Browser Identification" function, while for Mozilla/Firefox browsers a User Agent switcher is available as an installable extension!

or if you like, visit this link: I TECHNOLOGIES: Secret Backdoor To Many Websites

What I'm asking of you forum members: if anyone has an easy, clear way of entering those codes and merging them into the registry, please save the ship! There are genuine books on a certain site, but to access them you have to sign up or sign in! They want serious money, but you know the economy doesn't allow it!
Over to you!!
 
It's easy..
On the desktop, or anywhere else,
Right-click your mouse - then go to New > Text Document
Open it
Copy those codes exactly as written, then paste them into the text document
Then close it and save the changes

If you're using Windows 7:
Go to START, then type EXT
Then choose SHOW OR HIDE FILE EXTENSIONS
Then untick HIDE EXTENSIONS FOR KNOWN FILE TYPES, then Apply

Now go back to that text document.
Rename it by deleting (.txt) and typing (.reg), then save the changes..
Then double-click it
Done
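The Notepad steps above can also be done in one go with a short Python script (just a sketch: it only writes the file to the current folder - you still have to double-click the resulting Useragent.reg yourself to merge it, and that only makes sense on Windows):

```python
from pathlib import Path

# The same registry code from the post above, written straight to a .reg
# file so the hide-extensions / rename-from-.txt dance isn't needed.
REG_TEXT = (
    'Windows Registry Editor Version 5.00\n'
    '[HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion'
    '\\Internet Settings\\5.0\\User Agent]\n'
    '@="Googlebot/2.1"\n'
    '"Compatible"="+http://www.googlebot.com/bot.html"\n'
)

Path("Useragent.reg").write_text(REG_TEXT)
```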
 
mhhh so what will you have achieved? I don't get this part. Experts, please elaborate.
So turning your PC into a Googlebot is now a big deal? That backdoor access you'll get on the web, what amazing thing will it let you do? Just reading?????
 
What about us MAC users running the Leopard and Lion OS, is there anything for us here?
 
From the looks of it, if you apply those configurations you'll make your PC appear to be Google, so you can get their services free without any payment or registration.
 
Let me contribute with the understanding I have.
Googlebot is a program Google uses to collect data from various websites and organize it. Now, webmasters often allow Googlebot to search even inside their paid pages, which an ordinary user cannot access without paying.
The purpose of leaving things that way is to raise the website's ranking on the results page.
Example: if they block Googlebot, then when someone searches for something, the website might appear in the Google results, but on page 30.
If they let Googlebot in, the website might show up on page 5 of the Google results.
Now, since these websites are businesses, a webmaster can't block it - what user is going to page through Google results all the way to page 35? And that's exactly the loophole the clever ones exploit: they plant those codes above to pretend they're accessing the website as Google (so they won't be blocked by the payment prompts).
 

Boss, does that mean that when forums list "Googlebot" as a viewer they mean this same thing, or is it different for forums?..
 
Wow, thanks for letting us know; I'll try it on Monday when I get to the office.
Boss, you've got work to do! I tried it just as the expert explained above! When I tried a certain site, the first thing I was told was register first to get access! I couldn't figure out whether it applies to all browsers or only to Firefox, because I tried with Google Chrome and it still refused!!
 
Boss, I left that user agent file on the desktop! I followed your directions! Strangely, it still refused! Or was I supposed to merge it into the registry? I decided to leave it there because you said anywhere would do!!
 

I think you've mixed up two things here: a website's content being searchable, and something called search engine optimization (SEO).

Before I continue, let me give a warning to the people reading this thread. Don't rush to follow these instructions - you could end up wrecking your computers. The registry is extremely important, and even many self-styled computer experts don't understand how it works. So be careful.

After that warning, let me give a short, simple explanation of searching. As I understand it, every piece of website/page content has an address, or URL. Each address/page carries something called meta tag information. Search engines (crawlers, or bots) have access to these meta tags, so they read them and index them against the URL. So search engines don't read, and don't have access to, a page's content; they index your page using its meta tags.

How it works.
When you build a web page - say one about different kinds of minerals - you might use keywords like gold, tanzanite, ruby, etc. The search engine reads these words and indexes them against your page. When someone types, say, "gold" into the search engine, your page will show up as one of the pages suggested to the reader.
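The indexing idea described above can be sketched as a toy inverted index (purely illustrative, with a made-up URL; real search engines are vastly more sophisticated):

```python
from collections import defaultdict

# keyword -> set of page URLs that declared that keyword
index = defaultdict(set)

def index_page(url, keywords):
    # A crawler reads a page's keywords and files the URL under each one.
    for kw in keywords:
        index[kw.lower()].add(url)

def search(keyword):
    # A query simply looks the keyword up in the index.
    return sorted(index[keyword.lower()])

# Hypothetical page about minerals, as in the example above.
index_page("http://example.com/minerals", ["gold", "tanzanite", "ruby"])
```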

Now, whether a page appears high or low is a matter of how you play with those keywords and phrases in the meta tags. That technique is what's called search engine optimization. Search engines also run a keyword business: you pay, say, Google to use certain keywords, say "holiday". So when someone types that word, the first websites to appear will be the ones that paid for it, and they usually show up on a yellowish background at the top. After those come the ordinary websites that haven't paid.

The user agent's job is to identify the client (the browser, in this case) to the web server when you request a page. For example, the web server needs to know what kind of browser you have - Mozilla, Opera, etc. - and which version, so it can render your page the way your browser expects.

You will have searched for information on a search engine and gotten websites that sometimes have nothing at all to do with your topic - sometimes porn sites, or content-aggregator sites that, instead of giving you the content you expected, just end up suggesting more links.

This happens because of another technique, called cloaking, which is used to trick the search engine into believing a website holds certain information when it doesn't. When some websites detect that a search engine is crawling them, they run a script that feeds it lots of false information that isn't even on those pages. The crawler then indexes them as if they really carried that content.

So you can edit the user agent string and add "googlebot" so that you surf like a crawler, although the odds of actually getting paid information or bypassing registration this way are very slim.
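Both the paywall "backdoor" and the cloaking described above come down to the server branching on the User-Agent header it receives. A hypothetical handler (function and return values invented purely for illustration) might look like this:

```python
def choose_response(user_agent: str) -> str:
    # A paywalled site that whitelists crawlers might branch like this:
    # crawlers get the full article (so it gets indexed and ranks well),
    # while everyone else is sent to the registration/payment page.
    if "Googlebot" in user_agent:
        return "full article"
    return "registration page"
```

This is also why the trick is unreliable: a server can verify a visitor really is Googlebot (for example, by its IP address) instead of trusting the header alone.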

That's the short version, in plain language, so you all understand.
 
Hi, let me briefly add to what iMind said above. First, I repeat the warning about editing the registry: do so at your own risk. Another thing, though slightly off the topic above: even though we can feed Google and the other search engines fake information (cloaking) about what's on a website or its content via meta tags and keywords, the big search engines like Google have already started checking whether the keywords you supply match the content actually on your site. For example, if you write about wildlife but add mineral-related keywords when you never mention minerals on your site, it's obvious your website will be thrown far down Google's rankings.
 