Talk:Digital Signal 1


Large block of unsigned op-ed comment

I'm a first time editor here, so I hope I'm going about this in the right way. I'm a jack of all trades but a master of only one: T1, that is. As one of the original 5 T1 carrier sales engineers, beginning in 1966, there are many things I can clear up that are anything but clear as we view them today. T1 is truly an amazing resource, less appreciated than it should be, and still capable of achieving more than any other available means of serving the public hunger for integrated services by digital network means. The poor promotion of ISDN is the primary reason it hasn't been more popular with the public.

In the early days of T1 deployment it represented an enormous leap forward to applications engineers, cutting the in-place cost of comparable analog systems by a factor of 3, a gap that foreshadowed what we are now accustomed to with silicon-based modular systems technologies like PCs and home media equipment, which get better and cheaper with each new model that comes along. Analog systems, meanwhile, were less well suited to large-scale integration of functionality, so by comparison their manufacturing costs escalated while the T1 alternative products got cheaper and better day by day.

There were also very significant operational benefits to T1 so long as it was properly deployed, and it took a few years for expertise to catch up with feasibility. But it didn't take long to learn that T1 was worth the effort. By 1970 T1 was among the hottest commodities being sold in the telecom hardware biz, and the rapid evolution of technology and newly learned techniques accelerated acceptance, but leapfrogging became a problem that lingers to this day...

Designers foresaw the need for an aggregation hierarchy, which theoretically became known as T2, T3, T4/5, etc. As T1 became the popular choice of technologies, the plan was to design and build new cables that would be installed to support T2, a signal that presented a much more difficult transmission challenge than the 772 kilohertz wave envelope emitted by T1. Before these cable designs could be mastered, the engineering labs were already making noise about optical fiber, so the future seemed headed in that direction and away from copper twisted wire. Interim experimentation with coax and broad-gauge twisted pairs was conducted during the '70s. The original T2 thru T5 hierarchy was expected to be as follows:

          T2 = T1 x 2 =  48 channels = 3.152 Mbit/s = 1.5 MHz
          T3 = T2 x 2 =  96 channels = 6.312 Mbit/s = 3.2 MHz
          T4 = T3 x 2 = 192 channels =    13 Mbit/s = 6.5 MHz
          T5 = T4 x 2 = 384 channels =    26 Mbit/s =  13 MHz

Back when these projections were being contemplated, the highest bundlings of 4 kHz VF channelized multiplex were conveyed either on coaxial tubes or microwave basebands, with the largest bundles known at the time being Master Channel Groups of 300 channels each. As pragmatists began to attempt to configure twisted wire cables that could support the first aggregating step, T2, they struggled with the regenerator capability needed to pass analog signal envelopes of 3 MHz or better through repeaters spaced far enough apart to be operationally and economically feasible. Meanwhile, GTE's transmission manufacturer Lenkurt Electric produced an optical T2 array for electric power companies that operated on fiber optic cable, and Collins Radio installed beta tests of a comparable system using 2 parallel coaxial cables of the kind being used by the fledgling cable TV industry.

Frustration offset success and the industry at large began to recognize that the digital future was optical, and that it was quite a ways off. In the interim T1 popularity continued to grow for a broadening set of reasons, not the least of which was the elimination of frequency sensitive channelizing elements. This in turn allowed channel differentiation to be reassigned to a broadening array of signal interface types, which meant that a T1 channel bank could be configured to host many types of circuit applications on a plug and play basis. The economic and operational impact of this was enormous, but is seldom mentioned by writers of T1 history.

The frame packaging of the 1.544 Mbit/s, 24-channel T1 bundle underwent refinements during this evolutionary period, and channel bank packaging nomenclature known as D types became intermingled with framing definitions. There has been considerable discussion of D1, D1A, D1B, etc., but little mention of the fact that channel bank configuration changes were also part of the ID's meaning. By the time we got to D3/D4 we understood SuperFrame bitstream references, but much confusion remains about the fact that D4 also implies twin channel banks, or 2xT1 bundles with the latest bit frame organization. Not many people realize that ESF is D5 framing. These subtleties only matter if you really want to know why T1 history can be so confusing to people who didn't actually deal with it daily.

I'll make my best attempt to curtail this longwinded dissertation here. T1 is arguably the most significant network transmission protocol in the development of the internet; that is not to say that TCP/IP is less important, but simply to suggest an honest assessment of how digital media have actually evolved. It is also important to recognize that T1/E1 continue to hold great future potential for those who want to delve into how to take a couple of megabits of data transport capacity and find the most practical ways to do positive things by available means. Thanks for your attention; there's a lot more detail in this history if people are interested.


I just have a minor issue with the "Trivia" section. Yes, T1 and DS1 are quite often used in an interchangeable manner. However, T1 actually identifies the physical interface while DS1 refers to the signaling. It may be common, but to be blunt, using them interchangeably is more telco slang than anything else.


Sorry, I don't know who posted the original comment here; it wasn't signed.

I see your point, but many companies (Avaya being one of them) use DS1 as their interface indication; it's not so much slang as the context in which it is used.

What I do have a problem with are the pictures in this article; they are very confusing to non-telco people. Please provide a better description of them within the article, or find better, easier-to-understand pictures. Also, would someone mind adding an alarm section, or referencing an article on the various alarms (red, yellow, blue)?

Thanks!

63.87.170.72 17:28, 7 February 2006 (UTC)

Physical Medium

What does the physical medium look like? I think it's good to have a distinct photo of the cross section and terminator jacks, male and female sides. Over what maximum distance can the signal be transmitted through such cables?

(response) I agree it would be good to include the physical medium information, but maybe under the "physical medium" link. The answer is pretty complex, since T1 as the end user knows it is probably most often an RJ-45 jack and ALBO (Automatic Line Build Out) levels. Within communication sites it is more often DSX-1 style and level. There is also a lot of DB-15 DSX level interface today.

The RJ-45 interface really should be identified in the same location as the physical medium for 56 Kbps DDS and (n)base-T, since the pinouts are coordinated with those services. Does that sound right?

Flagmichael 01:49, 6 March 2007 (UTC)

Formal Tone

22-Aug-2006: This important, technical article (about common Internet technology DS1) was quickly reworded in formal tone by removing "you", replacing the pronouns them/they with nouns, replacing "where as" with "whereas", and using formal grammar and punctuation. The Notes section was added to cite sources in "<ref>" tag footnotes, per Wikipedia:Guide_to_layout (the "References" section is more tedious, requiring alphabetical order by author). The alternate abbreviation "T-1" was changed to match "T1" wherever used. The diagram errors in Figure 2 ("DS1 SF Frame") were image-edited/uploaded to correct the final Terminal bits as "01" + new caption. Beware of other grammar or punctuation problems still in this complex article. -Wikid77 18:33, 22 August 2006 (UTC)

Robbed bit signaling and digital cross-connection

I think it would be good to add a sentence to the SF Bit Robbing paragraph noting that since CAS is frame-specific, cross-connections at the digital level with equipment known as a Digital Access and Cross-connect System (which see) will lose the signaling bits unless properly handled. Flagmichael 02:00, 6 March 2007 (UTC)

Connectivity and Alarms

I added this content today. As I am new to Wikipedia editing, feel free to fix it up as necessary!

Flagmichael 18:28, 25 March 2007 (UTC)

Generally known N-carrier

Anybody care to start an article on that topic? Jim.henderson 03:20, 22 September 2007 (UTC)

DS1 vs DSL

Ignoring anything but the technology, DSL seems to get higher bandwidth per line than DS1, by a very large factor; yet I still see people talking about and using DS1. What's the tradeoff? Range? Anaholic (talk) 15:20, 19 February 2008 (UTC)

(Answer part 1 of 4) There are quite a few trade-offs when choosing DS1 over DSL. I will touch on one that is of importance to a majority of users. The first factor when determining whether DS1 is worth the investment is having dedicated Internet service. While it may have been the practice of some providers to split up DS1 channels among multiple customer accounts, say, around 10-15 years ago (and some Tier 3 service providers probably still do), the largest and most easily recognized providers, specifically all Tier 1 providers in the U.S., sell their DS1 circuits to end-user customers in order to provide dedicated Internet service, which means that is exactly what the end-user is provided.
Going on the assumption that the DS1 was provisioned for full circuit access and port speed, the end user would have dedicated bandwidth of 1.536 Mbps downstream and 1.536 Mbps upstream as well. Now, I know you might be thinking it's a full 1.544 Mbps, but as another gentleman pointed out on this talk page, that is not technically accurate, so I will move on.
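
For reference, here is the arithmetic behind those two figures, as a minimal sketch in Python (the constants are just the standard DS1 frame parameters, nothing specific to any provider):

    # Standard DS1 frame math: 24 timeslots of 8 bits plus 1 framing bit,
    # repeated 8000 times per second (once per 125 us sampling period).
    FRAME_RATE = 8000
    CHANNELS = 24
    BITS_PER_CHANNEL = 8

    payload_bits = CHANNELS * BITS_PER_CHANNEL   # 192 bits of user data
    frame_bits = payload_bits + 1                # 193 bits on the wire

    print(frame_bits * FRAME_RATE)    # 1544000 -> 1.544 Mbps line rate
    print(payload_bits * FRAME_RATE)  # 1536000 -> 1.536 Mbps usable payload

The 8 kbps difference is the framing overhead, which is why the usable figure comes out to 1.536 Mbps.
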
Obviously, having dedicated Internet service for your business or even home has a significant tradeoff benefit, because no matter how much bandwidth your DSL line may have, it is still a shared Internet service in which all of the surrounding DSL customers share that higher bandwidth with you. Just think about how slow your DSL can become when you're on it at home around 5:00 PM. That's due to all of the other users coming home from work and jumping online. And in a commercial area where the DSL nodes maintain a majority of commercial accounts as well, you'll notice an inconsistency in speed. Everyone with DSL in your area with the same provider is competing for the same bandwidth at the same time.
The best analogy I can give is to imagine your DSL as the four-lane main highway during rush hour. Sure, those lanes give you an exorbitant amount of width, but it's evening rush hour, so it is going to take longer for your packets of data to get where they need to go. And let's not forget that TCP/IP is a very tricky protocol. Each packet of data you send has to be assembled in a specified order. So, say the lane data packet three is in suddenly becomes clear for a few miles. Well, data packets one and two are still stuck in their lanes, so data packet three's gain is useless. The packets must be assembled in an exact order or your data would be of no use to anyone. So, you must sit and wait.
But notice the HOV lane is open, and only one car is allowed on. This car may have just one lane rather than four, but it is his own personal highway, with all of his data packets along for the ride, waiting to be assembled in exact order. It's a good thing too, because all the end-users will be sending and receiving packets over the Internet all day long. --24.127.70.9 (talk) 10:19, 9 July 2010 (UTC)

T-span

A T-span is a portion of the T1 physical wiring between two central offices. Multiple spans are connected together to complete the end-to-end connection. Repeaters are required between two spans. Spans are typically built to support 25 T1 lines due to cable and repeater case design. The term span can also refer to the full complement of the 25 individual spans. The spans may or may not be "equipped"; that is to say, the cabling may be in place and connected to the repeater cases with the repeaters installed, or the repeater cases may be (partially) empty. Obviously, an individual complete (end-to-end) span must be equipped to support a working T1. These concepts are not well known as they are only relevant to T1 service providers. Also, spans are becoming less common because T1s are typically provisioned long distance on higher level fiber based systems such as SONET, or as a pseudo-wire service on an IP network. Bellhead (talk) 17:57, 2 January 2009 (UTC)

Oversights

I'm also a first-time editor/contributor. There appear to be several oversights in figure 2, which explains the DS-1 framing format.

Figure 2 seems to have reversed the identification of the terminal framing and signal framing bits.

The odd frames in figure 2 (pink) appear to be mis-identified as the signal framing bits (should be terminal framing).

The even frames in figure 2 (tan/orange) appear to be mis-identified as the terminal framing bits (should be signal framing).

Also, the bit pattern used in figure 2 for the even frames appears to be incorrect. The signal framing bit pattern in the text and in other sources is "001110", i.e., the signal frame is identified by the transition from 0 to 1 and from 1 to 0.
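
To make the interleaving concrete, here is a minimal Python sketch, assuming the standard D4/SF assignments (Ft "101010" in the odd frames, Fs "001110" in the even frames):

    # Interleave the terminal (Ft) and signal (Fs) framing bits of the
    # 12-frame superframe: Ft rides in frames 1,3,5,7,9,11 and Fs in
    # frames 2,4,6,8,10,12.
    Ft = "101010"  # terminal framing bits
    Fs = "001110"  # signal framing bits

    superframe = "".join(t + s for t, s in zip(Ft, Fs))
    print(superframe)  # 100011011100, the composite 12-bit SF pattern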

Although this seems to be a pretty clear oversight, as a first time editor, I am reluctant to make any changes without hearing feedback from others. (I would also need advice on how to edit the .png image. If I understand correctly, .png images are bit maps, which would make editing more difficult (at least for me) than if the image were available in some other format.)

Lifsorg61 (talk) 22:02, 31 March 2009 (UTC)

For starters, don't top-edit talk pages; select "new section" instead when starting a new topic. — Dgtsyb (talk) 06:41, 1 April 2009 (UTC)

Isn't Figure 3 (ESF) very misleading? Shouldn't the Framing bits really be in column 1 (i.e. the left) and the data bits in 2-193? That way the CRC would be computed over the frame diagram from the first Framing bit (row 1, col 1) to the last data bit (row 24, col 193). As it is now, the CRC needs to be computed from row 1, col 193 to row "25", col 192. Having framing in column 1 would also make it consistent with G.704 sect. 2.1.3.1. (I think it would also get the robbed bits in the right place as well.) —Preceding unsigned comment added by 192.31.192.5 (talk) 19:22, 11 April 2010 (UTC)
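
As an aside on the CRC question, here is a minimal CRC-6 sketch (polynomial x^6 + x + 1, which is what ESF uses per G.704/T1.403). I'm assuming the usual convention that the bits are fed through the register in transmission order with the F-bit positions set to 1 before the division, which is exactly why the column ordering in the figure matters:

    # Bitwise CRC-6 shift register for polynomial x^6 + x + 1.
    def crc6(bits):
        reg = 0
        for b in bits:
            feedback = ((reg >> 5) & 1) ^ b
            reg = (reg << 1) & 0x3F
            if feedback:
                reg ^= 0x03  # the x + 1 feedback taps
        return reg

    # One 4632-bit extended superframe (24 frames x 193 bits), filled
    # with zeros here purely as a placeholder payload.
    esf = [0] * (24 * 193)
    print(format(crc6(esf), '06b'))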

Inband T1 versus T1 PRI

The PRI versus inband T1 better/worse argument is a moot point. PRIs weren’t designed to be “faster” than inband T1s. (In fact, for a single T1, it is arguably not true, based largely on the average call holding time used in the calculations, which can vary widely depending on application. Do the math.) They were designed to solve other issues.

PRIs are a customer, voice service, bulk delivery mechanism. They are not used internally by telephone companies. One of the original concerns with transporting call setup and other signaling information internally (with inband T1s) was speed, but there were many others, notably security. These issues were addressed internally with SS7. Within the telephone company, SS7 supports T1s that are neither PRI nor inband T1s. There are various types of SS7 links, (very) loosely comparable to the PRI D channel, that carry the signaling information related to the voice channels of these T1s. This was a very effective speed and security solution for the phone company. However, it is financially inefficient for a customer to buy SS7 links. A PRI places the voice channels and signaling information in a nice package that is designed to connect directly to the customer’s PBX.

The details supporting the above information are outside the encyclopedic scope appropriate for this article. Read the SS7 and PRI articles for some additional insight.

Bellhead (talk) 15:51, 21 April 2009 (UTC)

Yes, PRI was designed to be faster at call setup than T1. The critical metric this addressed was post-dial delay: a key quality of service parameter for the phone network. I have done the math many times. When the service time for call setup is shorter, the Erlang C blocking is less. For T1 the trunk is seized and held for the entire register-to-register signalling procedure, whereas for PRI the channel is not assigned until setup is almost complete. Studies at the phone company showed 5 to 15% improvement in maximum call handling due to conversion from T1 to PRI.
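
To put a number on the Erlang C point, a sketch with made-up round figures (not Bell System data; 23 trunks stands in for the B channels of a single PRI):

    # Erlang C: probability an arriving call finds all trunks busy.
    from math import factorial

    def erlang_c(load, trunks):
        top = load ** trunks / factorial(trunks) * trunks / (trunks - load)
        bottom = sum(load ** k / factorial(k) for k in range(trunks)) + top
        return top / bottom

    CALLS_PER_HOUR = 300
    for setup_s in (12, 2):  # slow inband setup vs. fast D-channel setup
        hold_s = 180 + setup_s                    # talk time plus setup
        load = CALLS_PER_HOUR * hold_s / 3600.0   # offered load, erlangs
        print(setup_s, round(erlang_c(load, 23), 3))

Shaving the setup holding time lowers the offered load on the same 23 trunks, and the congestion probability drops with it.
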
Speed was one design objective of many. I agree with the 5 to 15% improvement overall for the network. I still disagree that a single PRI can handle more calls than an inband T1 - at least in many (most?/almost all?) situations. One T1 versus PRI was the tone of the earlier version of the article, and hence my objection. Bellhead (talk) 15:57, 22 April 2009 (UTC)
On the contrary, PRI was used internally by the phone company, dial access lines (DALs) being an example, and PRI (with and without proprietary extensions) to service platforms being another. OTOH SS7 was not intended for customer access. ISDN is the UNI and SS7 ISUP is the NNI. Also, on the contrary, SS7 has very poor security, as it was designed as a protocol internal to the telephone network where too much emphasis was (and still is) placed on physical security of outside and inside plant.
I should have focused on UNI/NNI vice "external/internal" Bellhead (talk) 15:57, 22 April 2009 (UTC)
I see that you removed CID as well. T1 supported CID with direct inward-outward dialing. The billing number was associated with the trunk and the PBX spilled CID instead of ANI (which the network verified using screening). I will reinsert the CID in the text, as the statement "if they are configured by the carrier to do so" was correct.
I take issue with the common usage of the term CallerID, or any other phrase used to refer to this specifically. Originally, the term referred to a service provided on POTS lines from the switch to the analog phone. Hence, it cannot be transported on a T1 (except as part of the DS0 stream extending the analog connection - i.e. it is not signalling information on the T1). For many, the term "CallerID" has come to mean "the calling number being presented to the called party". This greatly confuses discussion about the original service. Bellhead (talk) 15:57, 22 April 2009 (UTC)
Also, inband signalling (of dialed digits) does not increase susceptibility to fraudulent activity. It was the older SF facilities, which used a tone for line signalling rather than the A&B bit line signalling of T1 MF (R2) signalling, that increased susceptibility to fraudulent activity (blue box fraud). The CAS signalling bits and MF (or in many cases DP) tones are no less accessible than the D channel.
In general, the article as it stands is extremely poor. DS-1 was an electrical interface for inside plant that originally differed (in electrical characteristics) from that of T1 for PCM repeater sections and transmission equipment (outside plant). With the advent and maturing of digital switching (T1 for transmission long pre-dates digital switching), the electrical interface specifications for T1 and DS-1 converged on the single characteristic used today (also called E11 in G.703). This article could be collapsed into several paragraphs with citations indicating this, moving all the T1 and PRI information to those articles. — Dgtsyb (talk) 21:11, 21 April 2009 (UTC)
I agree with you about the article overall. At least a comment is made early on about T1 versus DS1 originally being distinctly different even though they are now used interchangeably (and hence frequently "incorrectly"). Bellhead (talk) 15:57, 22 April 2009 (UTC)

Thanks for taking the time to respond. Your explanations are much more precise than mine as I was trying to generalize for a less technical audience. I agree with most of your input but have provided some specific comments above. Bellhead (talk) 15:57, 22 April 2009 (UTC)

no one cares about wimax

in the "alternatives" section there is a long commercial for wimax... that should be deleted! —Preceding unsigned comment added by 160.39.129.187 (talk) 08:45, 31 July 2009 (UTC)

Difference between T1 and DS1

A T1 is a bipolar signal, which is at approximately 772 kHz. This is the signal that will span distances, to the customer for example.

A DS1 is a unipolar signal, which is at approximately 1.544 MHz; this signal suffers more attenuation and is usually internal, between equipment. —Preceding unsigned comment added by 24.244.73.150 (talk) 05:21, 26 November 2009 (UTC)
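
If it helps make the bipolar/772 kHz connection concrete, here is a sketch of AMI line coding, which I assume is what the bipolar description above refers to: successive marks alternate polarity, so the fastest repeating line pattern spans two bit periods, putting the energy peak near half the 1.544 Mbit/s bit rate, i.e. around 772 kHz:

    # AMI (alternate mark inversion): marks alternate +1/-1, spaces are 0.
    def ami_encode(bits):
        level, out = 1, []
        for b in bits:
            if b:
                out.append(level)  # send the mark at the current polarity
                level = -level     # flip polarity for the next mark
            else:
                out.append(0)      # spaces carry no pulse
        return out

    print(ami_encode([1, 1, 0, 1, 0, 1]))  # [1, -1, 0, 1, 0, -1]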

The T designation stands for Terrestrial carrier. Earth bound, not satellite to ground.

Since voice is roughly 4 kHz, and sampled at twice that thanks to Nyquist, the 8 kHz sampling rate works out to 125 microseconds per frame. When trying to do TDM when T1 was invented, the only method at the time was tubes. A tube can switch in roughly 5.2 microseconds: 125 / 5.2 = 24 voice channels. Europe waited for the transistor and thus the binary 2^5 = 32. —Preceding unsigned comment added by 24.244.73.150 (talk) 05:27, 26 November 2009 (UTC)
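
Whatever the merits of the tube story (disputed below), the timing arithmetic in that comment works out as follows:

    # Nyquist sampling of a ~4 kHz voice band and the per-slot budget.
    SAMPLE_RATE = 2 * 4000                # 8000 samples per second
    frame_period_us = 1e6 / SAMPLE_RATE   # 125 us between samples
    slot_us = frame_period_us / 24        # time budget per channel slot

    print(frame_period_us)    # 125.0
    print(round(slot_us, 2))  # 5.21, the ~5.2 us figure cited above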

No, no, no, and no. All CXR was terrestrial at the time. The letter was assigned the same as the earlier J, K, L, N, and S were assigned. I assume the intermediate letters went to systems not deployed. Tubes in other systems at the time were switching faster than transistors (nanoseconds), but T1 upon introduction was all transistor. Jim.henderson (talk) 02:03, 28 November 2009 (UTC)

There is a difference between when you used it, as a finished product, and when T1 was invented. —Preceding unsigned comment added by 24.244.73.150 (talk) 23:41, 28 November 2009 (UTC)

Yes, experimental systems in Europe and elsewhere used a coding tube, but no, the Bell Labs T1 never at any time used tubes. Bit rates of all commercialized systems using PCM on balanced pairs were limited by attenuation; all used active devices that could have switched much faster had there been an inexpensive way to move the heavier bitstreams to the other end. European PCM used higher bitrates because their balanced pair repeater spacings were closer, which in turn was because the loading coil cases were closer. Each was tailored to the available transmission medium, not to the device technology. Jim.henderson (talk) 22:23, 30 November 2009 (UTC)

well... How did they come up with 24? This isn't a power of 2. The first prototype, as I was told, was designed by Bell Labs with tubes, as they were more readily available at that time. As the transistor improved, the choice to use transistors in production was no longer a reliability or a cost issue. I could be wrong, but I have yet to hear any other answer for how the number 24 was devised other than how fast a tube could switch in the prototype. —Preceding unsigned comment added by 174.6.226.199 (talk) 01:35, 3 December 2009 (UTC)

The 24-channel T1 carrier was based on the 12-channel N1 carrier (FDM) and 24-channel N3 carrier systems it was intended to replace. Higher order FDM multiplex equipment necessitated multiples of 12 until it too was replaced by T3 carrier and higher order systems. See Frequency-division multiplexing and 12-channel carrier system. — Dgtsyb (talk) 14:01, 3 December 2009 (UTC)

Sounds like a typical case of "the Law of the handicap of a head start" to me. Mahjongg (talk) 00:32, 4 December 2009 (UTC)

And TV had been using a 45 MHz Intermediate frequency for twenty years, switching in ten nanoseconds, or twelve thousand times shorter than the alleged 125 microseconds limit. Someone has been offering uninformed speculation in the guise of historical fact. Jim.henderson (talk) 23:48, 5 December 2009 (UTC)

Technical

I tagged this article as too technical. The reason I did so is that I was looking for information on the internet connection. T1, naturally, goes to a disambig page, so I guessed that this was it. Unfortunately, the introduction to the article was so technical that I still am not sure if it's the article I'm looking for. Rather than the intro focusing on how it works, it should focus on what it does (that is, provide broadband connections, or phone lines, or something). That would help immensely. D O N D E groovily Talk to me 00:31, 16 January 2012 (UTC)

Not Too Technical, Just A Bit Jumbled

I was a Member of Technical Staff at Bell Telephone Laboratories, Holmdel, NJ, and part of the T1 Transmission Systems Engineering Group from 1976 to 1984. There are a number of acronyms and definitions being confused with each other on the main Article page. I can imagine the potential for confusion. Will be willing to help a little here and there. [User:tballister]

Definitely too technical. The first sentence says it's "a T-carrier signaling scheme devised by Bell Labs" - nearly all of it technical language. As a technician, you know what T-carrier means, but the rest of us don't. The article needs to start non-technically (for example, "Digital Signal 1 is a system for transmitting data along traditional phone lines" or something like that). I don't know if that preceding statement is accurate, but that's the type of language it needs to use. Ego White Tray (talk) 02:01, 8 January 2013 (UTC)

Digital Signal 1 technicality

Hi. Your comments about the technicality of the DS-1 article have really challenged me. I've been studying the organization of this page and its related topics and would really like to help. I spent a number of years engineering T1 at Bell Telephone Laboratories, was published in the Bell System Technical Journal along with some of my colleagues, and I think you're right; this page needs help. There is a significant amount of misrepresentation as well as disorganization.

  • T-carrier - a group of systems that carry information on copper.
  • T1 - formally (within Bell Labs) "Transmission System 1", and just one of the T-carrier systems that were designed and deployed.
  • Digital Signal Hierarchy - a group of signal formats for the information sent on the copper, referred to as levels.
  • DS-1 - just one of the Digital Signal levels.

I've arrived now at several organizational changes that I think will be appropriate and helpful. But the biggest challenge is the one you've put forward, namely to make this readable to the non-technical user. On one hand I'd argue that it isn't too technical, because this page is organized within the "Information Technology Portal", and by definition technical content generally targets a technical audience. Also, I'd argue that it isn't necessarily the goal of any particular technical discussion to elaborate on the full spectrum of technological premises and foundational concepts required for full understanding. If every article did that, we'd all be re-writing the same background content over and over again. I mean, I wouldn't think it efficient for everyone who wanted to explain an algebraic operation to always start out with the rules of addition, subtraction, multiplication and division. Right?

On the other hand, as a technical writer for many years, I absolutely recognize numerous opportunities for setting the foundation in this article, generously sprinkled with backward references to other articles that explain the foundational concepts. As a few examples: why does Digital Signal 1 even exist? Where does it fit within the family of systems that make up the Digital Signal Hierarchy? Why does the hierarchy itself exist? Who invented it? What problem did it solve?

I guess what I'm saying is that I agree a basic stage has to be set, and this article doesn't do that well, nor do most of the related topics. So I'd like to try and help. But while my opinion is that the best way to do that is indeed, as you suggest, to begin with non-technical discussion, I feel that technical articles must by definition eventually get to the technical points. That means it must at some point begin to rely on technical terminology. But the best articles I've ever read are ones that start out with the fundamental concepts in as simple language as possible, and then naturally progress in terms of technological depth, layer by layer. I.e., an approach that ensures every reader understands the concepts from the beginning, but then leaves it up to the individual reader's background to decide how deeply they want to go.

My concern is that you won't agree with that. Would you let me know your thoughts? I would like to share my organizational recommendations and see if you concur. Tballister (talk) 19:14, 20 January 2013 (UTC)

Well, the way to start non-technical is to start with its purpose. What is it for, in terms of everyday tasks, like internet or multiple phone lines? Why might it be better or worse than similar technologies, in terms of speed and reliability? I do agree with your general approach, and the Wikipedia manual of style says the same - start with basic non-technical information and gradually increase complexity as you go. Special relativity is a good example that explains what it is from a non-technical standpoint and then adds complexity. Ego White Tray (talk) 04:50, 22 January 2013 (UTC)