This is an automatically saved page from 20.02.2013. The original was here: http://2ch.hk/b/res/43738676.html
The site a2ch.ru is not affiliated with the authors or the content of this page
complaints / abuse: admin@a2ch.ru

Wed 20 Feb 2013 04:51:12
So I went and took my sweetie's anal virginity :3

Time for a cool story. I'd wanted to do this for ages; I'd get wet at the mere thought of pegging him. And not in some rough, painful way, but gently, so he'd enjoy it together with me.

We never really discussed the subject, just traded jokes about it a couple of times. I'd actually been planning to do everything by the internet manuals: first a finger massaging his anus during a blowjob, then penetration, and only after over 9000 years the strapon.

In the end, one day I just couldn't hold out any longer and skipped all those steps :3 The strapon, by the way, I'd also bought from an online store in a moment of sexual haze, without his knowledge. A couple of times I got myself off like crazy while wearing it. Like clockwork. Purely from the fact that I had it on.

I drew my inspiration from a scene in pic related (scene 2 here http://www.xvideos.com/video1906679/femdom_-_anal_male_-_strapon_fm_-_strapon_chicks_-_bellas_bitches_2002_dvd_rip_divx5_femdom_slave ).
In the evening I sent my kun off to the shower, got things ready myself and stashed the lube next to the bed. When he came back, I slipped into the bathroom in turn, washed up and changed.

A top, a very short skirt, innocent stockings with garters, and braided pigtails. And the strapon on :3

I come out to him, and he can't see the strapon under the skirt. He's lying there smiling and looking at me. As if by chance, I started swaying my hips so he'd notice. His jaw dropped :3

I told him I'd prepared a surprise and put on my most innocent, pleading face. For a moment there I thought he'd throw me out of the apartment into the cold. In the end he took it fine, thank god.

So I rolled him onto his stomach and put a fairly large pillow under his hips to prop him up. I started with a massage. Neck, shoulders, back. Kneading, nibbling, brushing him with my chest. Working my way lower and lower. He was snuffling so sweetly and contentedly :3

Once he'd relaxed a little, I got out the lube, squeezed out quite a lot and warmed it in my hands. I began spreading it between his cheeks, and not straight onto the anus but from top to bottom, tracing with my fingers. I didn't skimp on the lube, which turned out to be exactly the right call. Eventually I slipped one slicked finger inside. A while later, carefully, two.

Only when it seemed to me there was enough lube (a whole lot, really :3) did I slick up the strapon itself and lean over him. With the same motion of my hips I ran the strapon from top to bottom. You could see he was astonished, and it was turning him on.

At some point he started pushing his hips back toward me on his own. Then I carefully took the strapon in my hand and gently, slowly, as tenderly as I could, slid inside. I made careful little back-and-forth strokes until I was sure the strapon was gliding well and moving the way it should. Then I started thrusting properly.

After a while he asked, panting, for me to speed up. You could tell he was genuinely into it. He was drenched in sweat :3 Sadly, after a while my strength gave out completely (how on earth do you guys keep fucking us without a break?).

I pulled out of him, turned him onto his back and, without asking, finished him off with my mouth. He didn't come hands-free, though I'd secretly been dearly hoping he would. We'll keep practicing.

So, in conclusion, my impressions. It was some kind of phenomenal madness in the best possible sense. During the whole thing I was honestly weeping with happiness. Actual tears falling (thank god he didn't see). He felt so utterly dear and beloved. The trust of it made my head spin: that I was doing such a thing with him.

While writing this I broke off twice for a quick fap :3 Hope it isn't too rambling.


Wed 20 Feb 2013 04:53:45
>>43738676
and so the thread got coated in the greenish film of everyday bait...

Wed 20 Feb 2013 04:54:41
>>43738717
Nope, it's all true.

Wed 20 Feb 2013 04:55:16
Death. Whore. Trash. Summoning sage-kun.

Wed 20 Feb 2013 04:56:11
>>43738676
congrats. your boyfriend is a punk now.

Wed 20 Feb 2013 04:56:34
>>43738751
Just sharing :)

Wed 20 Feb 2013 04:56:34
>>43738737
sweetheart, I shouldn't have to remind you what you are without proofs
yes, it's the night thread, and even so

Wed 20 Feb 2013 04:57:36
>>43738774
Too lazy to get up.

Wed 20 Feb 2013 04:57:54
>>43738772
Share your thoughts on personality, on man, on the perception of the world and the world as such. What is all this?

Wed 20 Feb 2013 04:58:14
I could sage by hand if anyone joins in, it's boring on my own, cuties

Wed 20 Feb 2013 04:58:30
>>43738791
get up for what exactly, troll?
you're all too transparent

Wed 20 Feb 2013 04:59:38
>>43738805
Let's talk about the quantum world instead.

Wed 20 Feb 2013 04:59:53
>>43738797
Your personality is banal and uninteresting. Someone saging on the night thread is surely a fat loser with no job and no education.
How's that for thoughts?

Wed 20 Feb 2013 05:02:40
If you're really a girl, I'd do you proper in honor of the night thread, but god knows I have to go to bed, since I'm up early tomorrow morning and will need to jerk off, otherwise thoughts of girls fill my head.
This will all pass.

>>43738811

Wed 20 Feb 2013 05:03:23
>>43738676
what were you expecting?

Wed 20 Feb 2013 05:03:28
>>43738676
And did you give him an enema first, or were you just kneading the clay as-is?

Wed 20 Feb 2013 05:04:01
>>43738827
Go on then: what did you feel when you learned about Hawking radiation? For me it opened up... well, it's hard to say what exactly; I suppose I more deeply grasped the fact that the universe is endless in its dance of birth and death. I'm typing from my phone, so slowly and without walls of text, since we're having a conversation.

Wed 20 Feb 2013 05:04:12
Well great, now I'm hard. I feel like a fag now.

Wed 20 Feb 2013 05:05:01
>>43738791
Is your schoolbag packed?

Wed 20 Feb 2013 05:05:34
>>43738830
How right you are. A fat loser with no job and no education.
What is consciousness? There was a thread about it recently, got me thinking.
Which theories of how that phenomenon arises do you lean toward?
And one more, I suppose: free will. Does it exist? Down to what level of processes and thoughts is it possible?

Срд 20 Фев 2013 05:06:14
ТРАЛЛ дохуя ? Ни одного интересного треда.

Wed 20 Feb 2013 05:07:08
the kun was just trying to please her; there was definitely no pleasure in it for him, if only because guys can't feel anal pleasure at all, even if you massage the prostate right through the ass with a strapon, the object is too big.

by the way, gays don't get much sexual pleasure from getting fucked in the ass either, it's more of a mental thing, empathy toward the top.

Wed 20 Feb 2013 05:08:14
>>43738676

thanks for the interesting story!

Wed 20 Feb 2013 05:09:13
>>43738676
Gnilozubka, is that you?

Wed 20 Feb 2013 05:10:01
>>43738954
>gays don't get much sexual pleasure from getting fucked in the ass either
Where's that information from? Been on the receiving end yourself, have you?

Wed 20 Feb 2013 05:10:17
>>43738900
It changed my picture of the world and of entropy in a certain way. Of course I always knew black holes radiate, from the moment I first learned about them. They can't not radiate something, simply because otherwise it would lead to a quick death of the universe from that entropy.
A strange feeling: it's as if I learned nothing new, I only became convinced that there will never be an end.
I wonder, once all the stars burn out, will whatever man becomes be able to create matter and worlds?

Wed 20 Feb 2013 05:11:00
>>43738954
mmm, so somebody just bent you over behind a corner once, and now, like on a hot frying pan, you're squirming to derive some kind of wrong, corrupted truth.
anal stimulation
butt stimulation
pussy
hid
pussy
opened

Wed 20 Feb 2013 05:11:26
>>43738954
Hello, armchair expert. According to my data, you can come from prostate stimulation. Get lost, schoolkid.

Wed 20 Feb 2013 05:11:38
>>43738676
Fuck off, whore.

Wed 20 Feb 2013 05:12:10
>>43738996
that's obvious. first of all, the prostate doesn't act directly on the penis, and you can only make someone come with targeted massage, and definitely not with a strapon, more likely with a finger.

it's like fucking an armpit: there are no nerve endings there like on the dick.

Срд 20 Фев 2013 05:14:57
ОП, мне похуй, жирный ты или правда няшила куна, но у меня встал, спасибо и на том :3

Wed 20 Feb 2013 05:15:14
>>43739000
The universe won't go dark, the energy isn't going anywhere. Another question is that it will end up infinitely far apart.. who knows, maybe man will figure out what the big bang is and manage to recreate one. Though more likely he'll build the iPhone 10 and die in a nuclear hell

Срд 20 Фев 2013 05:15:25
за сколько денег в дс2 выебут страпоном?

Wed 20 Feb 2013 05:15:49
LXR linux/include/asm-generic/atomic-long.h

Code search: atomic_long_sub_and_test
Function
include/asm-generic/atomic-long.h, line 69 [usage...]
include/asm-generic/atomic-long.h, line 186 [usage...]
1#ifndef _ASM_GENERIC_ATOMIC_LONG_H
2#define _ASM_GENERIC_ATOMIC_LONG_H
3/*
4 * Copyright (C) 2005 Silicon Graphics, Inc.
5 * Christoph Lameter
6 *
7 * Allows to provide arch independent atomic definitions without the need to
8 * edit all arch specific atomic.h files.
9 */
10
11#include <asm/types.h>

>>43738954
>guys can't feel anal pleasure at all

and this guy in the gif, is he coming from the holy spirit then? what an asshole you are. go get fucked in the ass first, then talk.

Wed 20 Feb 2013 05:17:44
>>43739082
holy shit, I'm losing it at this wiper

Wed 20 Feb 2013 05:18:36
LXR linux/include/asm-generic/mutex-dec.h

Code search: atomic_long_sub_and_test
Function
include/asm-generic/atomic-long.h, line 69 [usage...]
include/asm-generic/atomic-long.h, line 186 [usage...]
1/*
2 * include/asm-generic/mutex-dec.h
3 *
4 * Generic implementation of the mutex fastpath, based on atomic
5 * decrement/increment.
6 */
7#ifndef _ASM_GENERIC_MUTEX_DEC_H
8#define _ASM_GENERIC_MUTEX_DEC_H
9
10/**
11 * __mutex_fastpath_lock - try to take the lock by moving the count
12 * from 1 to a 0 value
13 * @count: pointer of type atomic_t
14 * @fail_fn: function to call if the original value was not 1
15 *
16 * Change the count from 1 to a value lower than 1, and call <fail_fn> if
17 * it wasn't 1 originally. This function MUST leave the value lower than
18 * 1 even when the "1" assertion wasn't true.
19 */
20static inline void
21__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
22{
23 if (unlikely(atomic_dec_return(count) < 0))
24 fail_fn(count);
25}
26
27/**
28 * __mutex_fastpath_lock_retval - try to take the lock by moving the count
29 * from 1 to a 0 value
30 * @count: pointer of type atomic_t
31 * @fail_fn: function to call if the original value was not 1
32 *
33 * Change the count from 1 to a value lower than 1, and call <fail_fn> if
34 * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
35 * or anything the slow path function returns.
36 */
37static inline int
38__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
39{
40 if (unlikely(atomic_dec_return(count) < 0))
41 return fail_fn(count);
42 return 0;
43}
44
45/**
46 * __mutex_fastpath_unlock - try to promote the count from 0 to 1
47 * @count: pointer of type atomic_t
48 * @fail_fn: function to call if the original value was not 0
49 *
50 * Try to promote the count from 0 to 1. If it wasn't 0, call <fail_fn>.
51 * In the failure case, this function is allowed to either set the value to
52 * 1, or to set it to a value lower than 1.
53 *
54 * If the implementation sets it to a value of lower than 1, then the
55 * __mutex_slowpath_needs_to_unlock() macro needs to return 1, it needs
56 * to return 0 otherwise.
57 */
58static inline void
59__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
60{
61 if (unlikely(atomic_inc_return(count) <= 0))
62 fail_fn(count);
63}
64
65#define __mutex_slowpath_needs_to_unlock() 1
66
67/**
68 * __mutex_fastpath_trylock - try to acquire the mutex, without waiting
69 *
70 * @count: pointer of type atomic_t
71 * @fail_fn: fallback function
72 *
73 * Change the count from 1 to a value lower than 1, and return 0 (failure)
74 * if it wasn't 1 originally, or return 1 (success) otherwise. This function
75 * MUST leave the value lower than 1 even when the "1" assertion wasn't true.
76 * Additionally, if the value was < 0 originally, this function must not leave
77 * it to 0 on failure.
78 *
79 * If the architecture has no effective trylock variant, it should call the
80 * <fail_fn> spinlock-based trylock variant unconditionally.
81 */
82static inline int
83__mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
84{
85 if (likely(atomic_cmpxchg(count, 1, 0) == 1))
86 return 1;
87 return 0;
88}
89
90#endif
91

Wed 20 Feb 2013 05:18:38
OP is a dick and a fat troll. Letting yourself be strapon'd means being an omega.

Wed 20 Feb 2013 05:18:44
>>43739106
seconding this one

a-lover-of-taking-it-under-the-tail

Wed 20 Feb 2013 05:19:23
>>43739127
Wiper, you're a sweetie, your mom's a cutie. No one's ever wiped with material like this before.

Срд 20 Фев 2013 05:19:34
У вайпера знатно бомбит пердак!

Wed 20 Feb 2013 05:20:01
LXR linux/include/asm-generic/rtc.h

Code search: atomic_long_sub_and_test
Function
include/asm-generic/atomic-long.h, line 69 [usage...]
include/asm-generic/atomic-long.h, line 186 [usage...]
1/*
2 * include/asm-generic/rtc.h
3 *
4 * Author: Tom Rini <trini@mvista.com>
5 *
6 * Based on:
7 * drivers/char/rtc.c
8 *
9 * Please read the COPYING file for all license details.
10 */
11
12#ifndef ASM_RTC_H
13#define ASM_RTC_H
14
15#include <linux/mc146818rtc.h>
16#include <linux/rtc.h>
17#include <linux/bcd.h>
18#include <linux/delay.h>
19
20#define RTC_PIE 0x40 /* periodic interrupt enable */
21#define RTC_AIE 0x20 /* alarm interrupt enable */
22#define RTC_UIE 0x10 /* update-finished interrupt enable */
23
24/* some dummy definitions */
25#define RTC_BATT_BAD 0x100 /* battery bad */
26#define RTC_SQWE 0x08 /* enable square-wave output */
27#define RTC_DM_BINARY 0x04 /* all time/date values are BCD if clear */
28#define RTC_24H 0x02 /* 24 hour mode - else hours bit 7 means pm */
29#define RTC_DST_EN 0x01 /* auto switch DST - works f. USA only */
30
31/*
32 * Returns true if a clock update is in progress
33 */
34static inline unsigned char rtc_is_updating(void)
35{
36 unsigned char uip;
37 unsigned long flags;
38
39 spin_lock_irqsave(&rtc_lock, flags);
40 uip = (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP);
41 spin_unlock_irqrestore(&rtc_lock, flags);
42 return uip;
43}
44
45static inline unsigned int __get_rtc_time(struct rtc_time *time)
46{
47 unsigned char ctrl;
48 unsigned long flags;
49
50#ifdef CONFIG_MACH_DECSTATION
51 unsigned int real_year;
52#endif
53
54 /*
55 * read RTC once any update in progress is done. The update
56 * can take just over 2ms. We wait 20ms. There is no need to
57 * to poll-wait (up to 1s - eeccch) for the falling edge of RTC_UIP.
58 * If you need to know exactly when a second has started, enable
59 * periodic update complete interrupts, (via ioctl) and then
60 * immediately read /dev/rtc which will block until you get the IRQ.
61 * Once the read clears, read the RTC time (again via ioctl). Easy.
62 */
63 if (rtc_is_updating())
64 mdelay(20);
65
66 /*
67 * Only the values that we read from the RTC are set. We leave
68 * tm_wday, tm_yday and tm_isdst untouched. Even though the
69 * RTC has RTC_DAY_OF_WEEK, we ignore it, as it is only updated
70 * by the RTC when initially set to a non-zero value.
71 */
72 spin_lock_irqsave(&rtc_lock, flags);
73 time->tm_sec = CMOS_READ(RTC_SECONDS);
74 time->tm_min = CMOS_READ(RTC_MINUTES);
75 time->tm_hour = CMOS_READ(RTC_HOURS);
76 time->tm_mday = CMOS_READ(RTC_DAY_OF_MONTH);
77 time->tm_mon = CMOS_READ(RTC_MONTH);
78 time->tm_year = CMOS_READ(RTC_YEAR);
79#ifdef CONFIG_MACH_DECSTATION
80 real_year = CMOS_READ(RTC_DEC_YEAR);
81#endif
82 ctrl = CMOS_READ(RTC_CONTROL);
83 spin_unlock_irqrestore(&rtc_lock, flags);
84
85 if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
86 {
87 time->tm_sec = bcd2bin(time->tm_sec);
88 time->tm_min = bcd2bin(time->tm_min);
89 time->tm_hour = bcd2bin(time->tm_hour);
90 time->tm_mday = bcd2bin(time->tm_mday);
91 time->tm_mon = bcd2bin(time->tm_mon);
92 time->tm_year = bcd2bin(time->tm_year);
93 }
94
95#ifdef CONFIG_MACH_DECSTATION
96 time->tm_year += real_year - 72;
97#endif
98
99 /*
100 * Account for differences between how the RTC uses the values
101 * and how they are defined in a struct rtc_time;
102 */
103 if (time->tm_year <= 69)
104 time->tm_year += 100;
105
106 time->tm_mon--;
107
108 return RTC_24H;
109}
110
111#ifndef get_rtc_time
112#define get_rtc_time __get_rtc_time
113#endif
114
115/* Set the current date and time in the real time clock. */
116static inline int __set_rtc_time(struct rtc_time *time)
117{
118 unsigned long flags;
119 unsigned char mon, day, hrs, min, sec;
120 unsigned char save_control, save_freq_select;
121 unsigned int yrs;
122#ifdef CONFIG_MACH_DECSTATION
123 unsigned int real_yrs, leap_yr;
124#endif
125
126 yrs = time->tm_year;
127 mon = time->tm_mon + 1; /* tm_mon starts at zero */
128 day = time->tm_mday;
129 hrs = time->tm_hour;
130 min = time->tm_min;
131 sec = time->tm_sec;
132
133 if (yrs > 255) /* They are unsigned */
134 return -EINVAL;
135
136 spin_lock_irqsave(&rtc_lock, flags);
137#ifdef CONFIG_MACH_DECSTATION
138 real_yrs = yrs;
139 leap_yr = ((!((yrs + 1900) % 4) && ((yrs + 1900) % 100))
140 || !((yrs + 1900) % 400));
141 yrs = 72;
142
143 /*
144 * We want to keep the year set to 73 until March
145 * for non-leap years, so that Feb, 29th is handled
146 * correctly.
147 */
148 if (!leap_yr && mon < 3) {
149 real_yrs--;
150 yrs = 73;
151 }
152#endif
153 /* These limits and adjustments are independent of
154 * whether the chip is in binary mode or not.
155 */
156 if (yrs > 169) {
157 spin_unlock_irqrestore(&rtc_lock, flags);
158 return -EINVAL;
159 }
160
161 if (yrs >= 100)
162 yrs -= 100;
163
164 if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY)
165 || RTC_ALWAYS_BCD) {
166 sec = bin2bcd(sec);
167 min = bin2bcd(min);
168 hrs = bin2bcd(hrs);
169 day = bin2bcd(day);
170 mon = bin2bcd(mon);
171 yrs = bin2bcd(yrs);
172 }
173
174 save_control = CMOS_READ(RTC_CONTROL);
175 CMOS_WRITE((save_control | RTC_SET), RTC_CONTROL);
176 save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
177 CMOS_WRITE((save_freq_select | RTC_DIV_RESET2), RTC_FREQ_SELECT);
178
179#ifdef CONFIG_MACH_DECSTATION
180 CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
181#endif
182 CMOS_WRITE(yrs, RTC_YEAR);
183 CMOS_WRITE(mon, RTC_MONTH);
184 CMOS_WRITE(day, RTC_DAY_OF_MONTH);
185 CMOS_WRITE(hrs, RTC_HOURS);
186 CMOS_WRITE(min, RTC_MINUTES);
187 CMOS_WRITE(sec, RTC_SECONDS);
188
189 CMOS_WRITE(save_control, RTC_CONTROL);
190 CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
191
192 spin_unlock_irqrestore(&rtc_lock, flags);
193
194 return 0;
195}
196
197#ifndef set_rtc_time
198#define set_rtc_time __set_rtc_time
199#endif
200
201static inline unsigned int get_rtc_ss(void)
202{
203 struct rtc_time h;
204
205 get_rtc_time(&h);
206 return h.tm_sec;
207}
208
209static inline int get_rtc_pll(struct rtc_pll_info *pll)
210{
211 return -EINVAL;
212}
213static inline int set_rtc_pll(struct rtc_pll_info *pll)
214{
215 return -EINVAL;
216}
217
218#endif /* ASM_RTC_H */
219

Wed 20 Feb 2013 05:20:15
>>43739106
or this one.

one of the strongest orgasms I've ever had was from prostate massage combined with fapping. heavenly bliss. a girl could never give you that.

by the way, guys also give better blowjobs than girls do.

Wed 20 Feb 2013 05:20:41
>>43739030
>there are no nerve endings there
What a dunce you are.

Wed 20 Feb 2013 05:21:32
1/*
2 * Access to user system call parameters and results
3 *
4 * Copyright (C) 2008-2009 Red Hat, Inc. All rights reserved.
5 *
6 * This copyrighted material is made available to anyone wishing to use,
7 * modify, copy, or redistribute it subject to the terms and conditions
8 * of the GNU General Public License v.2.
9 *
10 * This file is a stub providing documentation for what functions
11 * asm-ARCH/syscall.h files need to define. Most arch definitions
12 * will be simple inlines.
13 *
14 * All of these functions expect to be called with no locks,
15 * and only when the caller is sure that the task of interest
16 * cannot return to user mode while we are looking at it.
17 */
18
19#ifndef _ASM_SYSCALL_H
20#define _ASM_SYSCALL_H 1
21
22struct task_struct;
23struct pt_regs;
24
25/**
26 * syscall_get_nr - find what system call a task is executing
27 * @task: task of interest, must be blocked
28 * @regs: task_pt_regs() of @task
29 *
30 * If @task is executing a system call or is at system call
31 * tracing about to attempt one, returns the system call number.
32 * If @task is not executing a system call, i.e. it's blocked
33 * inside the kernel for a fault or signal, returns -1.
34 *
35 * Note this returns int even on 64-bit machines. Only 32 bits of
36 * system call number can be meaningful. If the actual arch value
37 * is 64 bits, this truncates to 32 bits so 0xffffffff means -1.
38 *
39 * It's only valid to call this when @task is known to be blocked.
40 */
41int syscall_get_nr(struct task_struct *task, struct pt_regs *regs);
42
43/**
44 * syscall_rollback - roll back registers after an aborted system call
45 * @task: task of interest, must be in system call exit tracing
46 * @regs: task_pt_regs() of @task
47 *
48 * It's only valid to call this when @task is stopped for system
49 * call exit tracing (due to TIF_SYSCALL_TRACE or TIF_SYSCALL_AUDIT),
50 * after tracehook_report_syscall_entry() returned nonzero to prevent
51 * the system call from taking place.
52 *
53 * This rolls back the register state in @regs so it's as if the
54 * system call instruction was a no-op. The registers containing
55 * the system call number and arguments are as they were before the
56 * system call instruction. This may not be the same as what the
57 * register state looked like at system call entry tracing.
58 */
59void syscall_rollback(struct task_struct *task, struct pt_regs *regs);
60
61/**
62 * syscall_get_error - check result of traced system call
63 * @task: task of interest, must be blocked
64 * @regs: task_pt_regs() of @task
65 *
66 * Returns 0 if the system call succeeded, or -ERRORCODE if it failed.
67 *
68 * It's only valid to call this when @task is stopped for tracing on exit
69 * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
70 */
71long syscall_get_error(struct task_struct *task, struct pt_regs *regs);
72
73/**
74 * syscall_get_return_value - get the return value of a traced system call
75 * @task: task of interest, must be blocked
76 * @regs: task_pt_regs() of @task
77 *
78 * Returns the return value of the successful system call.
79 * This value is meaningless if syscall_get_error() returned nonzero.
80 *
81 * It's only valid to call this when @task is stopped for tracing on exit
82 * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
83 */
84long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs);
85
86/**
87 * syscall_set_return_value - change the return value of a traced system call
88 * @task: task of interest, must be blocked
89 * @regs: task_pt_regs() of @task
90 * @error: negative error code, or zero to indicate success
91 * @val: user return value if @error is zero
92 *
93 * This changes the results of the system call that user mode will see.
94 * If @error is zero, the user sees a successful system call with a
95 * return value of @val. If @error is nonzero, it's a negated errno
96 * code; the user sees a failed system call with this errno code.
97 *
98 * It's only valid to call this when @task is stopped for tracing on exit
99 * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
100 */
101void syscall_set_return_value(struct task_struct *task, struct pt_regs *regs,
102 int error, long val);
103
104/**
105 * syscall_get_arguments - extract system call parameter values
106 * @task: task of interest, must be blocked
107 * @regs: task_pt_regs() of @task
108 * @i: argument index [0,5]
109 * @n: number of arguments; n+i must be [1,6].
110 * @args: array filled with argument values
111 *
112 * Fetches @n arguments to the system call starting with the @i'th argument
113 * (from 0 through 5). Argument @i is stored in @args[0], and so on.
114 * An arch inline version is probably optimal when @i and @n are constants.
115 *
116 * It's only valid to call this when @task is stopped for tracing on
117 * entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
118 * It's invalid to call this with @i + @n > 6; we only support system calls
119 * taking up to 6 arguments.
120 */
121void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
122 unsigned int i, unsigned int n, unsigned long *args);
123
124/**
125 * syscall_set_arguments - change system call parameter value
126 * @task: task of interest, must be in system call entry tracing
127 * @regs: task_pt_regs() of @task
128 * @i: argument index [0,5]
129 * @n: number of arguments; n+i must be [1,6].
130 * @args: array of argument values to store
131 *
132 * Changes @n arguments to the system call starting with the @i'th argument.
133 * Argument @i gets value @args[0], and so on.
134 * An arch inline version is probably optimal when @i and @n are constants.
135 *
136 * It's only valid to call this when @task is stopped for tracing on
137 * entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
138 * It's invalid to call this with @i + @n > 6; we only support system calls
139 * taking up to 6 arguments.
140 */
141void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
142 unsigned int i, unsigned int n,
143 const unsigned long *args);
144
145/**
146 * syscall_get_arch - return the AUDIT_ARCH for the current system call
147 * @task: task of interest, must be in system call entry tracing
148 * @regs: task_pt_regs() of @task
149 *
150 * Returns the AUDIT_ARCH_* based on the system call convention in use.
151 *
152 * It's only valid to call this when @task is stopped on entry to a system
153 * call, due to %TIF_SYSCALL_TRACE, %TIF_SYSCALL_AUDIT, or %TIF_SECCOMP.
154 *
155 * Architectures which permit CONFIG_HAVE_ARCH_SECCOMP_FILTER must
156 * provide an implementation of this.
157 */
158int syscall_get_arch(struct task_struct *task, struct pt_regs *regs);
159#endif /* _ASM_SYSCALL_H */
160

Wed 20 Feb 2013 05:21:57
>>43739072
Almost a fade-out, then. I don't think we'll manage to create another big bang. But come to think of it, man will be able to travel to other worlds. That would definitely save him, though it implies that someone from the future of another world is present in this one too.
How do you picture such a distant future?
And no, man won't die because of the iPhone 10, that's just illogical. After all, the earth isn't populated only by whores and Dvach posters. Or Harkach posters.

Wed 20 Feb 2013 05:23:15
>>43739147
Merely reading these posts makes your eyes go red and your skin break out.

Wed 20 Feb 2013 05:23:24
LXR linux/include/asm-generic/xor.h

1/*
2 * include/asm-generic/xor.h
3 *
4 * Generic optimized RAID-5 checksumming functions.
5 *
6 * This program is free software; you can redistribute it and/or modify
7 * it under the terms of the GNU General Public License as published by
8 * the Free Software Foundation; either version 2, or (at your option)
9 * any later version.
10 *
11 * You should have received a copy of the GNU General Public License
12 * (for example /usr/src/linux/COPYING); if not, write to the Free
13 * Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
14 */
15
16#include <linux/prefetch.h>
17
18static void
19xor_8regs_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
20{
21 long lines = bytes / (sizeof (long)) / 8;
22
23 do {
24 p1[0] ^= p2[0];
25 p1[1] ^= p2[1];
26 p1[2] ^= p2[2];
27 p1[3] ^= p2[3];
28 p1[4] ^= p2[4];
29 p1[5] ^= p2[5];
30 p1[6] ^= p2[6];
31 p1[7] ^= p2[7];
32 p1 += 8;
33 p2 += 8;
34 } while (--lines > 0);
35}
36
37static void
38xor_8regs_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
39 unsigned long *p3)
40{
41 long lines = bytes / (sizeof (long)) / 8;
42
43 do {
44 p1[0] ^= p2[0] ^ p3[0];
45 p1[1] ^= p2[1] ^ p3[1];
46 p1[2] ^= p2[2] ^ p3[2];
47 p1[3] ^= p2[3] ^ p3[3];
48 p1[4] ^= p2[4] ^ p3[4];
49 p1[5] ^= p2[5] ^ p3[5];
50 p1[6] ^= p2[6] ^ p3[6];
51 p1[7] ^= p2[7] ^ p3[7];
52 p1 += 8;
53 p2 += 8;
54 p3 += 8;
55 } while (--lines > 0);
56}
57
58static void
59xor_8regs_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
60 unsigned long *p3, unsigned long *p4)
61{
62 long lines = bytes / (sizeof (long)) / 8;
63
64 do {
65 p1[0] ^= p2[0] ^ p3[0] ^ p4[0];
66 p1[1] ^= p2[1] ^ p3[1] ^ p4[1];
67 p1[2] ^= p2[2] ^ p3[2] ^ p4[2];
68 p1[3] ^= p2[3] ^ p3[3] ^ p4[3];
69 p1[4] ^= p2[4] ^ p3[4] ^ p4[4];
70 p1[5] ^= p2[5] ^ p3[5] ^ p4[5];
71 p1[6] ^= p2[6] ^ p3[6] ^ p4[6];
72 p1[7] ^= p2[7] ^ p3[7] ^ p4[7];
73 p1 += 8;
74 p2 += 8;
75 p3 += 8;
76 p4 += 8;
77 } while (--lines > 0);
78}
79
80static void
81xor_8regs_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
82 unsigned long *p3, unsigned long *p4, unsigned long *p5)
83{
84 long lines = bytes / (sizeof (long)) / 8;
85
86 do {
87 p1[0] ^= p2[0] ^ p3[0] ^ p4[0] ^ p5[0];
88 p1[1] ^= p2[1] ^ p3[1] ^ p4[1] ^ p5[1];
89 p1[2] ^= p2[2] ^ p3[2] ^ p4[2] ^ p5[2];
90 p1[3] ^= p2[3] ^ p3[3] ^ p4[3] ^ p5[3];
91 p1[4] ^= p2[4] ^ p3[4] ^ p4[4] ^ p5[4];
92 p1[5] ^= p2[5] ^ p3[5] ^ p4[5] ^ p5[5];
93 p1[6] ^= p2[6] ^ p3[6] ^ p4[6] ^ p5[6];
94 p1[7] ^= p2[7] ^ p3[7] ^ p4[7] ^ p5[7];
95 p1 += 8;
96 p2 += 8;
97 p3 += 8;
98 p4 += 8;
99 p5 += 8;
100 } while (--lines > 0);
101}
102
103static void
104xor_32regs_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
105{
106 long lines = bytes / (sizeof (long)) / 8;
107
108 do {
109 register long d0, d1, d2, d3, d4, d5, d6, d7;
110 d0 = p1[0]; /* Pull the stuff into registers */
111 d1 = p1[1]; /* ... in bursts, if possible. */
112 d2 = p1[2];
113 d3 = p1[3];
114 d4 = p1[4];
115 d5 = p1[5];
116 d6 = p1[6];
117 d7 = p1[7];
118 d0 ^= p2[0];
119 d1 ^= p2[1];
120 d2 ^= p2[2];
121 d3 ^= p2[3];
122 d4 ^= p2[4];
123 d5 ^= p2[5];
124 d6 ^= p2[6];
125 d7 ^= p2[7];
126 p1[0] = d0; /* Store the result (in bursts) */
127 p1[1] = d1;
128 p1[2] = d2;
129 p1[3] = d3;
130 p1[4] = d4;
131 p1[5] = d5;
132 p1[6] = d6;
133 p1[7] = d7;
134 p1 += 8;
135 p2 += 8;
136 } while (--lines > 0);
137}
138
139static void
140xor_32regs_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
141 unsigned long *p3)
142{
143 long lines = bytes / (sizeof (long)) / 8;
144
145 do {
146 register long d0, d1, d2, d3, d4, d5, d6, d7;
147 d0 = p1[0]; /* Pull the stuff into registers */
148 d1 = p1[1]; /* ... in bursts, if possible. */
149 d2 = p1[2];
150 d3 = p1[3];
151 d4 = p1[4];
152 d5 = p1[5];
153 d6 = p1[6];
154 d7 = p1[7];
155 d0 ^= p2[0];
156 d1 ^= p2[1];
157 d2 ^= p2[2];
158 d3 ^= p2[3];
159 d4 ^= p2[4];
160 d5 ^= p2[5];
161 d6 ^= p2[6];
162 d7 ^= p2[7];
163 d0 ^= p3[0];
164 d1 ^= p3[1];
165 d2 ^= p3[2];
166 d3 ^= p3[3];
167 d4 ^= p3[4];
168 d5 ^= p3[5];
169 d6 ^= p3[6];
170 d7 ^= p3[7];
171 p1[0] = d0; /* Store the result (in bursts) */
172 p1[1] = d1;
173 p1[2] = d2;
174 p1[3] = d3;
175 p1[4] = d4;
176 p1[5] = d5;
177 p1[6] = d6;
178 p1[7] = d7;
179 p1 += 8;
180 p2 += 8;
181 p3 += 8;
182 } while (--lines > 0);
183}
184
185static void
186xor_32regs_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
187 unsigned long *p3, unsigned long *p4)
188{
189 long lines = bytes / (sizeof (long)) / 8;
190
191 do {
192 register long d0, d1, d2, d3, d4, d5, d6, d7;
193 d0 = p1[0]; /* Pull the stuff into registers */
194 d1 = p1[1]; /* ... in bursts, if possible. */
195 d2 = p1[2];
196 d3 = p1[3];
197 d4 = p1[4];
198 d5 = p1[5];
199 d6 = p1[6];
200 d7 = p1[7];
201 d0 ^= p2[0];
202 d1 ^= p2[1];
203 d2 ^= p2[2];
204 d3 ^= p2[3];
205 d4 ^= p2[4];
206 d5 ^= p2[5];
207 d6 ^= p2[6];
208 d7 ^= p2[7];
209 d0 ^= p3[0];
210 d1 ^= p3[1];
211 d2 ^= p3[2];
212 d3 ^= p3[3];
213 d4 ^= p3[4];
214 d5 ^= p3[5];
215 d6 ^= p3[6];
216 d7 ^= p3[7];
217 d0 ^= p4[0];
218 d1 ^= p4[1];
219 d2 ^= p4[2];
220 d3 ^= p4[3];
221 d4 ^= p4[4];
222 d5 ^= p4[5];
223 d6 ^= p4[6];
224 d7 ^= p4[7];
225 p1[0] = d0; /* Store the result (in bursts) */
226 p1[1] = d1;
227 p1[2] = d2;
228 p1[3] = d3;
229 p1[4] = d4;
230 p1[5] = d5;
231 p1[6] = d6;
232 p1[7] = d7;
233 p1 += 8;
234 p2 += 8;
235 p3 += 8;
236 p4 += 8;
237 } while (--lines > 0);
238}
239

Срд 20 Фев 2013 05:24:56
>>43739158
единичные случаи прямого массирования простаты. страпоном такое просто невозможно сделать, у куна жопа раньше треснет.

я не отрицаю влияние массажа простаты на оргазм, но там все намного тоньше, чем просто потыкать страпоном в жопу под рандомным углом.

надо ли говорить, что анальный секс даже среди геев не столь распространен, а предпочитаются иные способы взаимоудовлетворения.

Wed 20 Feb 2013 05:24:58
240static void
241xor_32regs_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
242 unsigned long *p3, unsigned long *p4, unsigned long *p5)
243{
244 long lines = bytes / (sizeof (long)) / 8;
245
246 do {
247 register long d0, d1, d2, d3, d4, d5, d6, d7;
248 d0 = p1[0]; /* Pull the stuff into registers */
249 d1 = p1[1]; /* ... in bursts, if possible. */
250 d2 = p1[2];
251 d3 = p1[3];
252 d4 = p1[4];
253 d5 = p1[5];
254 d6 = p1[6];
255 d7 = p1[7];
256 d0 ^= p2[0];
257 d1 ^= p2[1];
258 d2 ^= p2[2];
259 d3 ^= p2[3];
260 d4 ^= p2[4];
261 d5 ^= p2[5];
262 d6 ^= p2[6];
263 d7 ^= p2[7];
264 d0 ^= p3[0];
265 d1 ^= p3[1];
266 d2 ^= p3[2];
267 d3 ^= p3[3];
268 d4 ^= p3[4];
269 d5 ^= p3[5];
270 d6 ^= p3[6];
271 d7 ^= p3[7];
272 d0 ^= p4[0];
273 d1 ^= p4[1];
274 d2 ^= p4[2];
275 d3 ^= p4[3];
276 d4 ^= p4[4];
277 d5 ^= p4[5];
278 d6 ^= p4[6];
279 d7 ^= p4[7];
280 d0 ^= p5[0];
281 d1 ^= p5[1];
282 d2 ^= p5[2];
283 d3 ^= p5[3];
284 d4 ^= p5[4];
285 d5 ^= p5[5];
286 d6 ^= p5[6];
287 d7 ^= p5[7];
288 p1[0] = d0; /* Store the result (in bursts) */
289 p1[1] = d1;
290 p1[2] = d2;
291 p1[3] = d3;
292 p1[4] = d4;
293 p1[5] = d5;
294 p1[6] = d6;
295 p1[7] = d7;
296 p1 += 8;
297 p2 += 8;
298 p3 += 8;
299 p4 += 8;
300 p5 += 8;
301 } while (--lines > 0);
302}
303
304static void
305xor_8regs_p_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
306{
307 long lines = bytes / (sizeof (long)) / 8 - 1;
308 prefetchw(p1);
309 prefetch(p2);
310
311 do {
312 prefetchw(p1+8);
313 prefetch(p2+8);
314 once_more:
315 p1[0] ^= p2[0];
316 p1[1] ^= p2[1];
317 p1[2] ^= p2[2];
318 p1[3] ^= p2[3];
319 p1[4] ^= p2[4];
320 p1[5] ^= p2[5];
321 p1[6] ^= p2[6];
322 p1[7] ^= p2[7];
323 p1 += 8;
324 p2 += 8;
325 } while (--lines > 0);
326 if (lines == 0)
327 goto once_more;
328}
329
330static void
331xor_8regs_p_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
332 unsigned long *p3)
333{
334 long lines = bytes / (sizeof (long)) / 8 - 1;
335 prefetchw(p1);
336 prefetch(p2);
337 prefetch(p3);
338
339 do {
340 prefetchw(p1+8);
341 prefetch(p2+8);
342 prefetch(p3+8);
343 once_more:
344 p1[0] ^= p2[0] ^ p3[0];
345 p1[1] ^= p2[1] ^ p3[1];
346 p1[2] ^= p2[2] ^ p3[2];
347 p1[3] ^= p2[3] ^ p3[3];
348 p1[4] ^= p2[4] ^ p3[4];
349 p1[5] ^= p2[5] ^ p3[5];
350 p1[6] ^= p2[6] ^ p3[6];
351 p1[7] ^= p2[7] ^ p3[7];
352 p1 += 8;
353 p2 += 8;
354 p3 += 8;
355 } while (--lines > 0);
356 if (lines == 0)
357 goto once_more;
358}
359
360static void
361xor_8regs_p_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
362 unsigned long *p3, unsigned long *p4)
363{
364 long lines = bytes / (sizeof (long)) / 8 - 1;
365
366 prefetchw(p1);
367 prefetch(p2);
368 prefetch(p3);
369 prefetch(p4);
370
371 do {
372 prefetchw(p1+8);
373 prefetch(p2+8);
374 prefetch(p3+8);
375 prefetch(p4+8);
376 once_more:
377 p1[0] ^= p2[0] ^ p3[0] ^ p4[0];
378 p1[1] ^= p2[1] ^ p3[1] ^ p4[1];
379 p1[2] ^= p2[2] ^ p3[2] ^ p4[2];
380 p1[3] ^= p2[3] ^ p3[3] ^ p4[3];
381 p1[4] ^= p2[4] ^ p3[4] ^ p4[4];
382 p1[5] ^= p2[5] ^ p3[5] ^ p4[5];
383 p1[6] ^= p2[6] ^ p3[6] ^ p4[6];
384 p1[7] ^= p2[7] ^ p3[7] ^ p4[7];
385 p1 += 8;
386 p2 += 8;
387 p3 += 8;
388 p4 += 8;
389 } while (--lines > 0);
390 if (lines == 0)
391 goto once_more;
392}
393
394static void
395xor_8regs_p_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
396 unsigned long *p3, unsigned long *p4, unsigned long *p5)
397{
398 long lines = bytes / (sizeof (long)) / 8 - 1;
399
400 prefetchw(p1);
401 prefetch(p2);
402 prefetch(p3);
403 prefetch(p4);
404 prefetch(p5);
405
406 do {
407 prefetchw(p1+8);
408 prefetch(p2+8);
409 prefetch(p3+8);
410 prefetch(p4+8);
411 prefetch(p5+8);
412 once_more:
413 p1[0] ^= p2[0] ^ p3[0] ^ p4[0] ^ p5[0];
414 p1[1] ^= p2[1] ^ p3[1] ^ p4[1] ^ p5[1];
415 p1[2] ^= p2[2] ^ p3[2] ^ p4[2] ^ p5[2];
416 p1[3] ^= p2[3] ^ p3[3] ^ p4[3] ^ p5[3];
417 p1[4] ^= p2[4] ^ p3[4] ^ p4[4] ^ p5[4];
418 p1[5] ^= p2[5] ^ p3[5] ^ p4[5] ^ p5[5];
419 p1[6] ^= p2[6] ^ p3[6] ^ p4[6] ^ p5[6];
420 p1[7] ^= p2[7] ^ p3[7] ^ p4[7] ^ p5[7];
421 p1 += 8;
422 p2 += 8;
423 p3 += 8;
424 p4 += 8;
425 p5 += 8;
426 } while (--lines > 0);
427 if (lines == 0)
428 goto once_more;
429}
430
431static void
432xor_32regs_p_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
433{
434 long lines = bytes / (sizeof (long)) / 8 - 1;
435
436 prefetchw(p1);
437 prefetch(p2);
438
439 do {
440 register long d0, d1, d2, d3, d4, d5, d6, d7;
441
442 prefetchw(p1+8);
443 prefetch(p2+8);
444 once_more:
445 d0 = p1[0]; /* Pull the stuff into registers */
446 d1 = p1[1]; /* ... in bursts, if possible. */
447 d2 = p1[2];
448 d3 = p1[3];
449 d4 = p1[4];
450 d5 = p1[5];
451 d6 = p1[6];
452 d7 = p1[7];
453 d0 ^= p2[0];
454 d1 ^= p2[1];
455 d2 ^= p2[2];
456 d3 ^= p2[3];
457 d4 ^= p2[4];
458 d5 ^= p2[5];
459 d6 ^= p2[6];
460 d7 ^= p2[7];
461 p1[0] = d0; /* Store the result (in bursts) */
462 p1[1] = d1;
463 p1[2] = d2;
464 p1[3] = d3;
465 p1[4] = d4;
466 p1[5] = d5;
467 p1[6] = d6;
468 p1[7] = d7;
469 p1 += 8;
470 p2 += 8;
471 } while (--lines > 0);
472 if (lines == 0)
473 goto once_more;
474}
475

Wed 20 Feb 2013 05:25:52
OP-chan, do you shlick to the "gnilozubka vs khorsky" video too?

Wed 20 Feb 2013 05:25:59
476static void
477xor_32regs_p_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
478 unsigned long *p3)
479{
480 long lines = bytes / (sizeof (long)) / 8 - 1;
481
482 prefetchw(p1);
483 prefetch(p2);
484 prefetch(p3);
485
486 do {
487 register long d0, d1, d2, d3, d4, d5, d6, d7;
488
489 prefetchw(p1+8);
490 prefetch(p2+8);
491 prefetch(p3+8);
492 once_more:
493 d0 = p1[0]; /* Pull the stuff into registers */
494 d1 = p1[1]; /* ... in bursts, if possible. */
495 d2 = p1[2];
496 d3 = p1[3];
497 d4 = p1[4];
498 d5 = p1[5];
499 d6 = p1[6];
500 d7 = p1[7];
501 d0 ^= p2[0];
502 d1 ^= p2[1];
503 d2 ^= p2[2];
504 d3 ^= p2[3];
505 d4 ^= p2[4];
506 d5 ^= p2[5];
507 d6 ^= p2[6];
508 d7 ^= p2[7];
509 d0 ^= p3[0];
510 d1 ^= p3[1];
511 d2 ^= p3[2];
512 d3 ^= p3[3];
513 d4 ^= p3[4];
514 d5 ^= p3[5];
515 d6 ^= p3[6];
516 d7 ^= p3[7];
517 p1[0] = d0; /* Store the result (in bursts) */
518 p1[1] = d1;
519 p1[2] = d2;
520 p1[3] = d3;
521 p1[4] = d4;
522 p1[5] = d5;
523 p1[6] = d6;
524 p1[7] = d7;
525 p1 += 8;
526 p2 += 8;
527 p3 += 8;
528 } while (--lines > 0);
529 if (lines == 0)
530 goto once_more;
531}
532
533static void
534xor_32regs_p_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
535 unsigned long *p3, unsigned long *p4)
536{
537 long lines = bytes / (sizeof (long)) / 8 - 1;
538
539 prefetchw(p1);
540 prefetch(p2);
541 prefetch(p3);
542 prefetch(p4);
543
544 do {
545 register long d0, d1, d2, d3, d4, d5, d6, d7;
546
547 prefetchw(p1+8);
548 prefetch(p2+8);
549 prefetch(p3+8);
550 prefetch(p4+8);
551 once_more:
552 d0 = p1[0]; /* Pull the stuff into registers */
553 d1 = p1[1]; /* ... in bursts, if possible. */
554 d2 = p1[2];
555 d3 = p1[3];
556 d4 = p1[4];
557 d5 = p1[5];
558 d6 = p1[6];
559 d7 = p1[7];
560 d0 ^= p2[0];
561 d1 ^= p2[1];
562 d2 ^= p2[2];
563 d3 ^= p2[3];
564 d4 ^= p2[4];
565 d5 ^= p2[5];
566 d6 ^= p2[6];
567 d7 ^= p2[7];
568 d0 ^= p3[0];
569 d1 ^= p3[1];
570 d2 ^= p3[2];
571 d3 ^= p3[3];
572 d4 ^= p3[4];
573 d5 ^= p3[5];
574 d6 ^= p3[6];
575 d7 ^= p3[7];
576 d0 ^= p4[0];
577 d1 ^= p4[1];
578 d2 ^= p4[2];
579 d3 ^= p4[3];
580 d4 ^= p4[4];
581 d5 ^= p4[5];
582 d6 ^= p4[6];
583 d7 ^= p4[7];
584 p1[0] = d0; /* Store the result (in bursts) */
585 p1[1] = d1;
586 p1[2] = d2;
587 p1[3] = d3;
588 p1[4] = d4;
589 p1[5] = d5;
590 p1[6] = d6;
591 p1[7] = d7;
592 p1 += 8;
593 p2 += 8;
594 p3 += 8;
595 p4 += 8;
596 } while (--lines > 0);
597 if (lines == 0)
598 goto once_more;
599}
600
601static void
602xor_32regs_p_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
603 unsigned long *p3, unsigned long *p4, unsigned long *p5)
604{
605 long lines = bytes / (sizeof (long)) / 8 - 1;
606
607 prefetchw(p1);
608 prefetch(p2);
609 prefetch(p3);
610 prefetch(p4);
611 prefetch(p5);
612
613 do {
614 register long d0, d1, d2, d3, d4, d5, d6, d7;
615
616 prefetchw(p1+8);
617 prefetch(p2+8);
618 prefetch(p3+8);
619 prefetch(p4+8);
620 prefetch(p5+8);
621 once_more:
622 d0 = p1[0]; /* Pull the stuff into registers */
623 d1 = p1[1]; /* ... in bursts, if possible. */
624 d2 = p1[2];
625 d3 = p1[3];
626 d4 = p1[4];
627 d5 = p1[5];
628 d6 = p1[6];
629 d7 = p1[7];
630 d0 ^= p2[0];
631 d1 ^= p2[1];
632 d2 ^= p2[2];
633 d3 ^= p2[3];
634 d4 ^= p2[4];
635 d5 ^= p2[5];
636 d6 ^= p2[6];
637 d7 ^= p2[7];
638 d0 ^= p3[0];
639 d1 ^= p3[1];
640 d2 ^= p3[2];
641 d3 ^= p3[3];
642 d4 ^= p3[4];
643 d5 ^= p3[5];
644 d6 ^= p3[6];
645 d7 ^= p3[7];
646 d0 ^= p4[0];
647 d1 ^= p4[1];
648 d2 ^= p4[2];
649 d3 ^= p4[3];
650 d4 ^= p4[4];
651 d5 ^= p4[5];
652 d6 ^= p4[6];
653 d7 ^= p4[7];
654 d0 ^= p5[0];
655 d1 ^= p5[1];
656 d2 ^= p5[2];
657 d3 ^= p5[3];
658 d4 ^= p5[4];
659 d5 ^= p5[5];
660 d6 ^= p5[6];
661 d7 ^= p5[7];
662 p1[0] = d0; /* Store the result (in bursts) */
663 p1[1] = d1;
664 p1[2] = d2;
665 p1[3] = d3;
666 p1[4] = d4;
667 p1[5] = d5;
668 p1[6] = d6;
669 p1[7] = d7;
670 p1 += 8;
671 p2 += 8;
672 p3 += 8;
673 p4 += 8;
674 p5 += 8;
675 } while (--lines > 0);
676 if (lines == 0)
677 goto once_more;
678}

Wed 20 Feb 2013 05:28:22
>>43739181
Humanity will never die out. It has whores.

Wed 20 Feb 2013 05:29:03
>>43739181
I'm not sure it will ever arrive for humanity. Either we need a total collapse to shake things up, or we'll finish ourselves off. More broadly, we need to figure out what time actually is. Once we understand it, we may be able to behave differently within it. But there's a problem again: the average dvacher cares more about his little iPhone than about, say, the survival of humans and the planet.

Wed 20 Feb 2013 05:29:16
>>43739214
>needless to say, anal sex is not that widespread even among gay men; other forms of mutual satisfaction are preferred.

Wed 20 Feb 2013 05:30:21
LXR linux/include/asm-generic/syscall.h

1/*
2 * Access to user system call parameters and results
3 *
4 * Copyright (C) 2008-2009 Red Hat, Inc. All rights reserved.
5 *
6 * This copyrighted material is made available to anyone wishing to use,
7 * modify, copy, or redistribute it subject to the terms and conditions
8 * of the GNU General Public License v.2.
9 *
10 * This file is a stub providing documentation for what functions
11 * asm-ARCH/syscall.h files need to define. Most arch definitions
12 * will be simple inlines.
13 *
14 * All of these functions expect to be called with no locks,
15 * and only when the caller is sure that the task of interest
16 * cannot return to user mode while we are looking at it.
17 */
18
19#ifndef _ASM_SYSCALL_H
20#define _ASM_SYSCALL_H 1
21
22struct task_struct;
23struct pt_regs;
24
25/**
26 * syscall_get_nr - find what system call a task is executing
27 * @task: task of interest, must be blocked
28 * @regs: task_pt_regs() of @task
29 *
30 * If @task is executing a system call or is at system call
31 * tracing about to attempt one, returns the system call number.
32 * If @task is not executing a system call, i.e. it's blocked
33 * inside the kernel for a fault or signal, returns -1.
34 *
35 * Note this returns int even on 64-bit machines. Only 32 bits of
36 * system call number can be meaningful. If the actual arch value
37 * is 64 bits, this truncates to 32 bits so 0xffffffff means -1.
38 *
39 * It's only valid to call this when @task is known to be blocked.
40 */
41int syscall_get_nr(struct task_struct *task, struct pt_regs *regs);
42
43/**
44 * syscall_rollback - roll back registers after an aborted system call
45 * @task: task of interest, must be in system call exit tracing
46 * @regs: task_pt_regs() of @task
47 *
48 * It's only valid to call this when @task is stopped for system
49 * call exit tracing (due to TIF_SYSCALL_TRACE or TIF_SYSCALL_AUDIT),
50 * after tracehook_report_syscall_entry() returned nonzero to prevent
51 * the system call from taking place.
52 *
53 * This rolls back the register state in @regs so it's as if the
54 * system call instruction was a no-op. The registers containing
55 * the system call number and arguments are as they were before the
56 * system call instruction. This may not be the same as what the
57 * register state looked like at system call entry tracing.
58 */
59void syscall_rollback(struct task_struct *task, struct pt_regs *regs);
60
61/**
62 * syscall_get_error - check result of traced system call
63 * @task: task of interest, must be blocked
64 * @regs: task_pt_regs() of @task
65 *
66 * Returns 0 if the system call succeeded, or -ERRORCODE if it failed.
67 *
68 * It's only valid to call this when @task is stopped for tracing on exit
69 * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
70 */
71long syscall_get_error(struct task_struct *task, struct pt_regs *regs);
72
73/**
74 * syscall_get_return_value - get the return value of a traced system call
75 * @task: task of interest, must be blocked
76 * @regs: task_pt_regs() of @task
77 *
78 * Returns the return value of the successful system call.
79 * This value is meaningless if syscall_get_error() returned nonzero.
80 *
81 * It's only valid to call this when @task is stopped for tracing on exit
82 * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
83 */
84long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs);
85
86/**
87 * syscall_set_return_value - change the return value of a traced system call
88 * @task: task of interest, must be blocked
89 * @regs: task_pt_regs() of @task
90 * @error: negative error code, or zero to indicate success
91 * @val: user return value if @error is zero
92 *
93 * This changes the results of the system call that user mode will see.
94 * If @error is zero, the user sees a successful system call with a
95 * return value of @val. If @error is nonzero, it's a negated errno
96 * code; the user sees a failed system call with this errno code.
97 *
98 * It's only valid to call this when @task is stopped for tracing on exit
99 * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
100 */
101void syscall_set_return_value(struct task_struct *task, struct pt_regs *regs,
102 int error, long val);
103
104/**
105 * syscall_get_arguments - extract system call parameter values
106 * @task: task of interest, must be blocked
107 * @regs: task_pt_regs() of @task
108 * @i: argument index [0,5]
109 * @n: number of arguments; n+i must be [1,6].
110 * @args: array filled with argument values
111 *
112 * Fetches @n arguments to the system call starting with the @i'th argument
113 * (from 0 through 5). Argument @i is stored in @args[0], and so on.
114 * An arch inline version is probably optimal when @i and @n are constants.
115 *
116 * It's only valid to call this when @task is stopped for tracing on
117 * entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
118 * It's invalid to call this with @i + @n > 6; we only support system calls
119 * taking up to 6 arguments.
120 */
121void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
122 unsigned int i, unsigned int n, unsigned long *args);
123
124/**
125 * syscall_set_arguments - change system call parameter value
126 * @task: task of interest, must be in system call entry tracing
127 * @regs: task_pt_regs() of @task
128 * @i: argument index [0,5]
129 * @n: number of arguments; n+i must be [1,6].
130 * @args: array of argument values to store
131 *
132 * Changes @n arguments to the system call starting with the @i'th argument.
133 * Argument @i gets value @args[0], and so on.
134 * An arch inline version is probably optimal when @i and @n are constants.
135 *
136 * It's only valid to call this when @task is stopped for tracing on
137 * entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
138 * It's invalid to call this with @i + @n > 6; we only support system calls
139 * taking up to 6 arguments.
140 */
141void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
142 unsigned int i, unsigned int n,
143 const unsigned long *args);
144
145/**
146 * syscall_get_arch - return the AUDIT_ARCH for the current system call
147 * @task: task of interest, must be in system call entry tracing
148 * @regs: task_pt_regs() of @task
149 *
150 * Returns the AUDIT_ARCH_* based on the system call convention in use.
151 *
152 * It's only valid to call this when @task is stopped on entry to a system
153 * call, due to %TIF_SYSCALL_TRACE, %TIF_SYSCALL_AUDIT, or %TIF_SECCOMP.
154 *
155 * Architectures which permit CONFIG_HAVE_ARCH_SECCOMP_FILTER must
156 * provide an implementation of this.
157 */
158int syscall_get_arch(struct task_struct *task, struct pt_regs *regs);
159#endif /* _ASM_SYSCALL_H */

Wed 20 Feb 2013 05:31:18
>>43739261
>there is a widespread misconception that all male homosexual couples practice anal sex. G. B. Deryagin, MD, notes that up to 25% of homosexual men never engage in anal intercourse, preferring other forms of mutual sexual satisfaction, above all oral-genital contact and mutual masturbation. I. S. Kon cites similar figures in his works.

>According to a 2011 study by researchers at Indiana University and George Mason University, fewer than 40% of gay respondents had anal sex during their most recent sexual encounter.

Wed 20 Feb 2013 05:31:18
>>43739214

listen, how about you clear off the chans? you have no idea what you're talking about.
the only thing worse than an armchair faggot is an armchair political pundit.

Wed 20 Feb 2013 05:32:14
LXR linux/include/math-emu/op-common.h

1/* Software floating-point emulation. Common operations.
2 Copyright (C) 1997,1998,1999 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Richard Henderson (rth@cygnus.com),
5 Jakub Jelinek (jj@ultra.linux.cz),
6 David S. Miller (davem@redhat.com) and
7 Peter Maydell (pmaydell@chiark.greenend.org.uk).
8
9 The GNU C Library is free software; you can redistribute it and/or
10 modify it under the terms of the GNU Library General Public License as
11 published by the Free Software Foundation; either version 2 of the
12 License, or (at your option) any later version.
13
14 The GNU C Library is distributed in the hope that it will be useful,
15 but WITHOUT ANY WARRANTY; without even the implied warranty of
16 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
17 Library General Public License for more details.
18
19 You should have received a copy of the GNU Library General Public
20 License along with the GNU C Library; see the file COPYING.LIB. If
21 not, write to the Free Software Foundation, Inc.,
22 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */
23
24#ifndef MATH_EMU_OP_COMMON_H
25#define MATH_EMU_OP_COMMON_H
26
27#define _FP_DECL(wc, X) \
28 _FP_I_TYPE X##_c=0, X##_s=0, X##_e=0; \
29 _FP_FRAC_DECL_##wc(X)
30
31/*
32 * Finish truly unpacking a native fp value by classifying the kind
33 * of fp value and normalizing both the exponent and the fraction.
34 */
35
36#define _FP_UNPACK_CANONICAL(fs, wc, X) \
37do { \
38 switch (X##_e) \
39 { \
40 default: \
41 _FP_FRAC_HIGH_RAW_##fs(X) = _FP_IMPLBIT_##fs; \
42 _FP_FRAC_SLL_##wc(X, _FP_WORKBITS); \
43 X##_e -= _FP_EXPBIAS_##fs; \
44 X##_c = FP_CLS_NORMAL; \
45 break; \
46 \
47 case 0: \
48 if (_FP_FRAC_ZEROP_##wc(X)) \
49 X##_c = FP_CLS_ZERO; \
50 else \
51 { \
52 /* a denormalized number */ \
53 _FP_I_TYPE _shift; \
54 _FP_FRAC_CLZ_##wc(_shift, X); \
55 _shift -= _FP_FRACXBITS_##fs; \
56 _FP_FRAC_SLL_##wc(X, (_shift+_FP_WORKBITS)); \
57 X##_e -= _FP_EXPBIAS_##fs - 1 + _shift; \
58 X##_c = FP_CLS_NORMAL; \
59 FP_SET_EXCEPTION(FP_EX_DENORM); \
60 if (FP_DENORM_ZERO) \
61 { \
62 FP_SET_EXCEPTION(FP_EX_INEXACT); \
63 X##_c = FP_CLS_ZERO; \
64 } \
65 } \
66 break; \
67 \
68 case _FP_EXPMAX_##fs: \
69 if (_FP_FRAC_ZEROP_##wc(X)) \
70 X##_c = FP_CLS_INF; \
71 else \
72 { \
73 X##_c = FP_CLS_NAN; \
74 /* Check for signaling NaN */ \
75 if (!(_FP_FRAC_HIGH_RAW_##fs(X) &amp; _FP_QNANBIT_##fs)) \
76 FP_SET_EXCEPTION(FP_EX_INVALID | FP_EX_INVALID_SNAN); \
77 } \
78 break; \
79 } \
80} while (0)
81
82/*
83 * Before packing the bits back into the native fp result, take care
84 * of such mundane things as rounding and overflow. Also, for some
85 * kinds of fp values, the original parts may not have been fully
86 * extracted -- but that is ok, we can regenerate them now.
87 */
88
89#define _FP_PACK_CANONICAL(fs, wc, X) \
90do { \
91 switch (X##_c) \
92 { \
93 case FP_CLS_NORMAL: \
94 X##_e += _FP_EXPBIAS_##fs; \
95 if (X##_e > 0) \
96 { \
97 _FP_ROUND(wc, X); \
98 if (_FP_FRAC_OVERP_##wc(fs, X)) \
99 { \
100 _FP_FRAC_CLEAR_OVERP_##wc(fs, X); \
101 X##_e++; \
102 } \
103 _FP_FRAC_SRL_##wc(X, _FP_WORKBITS); \
104 if (X##_e >= _FP_EXPMAX_##fs) \
105 { \
106 /* overflow */ \
107 switch (FP_ROUNDMODE) \
108 { \
109 case FP_RND_NEAREST: \
110 X##_c = FP_CLS_INF; \
111 break; \
112 case FP_RND_PINF: \
113 if (!X##_s) X##_c = FP_CLS_INF; \
114 break; \
115 case FP_RND_MINF: \
116 if (X##_s) X##_c = FP_CLS_INF; \
117 break; \
118 } \
119 if (X##_c == FP_CLS_INF) \
120 { \
121 /* Overflow to infinity */ \
122 X##_e = _FP_EXPMAX_##fs; \
123 _FP_FRAC_SET_##wc(X, _FP_ZEROFRAC_##wc); \
124 } \
125 else \
126 { \
127 /* Overflow to maximum normal */ \
128 X##_e = _FP_EXPMAX_##fs - 1; \
129 _FP_FRAC_SET_##wc(X, _FP_MAXFRAC_##wc); \
130 } \
131 FP_SET_EXCEPTION(FP_EX_OVERFLOW); \
132 FP_SET_EXCEPTION(FP_EX_INEXACT); \
133 } \
134 } \
135 else \
136 { \
137 /* we've got a denormalized number */ \
138 X##_e = -X##_e + 1; \
139 if (X##_e <= _FP_WFRACBITS_##fs) \
140 { \
141 _FP_FRAC_SRS_##wc(X, X##_e, _FP_WFRACBITS_##fs); \
142 if (_FP_FRAC_HIGH_##fs(X) \
143 &amp; (_FP_OVERFLOW_##fs >> 1)) \
144 { \
145 X##_e = 1; \
146 _FP_FRAC_SET_##wc(X, _FP_ZEROFRAC_##wc); \
147 } \
148 else \
149 { \
150 _FP_ROUND(wc, X); \
151 if (_FP_FRAC_HIGH_##fs(X) \
152 &amp; (_FP_OVERFLOW_##fs >> 1)) \
153 { \
154 X##_e = 1; \
155 _FP_FRAC_SET_##wc(X, _FP_ZEROFRAC_##wc); \
156 FP_SET_EXCEPTION(FP_EX_INEXACT); \
157 } \
158 else \
159 { \
160 X##_e = 0; \
161 _FP_FRAC_SRL_##wc(X, _FP_WORKBITS); \
162 } \
163 } \
164 if ((FP_CUR_EXCEPTIONS &amp; FP_EX_INEXACT) \
165 || (FP_TRAPPING_EXCEPTIONS &amp; FP_EX_UNDERFLOW)) \
166 FP_SET_EXCEPTION(FP_EX_UNDERFLOW); \
167 } \
168 else \
169 { \
170 /* underflow to zero */ \
171 X##_e = 0; \
172 if (!_FP_FRAC_ZEROP_##wc(X)) \
173 { \
174 _FP_FRAC_SET_##wc(X, _FP_MINFRAC_##wc); \
175 _FP_ROUND(wc, X); \
176 _FP_FRAC_LOW_##wc(X) >>= (_FP_WORKBITS); \
177 } \
178 FP_SET_EXCEPTION(FP_EX_UNDERFLOW); \
179 } \
180 } \
181 break; \
182 \
183 case FP_CLS_ZERO: \
184 X##_e = 0; \
185 _FP_FRAC_SET_##wc(X, _FP_ZEROFRAC_##wc); \
186 break; \
187 \
188 case FP_CLS_INF: \
189 X##_e = _FP_EXPMAX_##fs; \
190 _FP_FRAC_SET_##wc(X, _FP_ZEROFRAC_##wc); \
191 break; \
192 \
193 case FP_CLS_NAN: \
194 X##_e = _FP_EXPMAX_##fs; \
195 if (!_FP_KEEPNANFRACP) \
196 { \
197 _FP_FRAC_SET_##wc(X, _FP_NANFRAC_##fs); \
198 X##_s = _FP_NANSIGN_##fs; \
199 } \
200 else \
201 _FP_FRAC_HIGH_RAW_##fs(X) = _FP_QNANBIT_##fs; \
202 break; \
203 } \
204} while (0)
205
206/* This one accepts raw argument and not cooked, returns
207 * 1 if X is a signaling NaN.
208 */
209#define _FP_ISSIGNAN(fs, wc, X) \
210({ \
211 int __ret = 0; \
212 if (X##_e == _FP_EXPMAX_##fs) \
213 { \
214 if (!_FP_FRAC_ZEROP_##wc(X) \
215 &amp;&amp; !(_FP_FRAC_HIGH_RAW_##fs(X) &amp; _FP_QNANBIT_##fs)) \
216 __ret = 1; \
217 } \
218 __ret; \
219})
220
221
222
223
224
225/*
226 * Main addition routine. The input values should be cooked.
227 */
228
229#define _FP_ADD_INTERNAL(fs, wc, R, X, Y, OP) \
230do { \
231 switch (_FP_CLS_COMBINE(X##_c, Y##_c)) \
232 { \
233 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_NORMAL): \
234 { \
235 /* shift the smaller number so that its expon

Wed 20 Feb 2013 05:33:05
>>43739286
I completely forgot that this place is packed with humanities types who can't into basic anatomy.
strange that you don't believe in the holy spirit. I bet your orgasms, too, happen only out of great love rather than for physiological reasons.

Wed 20 Feb 2013 05:33:12
305 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_NAN): \
306 _FP_CHOOSENAN(fs, wc, R, X, Y, OP); \
307 break; \
308 \
309 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_ZERO): \
310 R##_e = X##_e; \
311 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_NORMAL): \
312 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_INF): \
313 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_ZERO): \
314 _FP_FRAC_COPY_##wc(R, X); \
315 R##_s = X##_s; \
316 R##_c = X##_c; \
317 break; \
318 \
319 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_NORMAL): \
320 R##_e = Y##_e; \
321 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_NAN): \
322 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_NAN): \
323 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_NAN): \
324 _FP_FRAC_COPY_##wc(R, Y); \
325 R##_s = Y##_s; \
326 R##_c = Y##_c; \
327 break; \
328 \
329 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_INF): \
330 if (X##_s != Y##_s) \
331 { \
332 /* +INF + -INF => NAN */ \
333 _FP_FRAC_SET_##wc(R, _FP_NANFRAC_##fs); \
334 R##_s = _FP_NANSIGN_##fs; \
335 R##_c = FP_CLS_NAN; \
336 FP_SET_EXCEPTION(FP_EX_INVALID | FP_EX_INVALID_ISI); \
337 break; \
338 } \
339 /* FALLTHRU */ \
340 \
341 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_NORMAL): \
342 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_ZERO): \
343 R##_s = X##_s; \
344 R##_c = FP_CLS_INF; \
345 break; \
346 \
347 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_INF): \
348 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_INF): \
349 R##_s = Y##_s; \
350 R##_c = FP_CLS_INF; \
351 break; \
352 \
353 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_ZERO): \
354 /* make sure the sign is correct */ \
355 if (FP_ROUNDMODE == FP_RND_MINF) \
356 R##_s = X##_s | Y##_s; \
357 else \
358 R##_s = X##_s &amp; Y##_s; \
359 R##_c = FP_CLS_ZERO; \
360 break; \
361 \
362 default: \
363 abort(); \
364 } \
365} while (0)
366
367#define _FP_ADD(fs, wc, R, X, Y) _FP_ADD_INTERNAL(fs, wc, R, X, Y, '+')
368#define _FP_SUB(fs, wc, R, X, Y) \
369 do { \
370 if (Y##_c != FP_CLS_NAN) Y##_s ^= 1; \
371 _FP_ADD_INTERNAL(fs, wc, R, X, Y, '-'); \
372 } while (0)
373
374
375/*
376 * Main negation routine. FIXME -- when we care about setting exception
377 * bits reliably, this will not do. We should examine all of the fp classes.
378 */
379
380#define _FP_NEG(fs, wc, R, X) \
381 do { \
382 _FP_FRAC_COPY_##wc(R, X); \
383 R##_c = X##_c; \
384 R##_e = X##_e; \
385 R##_s = 1 ^ X##_s; \
386 } while (0)
387
388
389/*
390 * Main multiplication routine. The input values should be cooked.
391 */
392
393#define _FP_MUL(fs, wc, R, X, Y) \
394do { \
395 R##_s = X##_s ^ Y##_s; \
396 switch (_FP_CLS_COMBINE(X##_c, Y##_c)) \
397 { \
398 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_NORMAL): \
399 R##_c = FP_CLS_NORMAL; \
400 R##_e = X##_e + Y##_e + 1; \
401 \
402 _FP_MUL_MEAT_##fs(R,X,Y); \
403 \
404 if (_FP_FRAC_OVERP_##wc(fs, R)) \
405 _FP_FRAC_SRS_##wc(R, 1, _FP_WFRACBITS_##fs); \
406 else \
407 R##_e--; \
408 break; \
409 \
410 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_NAN): \
411 _FP_CHOOSENAN(fs, wc, R, X, Y, '*'); \
412 break; \
413 \
414 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_NORMAL): \
415 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_INF): \
416 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_ZERO): \
417 R##_s = X##_s; \
418 \
419 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_INF): \
420 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_NORMAL): \
421 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_NORMAL): \
422 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_ZERO): \
423 _FP_FRAC_COPY_##wc(R, X); \
424 R##_c = X##_c; \
425 break; \
426 \
427 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_NAN): \
428 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_NAN): \
429 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_NAN): \
430 R##_s = Y##_s; \
431 \
432 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_INF): \
433 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_ZERO): \
434 _FP_FRAC_COPY_##wc(R, Y); \
435 R##_c = Y##_c; \
436 break; \
437 \
438 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_ZERO): \
439 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_INF): \
440 R##_s = _FP_NANSIGN_##fs; \
441 R##_c = FP_CLS_NAN; \
442 _FP_FRAC_SET_##wc(R, _FP_NANFRAC_##fs); \
443 FP_SET_EXCEPTION(FP_EX_INVALID | FP_EX_INVALID_IMZ);\
444 break; \
445 \
446 default: \
447 abort(); \
448 } \
449} while (0)
450
451
452/*
453 * Main division routine. The input values should be cooked.
454 */
455
456#define _FP_DIV(fs, wc, R, X, Y) \
457do { \
458 R##_s = X##_s ^ Y##_s; \
459 switch (_FP_CLS_COMBINE(X##_c, Y##_c)) \
460 { \
461 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_NORMAL): \
462 R##_c = FP_CLS_NORMAL; \
463 R##_e = X##_e - Y##_e; \
464 \
465 _FP_DIV_MEAT_##fs(R,X,Y); \
466 break; \
467 \
468 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_NAN): \
469 _FP_CHOOSENAN(fs, wc, R, X, Y, '/'); \
470 break; \
471 \
472 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_NORMAL): \
473 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_INF): \
474 case _FP_CLS_COMBINE(FP_CLS_NAN,FP_CLS_ZERO): \
475 R##_s = X##_s; \
476 _FP_FRAC_COPY_##wc(R, X); \
477 R##_c = X##_c; \
478 break; \
479 \
480 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_NAN): \
481 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_NAN): \
482 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_NAN): \
483 R##_s = Y##_s; \
484 _FP_FRAC_COPY_##wc(R, Y); \
485 R##_c = Y##_c; \
486 break; \
487 \
488 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_INF): \
489 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_INF): \
490 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_NORMAL): \
491 R##_c = FP_CLS_ZERO; \
492 break; \
493 \
494 case _FP_CLS_COMBINE(FP_CLS_NORMAL,FP_CLS_ZERO): \
495 FP_SET_EXCEPTION(FP_EX_DIVZERO); \
496 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_ZERO): \
497 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_NORMAL): \
498 R##_c = FP_CLS_INF; \
499 break; \
500 \
501 case _FP_CLS_COMBINE(FP_CLS_INF,FP_CLS_INF): \
502 R##_s = _FP_NANSIGN_##fs; \
503 R##_c = FP_CLS_NAN; \
504 _FP_FRAC_SET_##wc(R, _FP_NANFRAC_##fs); \
505 FP_SET_EXCEPTION(FP_EX_INVALID | FP_EX_INVALID_IDI);\
506 break; \
507 \
508 case _FP_CLS_COMBINE(FP_CLS_ZERO,FP_CLS_ZERO): \
509 R##_s = _FP_NANSIGN_##fs; \
510 R##_c = FP_CLS_NAN; \
511 _FP_FRAC_SET_##wc(R, _FP_NANFRAC_##fs); \
512 FP_SET_EXCEPTION(FP_EX_INVALID | FP_EX_INVALID_ZDZ);\
513 break; \
514 \
515 default: \
516 abort(); \
517 } \
518} while (0)
519
520

Wed 20 Feb 2013 05:34:22
521/*
522 * Main differential comparison routine. The inputs should be raw not
523 * cooked. The return is -1,0,1 for normal values, 2 otherwise.
524 */
525
526#define _FP_CMP(fs, wc, ret, X, Y, un) \
527 do { \
528 /* NANs are unordered */ \
529 if ((X##_e == _FP_EXPMAX_##fs &amp;&amp; !_FP_FRAC_ZEROP_##wc(X)) \
530 || (Y##_e == _FP_EXPMAX_##fs &amp;&amp; !_FP_FRAC_ZEROP_##wc(Y))) \
531 { \
532 ret = un; \
533 } \
534 else \
535 { \
536 int __is_zero_x; \
537 int __is_zero_y; \
538 \
539 __is_zero_x = (!X##_e &amp;&amp; _FP_FRAC_ZEROP_##wc(X)) ? 1 : 0; \
540 __is_zero_y = (!Y##_e &amp;&amp; _FP_FRAC_ZEROP_##wc(Y)) ? 1 : 0; \
541 \
542 if (__is_zero_x &amp;&amp; __is_zero_y) \
543 ret = 0; \
544 else if (__is_zero_x) \
545 ret = Y##_s ? 1 : -1; \
546 else if (__is_zero_y) \
547 ret = X##_s ? -1 : 1; \
548 else if (X##_s != Y##_s) \
549 ret = X##_s ? -1 : 1; \
550 else if (X##_e > Y##_e) \
551 ret = X##_s ? -1 : 1; \
552 else if (X##_e < Y##_e) \
553 ret = X##_s ? 1 : -1; \
554 else if (_FP_FRAC_GT_##wc(X, Y)) \
555 ret = X##_s ? -1 : 1; \
556 else if (_FP_FRAC_GT_##wc(Y, X)) \
557 ret = X##_s ? 1 : -1; \
558 else \
559 ret = 0; \
560 } \
561 } while (0)
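The ordering logic in _FP_CMP above (NaNs unordered, signed zeros equal, then sign, exponent, and fraction compared in turn) can be sketched as a stand-alone function over IEEE binary64 bit patterns. `soft_cmp` below is an illustrative stand-in written for this post, not code from the header; it exploits the fact that for finite values of equal sign, magnitude order of the bit patterns matches numeric order.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the _FP_CMP ordering for IEEE binary64: returns -1, 0, 1
 * for ordered values and 2 ("un") when either input is a NaN, mirroring
 * the unpacked sign/exponent/fraction comparisons in the macro. */
static int soft_cmp(double a, double b)
{
    uint64_t x, y;
    memcpy(&x, &a, sizeof x);
    memcpy(&y, &b, sizeof y);

    uint64_t ax = x & 0x7fffffffffffffffULL;   /* magnitude: drop sign */
    uint64_t ay = y & 0x7fffffffffffffffULL;

    /* NaN = maximal exponent with nonzero fraction -> unordered */
    if (ax > 0x7ff0000000000000ULL || ay > 0x7ff0000000000000ULL)
        return 2;
    if (ax == 0 && ay == 0)                    /* +0 compares equal to -0 */
        return 0;

    int sx = (int)(x >> 63), sy = (int)(y >> 63);
    if (sx != sy)
        return sx ? -1 : 1;                    /* differing signs */
    if (ax == ay)
        return 0;
    /* same sign: magnitude order flips for negatives, exactly like
     * the X##_s ? -1 : 1 branches in the macro */
    int mag = (ax > ay) ? 1 : -1;
    return sx ? -mag : mag;
}
```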
562
563
564/* Simplification for strict equality. */
565
566#define _FP_CMP_EQ(fs, wc, ret, X, Y) \
567 do { \
568 /* NANs are unordered */ \
569 if ((X##_e == _FP_EXPMAX_##fs &amp;&amp; !_FP_FRAC_ZEROP_##wc(X)) \
570 || (Y##_e == _FP_EXPMAX_##fs &amp;&amp; !_FP_FRAC_ZEROP_##wc(Y))) \
571 { \
572 ret = 1; \
573 } \
574 else \
575 { \
576 ret = !(X##_e == Y##_e \
577 &amp;&amp; _FP_FRAC_EQ_##wc(X, Y) \
578 &amp;&amp; (X##_s == Y##_s || !X##_e &amp;&amp; _FP_FRAC_ZEROP_##wc(X))); \
579 } \
580 } while (0)
581
582/*
583 * Main square root routine. The input value should be cooked.
584 */
585
586#define _FP_SQRT(fs, wc, R, X) \
587do { \
588 _FP_FRAC_DECL_##wc(T); _FP_FRAC_DECL_##wc(S); \
589 _FP_W_TYPE q; \
590 switch (X##_c) \
591 { \
592 case FP_CLS_NAN: \
593 _FP_FRAC_COPY_##wc(R, X); \
594 R##_s = X##_s; \
595 R##_c = FP_CLS_NAN; \
596 break; \
597 case FP_CLS_INF: \
598 if (X##_s) \
599 { \
600 R##_s = _FP_NANSIGN_##fs; \
601 R##_c = FP_CLS_NAN; /* NAN */ \
602 _FP_FRAC_SET_##wc(R, _FP_NANFRAC_##fs); \
603 FP_SET_EXCEPTION(FP_EX_INVALID); \
604 } \
605 else \
606 { \
607 R##_s = 0; \
608 R##_c = FP_CLS_INF; /* sqrt(+inf) = +inf */ \
609 } \
610 break; \
611 case FP_CLS_ZERO: \
612 R##_s = X##_s; \
613 R##_c = FP_CLS_ZERO; /* sqrt(+-0) = +-0 */ \
614 break; \
615 case FP_CLS_NORMAL: \
616 R##_s = 0; \
617 if (X##_s) \
618 { \
619 R##_c = FP_CLS_NAN; /* sNAN */ \
620 R##_s = _FP_NANSIGN_##fs; \
621 _FP_FRAC_SET_##wc(R, _FP_NANFRAC_##fs); \
622 FP_SET_EXCEPTION(FP_EX_INVALID); \
623 break; \
624 } \
625 R##_c = FP_CLS_NORMAL; \
626 if (X##_e &amp; 1) \
627 _FP_FRAC_SLL_##wc(X, 1); \
628 R##_e = X##_e >> 1; \
629 _FP_FRAC_SET_##wc(S, _FP_ZEROFRAC_##wc); \
630 _FP_FRAC_SET_##wc(R, _FP_ZEROFRAC_##wc); \
631 q = _FP_OVERFLOW_##fs >> 1; \
632 _FP_SQRT_MEAT_##wc(R, S, T, X, q); \
633 } \
634 } while (0)
635
636/*
637 * Convert from FP to integer
638 */
639
640/* RSIGNED can have following values:
641 * 0: the number is required to be 0..(2^rsize)-1, if not, NV is set plus
642 * the result is either 0 or (2^rsize)-1 depending on the sign in such case.
643 * 1: the number is required to be -(2^(rsize-1))..(2^(rsize-1))-1, if not, NV is
644 * set plus the result is either -(2^(rsize-1)) or (2^(rsize-1))-1 depending
645 * on the sign in such case.
646 * 2: the number is required to be -(2^(rsize-1))..(2^(rsize-1))-1, if not, NV is
647 * set plus the result is truncated to fit into destination.
648 * -1: the number is required to be -(2^(rsize-1))..(2^rsize)-1, if not, NV is
649 * set plus the result is either -(2^(rsize-1)) or (2^(rsize-1))-1 depending
650 * on the sign in such case.
651 */
652#define _FP_TO_INT(fs, wc, r, X, rsize, rsigned) \
653 do { \
654 switch (X##_c) \
655 { \
656 case FP_CLS_NORMAL: \
657 if (X##_e < 0) \
658 { \
659 FP_SET_EXCEPTION(FP_EX_INEXACT); \
660 case FP_CLS_ZERO: \
661 r = 0; \
662 } \
663 else if (X##_e >= rsize - (rsigned > 0 || X##_s) \
664 || (!rsigned &amp;&amp; X##_s)) \
665 { /* overflow */ \
666 case FP_CLS_NAN: \
667 case FP_CLS_INF: \
668 if (rsigned == 2) \
669 { \
670 if (X##_c != FP_CLS_NORMAL \
671 || X##_e >= rsize - 1 + _FP_WFRACBITS_##fs) \
672 r = 0; \
673 else \
674 { \
675 _FP_FRAC_SLL_##wc(X, (X##_e - _FP_WFRACBITS_##fs + 1)); \
676 _FP_FRAC_ASSEMBLE_##wc(r, X, rsize); \
677 } \
678 } \
679 else if (rsigned) \
680 { \
681 r = 1; \
682 r <<= rsize - 1; \
683 r -= 1 - X##_s; \
684 } \
685 else \
686 { \
687 r = 0; \
688 if (X##_s) \
689 r = ~r; \
690 } \
691 FP_SET_EXCEPTION(FP_EX_INVALID); \
692 } \
693 else \
694 { \
695 if (_FP_W_TYPE_SIZE*wc < rsize) \
696 { \
697 _FP_FRAC_ASSEMBLE_##wc(r, X, rsize); \
698 r <<= X##_e - _FP_WFRACBITS_##fs; \
699 } \
700 else \
701 { \
702 if (X##_e >= _FP_WFRACBITS_##fs) \
703 _FP_FRAC_SLL_##wc(X, (X##_e - _FP_WFRACBITS_##fs + 1)); \
704 else if (X##_e < _FP_WFRACBITS_##fs - 1) \
705 { \
706 _FP_FRAC_SRS_##wc(X, (_FP_WFRACBITS_##fs - X##_e - 2), \
707 _FP_WFRACBITS_##fs); \
708 if (_FP_FRAC_LOW_##wc(X) &amp; 1) \
709 FP_SET_EXCEPTION(FP_EX_INEXACT); \
710 _FP_FRAC_SRL_##wc(X, 1); \
711 } \
712 _FP_FRAC_ASSEMBLE_##wc(r, X, rsize); \
713 } \
714 if (rsigned &amp;&amp; X##_s) \
715 r = -r; \
716 } \
717 break; \
718 } \
719 } while (0)
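The four RSIGNED modes documented above boil down to range checks plus saturation. The rsigned == 1 behavior (signed destination, clamp to -(2^(rsize-1)) or (2^(rsize-1))-1 and raise NV/FP_EX_INVALID on overflow) can be modeled for a 32-bit destination as below; `to_int32_saturate` is a hypothetical helper written for illustration, not part of the kernel headers.

```c
#include <assert.h>
#include <stdint.h>

/* Model of _FP_TO_INT's rsigned == 1 mode for rsize == 32: in-range
 * values truncate toward zero; out-of-range values and NaN saturate
 * and set *invalid (the FP_EX_INVALID flag in the macro). */
static int32_t to_int32_saturate(double x, int *invalid)
{
    *invalid = 0;
    if (x != x) {                  /* NaN: invalid, saturate positive here */
        *invalid = 1;
        return INT32_MAX;
    }
    if (x >= 2147483648.0) {       /* >= 2^31: clamp to (2^31)-1 */
        *invalid = 1;
        return INT32_MAX;
    }
    if (x < -2147483648.0) {       /* < -(2^31): clamp to -(2^31) */
        *invalid = 1;
        return INT32_MIN;
    }
    return (int32_t)x;             /* truncation toward zero */
}
```

The macro additionally distinguishes the sign of a NaN when saturating; the sketch simply picks the positive limit.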

Wed 20 Feb 2013 05:35:07
721#define _FP_TO_INT_ROUND(fs, wc, r, X, rsize, rsigned) \
722 do { \
723 r = 0; \
724 switch (X##_c) \
725 { \
726 case FP_CLS_NORMAL: \
727 if (X##_e >= _FP_FRACBITS_##fs - 1) \
728 { \
729 if (X##_e < rsize - 1 + _FP_WFRACBITS_##fs) \
730 { \
731 if (X##_e >= _FP_WFRACBITS_##fs - 1) \
732 { \
733 _FP_FRAC_ASSEMBLE_##wc(r, X, rsize); \
734 r <<= X##_e - _FP_WFRACBITS_##fs + 1; \
735 } \
736 else \
737 { \
738 _FP_FRAC_SRL_##wc(X, _FP_WORKBITS - X##_e \
739 + _FP_FRACBITS_##fs - 1); \
740 _FP_FRAC_ASSEMBLE_##wc(r, X, rsize); \
741 } \
742 } \
743 } \
744 else \
745 { \
746 if (X##_e <= -_FP_WORKBITS - 1) \
747 _FP_FRAC_SET_##wc(X, _FP_MINFRAC_##wc); \
748 else \
749 _FP_FRAC_SRS_##wc(X, _FP_FRACBITS_##fs - 1 - X##_e, \
750 _FP_WFRACBITS_##fs); \
751 _FP_ROUND(wc, X); \
752 _FP_FRAC_SRL_##wc(X, _FP_WORKBITS); \
753 _FP_FRAC_ASSEMBLE_##wc(r, X, rsize); \
754 } \
755 if (rsigned &amp;&amp; X##_s) \
756 r = -r; \
757 if (X##_e >= rsize - (rsigned > 0 || X##_s) \
758 || (!rsigned &amp;&amp; X##_s)) \
759 { /* overflow */ \
760 case FP_CLS_NAN: \
761 case FP_CLS_INF: \
762 if (!rsigned) \
763 { \
764 r = 0; \
765 if (X##_s) \
766 r = ~r; \
767 } \
768 else if (rsigned != 2) \
769 { \
770 r = 1; \
771 r <<= rsize - 1; \
772 r -= 1 - X##_s; \
773 } \
774 FP_SET_EXCEPTION(FP_EX_INVALID); \
775 } \
776 break; \
777 case FP_CLS_ZERO: \
778 break; \
779 } \
780 } while (0)
781
782#define _FP_FROM_INT(fs, wc, X, r, rsize, rtype) \
783 do { \
784 if (r) \
785 { \
786 unsigned rtype ur_; \
787 X##_c = FP_CLS_NORMAL; \
788 \
789 if ((X##_s = (r < 0))) \
790 ur_ = (unsigned rtype) -r; \
791 else \
792 ur_ = (unsigned rtype) r; \
793 if (rsize <= _FP_W_TYPE_SIZE) \
794 __FP_CLZ(X##_e, ur_); \
795 else \
796 __FP_CLZ_2(X##_e, (_FP_W_TYPE)(ur_ >> _FP_W_TYPE_SIZE), \
797 (_FP_W_TYPE)ur_); \
798 if (rsize < _FP_W_TYPE_SIZE) \
799 X##_e -= (_FP_W_TYPE_SIZE - rsize); \
800 X##_e = rsize - X##_e - 1; \
801 \
802 if (_FP_FRACBITS_##fs < rsize &amp;&amp; _FP_WFRACBITS_##fs <= X##_e) \
803 __FP_FRAC_SRS_1(ur_, (X##_e - _FP_WFRACBITS_##fs + 1), rsize);\
804 _FP_FRAC_DISASSEMBLE_##wc(X, ur_, rsize); \
805 if ((_FP_WFRACBITS_##fs - X##_e - 1) > 0) \
806 _FP_FRAC_SLL_##wc(X, (_FP_WFRACBITS_##fs - X##_e - 1)); \
807 } \
808 else \
809 { \
810 X##_c = FP_CLS_ZERO, X##_s = 0; \
811 } \
812 } while (0)
813
814
815#define FP_CONV(dfs,sfs,dwc,swc,D,S) \
816 do { \
817 _FP_FRAC_CONV_##dwc##_##swc(dfs, sfs, D, S); \
818 D##_e = S##_e; \
819 D##_c = S##_c; \
820 D##_s = S##_s; \
821 } while (0)
822
823/*
824 * Helper primitives.
825 */
826
827/* Count leading zeros in a word. */
828
829#ifndef __FP_CLZ
830#if _FP_W_TYPE_SIZE < 64
831/* this is just to shut the compiler up about shifts > word length -- PMM 02/1998 */
832#define __FP_CLZ(r, x) \
833 do { \
834 _FP_W_TYPE _t = (x); \
835 r = _FP_W_TYPE_SIZE - 1; \
836 if (_t > 0xffff) r -= 16; \
837 if (_t > 0xffff) _t >>= 16; \
838 if (_t > 0xff) r -= 8; \
839 if (_t > 0xff) _t >>= 8; \
840 if (_t &amp; 0xf0) r -= 4; \
841 if (_t &amp; 0xf0) _t >>= 4; \
842 if (_t &amp; 0xc) r -= 2; \
843 if (_t &amp; 0xc) _t >>= 2; \
844 if (_t &amp; 0x2) r -= 1; \
845 } while (0)
846#else /* not _FP_W_TYPE_SIZE < 64 */
847#define __FP_CLZ(r, x) \
848 do { \
849 _FP_W_TYPE _t = (x); \
850 r = _FP_W_TYPE_SIZE - 1; \
851 if (_t > 0xffffffff) r -= 32; \
852 if (_t > 0xffffffff) _t >>= 32; \
853 if (_t > 0xffff) r -= 16; \
854 if (_t > 0xffff) _t >>= 16; \
855 if (_t > 0xff) r -= 8; \
856 if (_t > 0xff) _t >>= 8; \
857 if (_t &amp; 0xf0) r -= 4; \
858 if (_t &amp; 0xf0) _t >>= 4; \
859 if (_t &amp; 0xc) r -= 2; \
860 if (_t &amp; 0xc) _t >>= 2; \
861 if (_t &amp; 0x2) r -= 1; \
862 } while (0)
863#endif /* not _FP_W_TYPE_SIZE < 64 */
864#endif /* ndef __FP_CLZ */
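The __FP_CLZ fallback above counts leading zeros by testing successively narrower halves of the word. Extracted into a plain function for the 32-bit case (an illustrative rewrite, not the macro itself), with the repeated-condition pairs folded into single branches:

```c
#include <assert.h>
#include <stdint.h>

/* Branch-halving count-leading-zeros, equivalent to the 32-bit
 * __FP_CLZ above for nonzero inputs: start from 31 and subtract
 * the width of each half that turns out to be occupied. */
static int clz32(uint32_t t)
{
    int r = 31;
    if (t > 0xffff) { r -= 16; t >>= 16; }
    if (t > 0xff)   { r -= 8;  t >>= 8;  }
    if (t & 0xf0)   { r -= 4;  t >>= 4;  }
    if (t & 0xc)    { r -= 2;  t >>= 2;  }
    if (t & 0x2)    { r -= 1; }
    return r;
}
```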
865
866#define _FP_DIV_HELP_imm(q, r, n, d) \
867 do { \
868 q = n / d, r = n % d; \
869 } while (0)
870
871#endif /* MATH_EMU_OP_COMMON_H */
872

Wed 20 Feb 2013 05:36:24
>>43739285
> during the last sexual act.
Last time my girlfriend gave me a blowjob, which of course conclusively proves that we never have vaginal sex at all.

Wed 20 Feb 2013 05:36:55
1/* Software floating-point emulation.
2 Definitions for IEEE Quad Precision.
3 Copyright (C) 1997,1998,1999 Free Software Foundation, Inc.
4 This file is part of the GNU C Library.
5 Contributed by Richard Henderson (rth@cygnus.com),
6 Jakub Jelinek (jj@ultra.linux.cz),
7 David S. Miller (davem@redhat.com) and
8 Peter Maydell (pmaydell@chiark.greenend.org.uk).
9
10 The GNU C Library is free software; you can redistribute it and/or
11 modify it under the terms of the GNU Library General Public License as
12 published by the Free Software Foundation; either version 2 of the
13 License, or (at your option) any later version.
14
15 The GNU C Library is distributed in the hope that it will be useful,
16 but WITHOUT ANY WARRANTY; without even the implied warranty of
17 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
18 Library General Public License for more details.
19
20 You should have received a copy of the GNU Library General Public
21 License along with the GNU C Library; see the file COPYING.LIB. If
22 not, write to the Free Software Foundation, Inc.,
23 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */
24
25#ifndef MATH_EMU_QUAD_H
26#define MATH_EMU_QUAD_H
27
28#if _FP_W_TYPE_SIZE < 32
29#error "Here's a nickel, kid. Go buy yourself a real computer."
30#endif
31
32#if _FP_W_TYPE_SIZE < 64
33#define _FP_FRACTBITS_Q (4*_FP_W_TYPE_SIZE)
34#else
35#define _FP_FRACTBITS_Q (2*_FP_W_TYPE_SIZE)
36#endif
37
38#define _FP_FRACBITS_Q 113
39#define _FP_FRACXBITS_Q (_FP_FRACTBITS_Q - _FP_FRACBITS_Q)
40#define _FP_WFRACBITS_Q (_FP_WORKBITS + _FP_FRACBITS_Q)
41#define _FP_WFRACXBITS_Q (_FP_FRACTBITS_Q - _FP_WFRACBITS_Q)
42#define _FP_EXPBITS_Q 15
43#define _FP_EXPBIAS_Q 16383
44#define _FP_EXPMAX_Q 32767
45
46#define _FP_QNANBIT_Q \
47 ((_FP_W_TYPE)1 << (_FP_FRACBITS_Q-2) % _FP_W_TYPE_SIZE)
48#define _FP_IMPLBIT_Q \
49 ((_FP_W_TYPE)1 << (_FP_FRACBITS_Q-1) % _FP_W_TYPE_SIZE)
50#define _FP_OVERFLOW_Q \
51 ((_FP_W_TYPE)1 << (_FP_WFRACBITS_Q % _FP_W_TYPE_SIZE))
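The _FP_*_Q constants above describe IEEE binary128: 1 sign bit, 15 exponent bits biased by 16383, and 113 significand bits (112 stored plus the implicit bit). Sign and exponent live entirely in the top 16 bits, so they can be decoded from the high 64-bit word alone; the helpers below are an illustration written for this post, not part of the header.

```c
#include <assert.h>
#include <stdint.h>

/* Decode sign and exponent of a binary128 value from its high
 * 64-bit word, using the bias/max constants defined above. */
#define Q_EXPBIAS 16383
#define Q_EXPMAX  32767

static int q_sign(uint64_t hi)         { return (int)(hi >> 63); }
static int q_biased_exp(uint64_t hi)   { return (int)((hi >> 48) & Q_EXPMAX); }
static int q_unbiased_exp(uint64_t hi) { return q_biased_exp(hi) - Q_EXPBIAS; }
```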
52
53#if _FP_W_TYPE_SIZE < 64
54
55union _FP_UNION_Q
56{
57 long double flt;
58 struct
59 {
60#if __BYTE_ORDER == __BIG_ENDIAN
61 unsigned sign : 1;
62 unsigned exp : _FP_EXPBITS_Q;
63 unsigned long frac3 : _FP_FRACBITS_Q - (_FP_IMPLBIT_Q != 0)-(_FP_W_TYPE_SIZE * 3);
64 unsigned long frac2 : _FP_W_TYPE_SIZE;
65 unsigned long frac1 : _FP_W_TYPE_SIZE;
66 unsigned long frac0 : _FP_W_TYPE_SIZE;
67#else
68 unsigned long frac0 : _FP_W_TYPE_SIZE;
69 unsigned long frac1 : _FP_W_TYPE_SIZE;
70 unsigned long frac2 : _FP_W_TYPE_SIZE;
71 unsigned long frac3 : _FP_FRACBITS_Q - (_FP_IMPLBIT_Q != 0)-(_FP_W_TYPE_SIZE * 3);
72 unsigned exp : _FP_EXPBITS_Q;
73 unsigned sign : 1;
74#endif /* not bigendian */
75 } bits __attribute__((packed));
76};
77
78
79#define FP_DECL_Q(X) _FP_DECL(4,X)
80#define FP_UNPACK_RAW_Q(X,val) _FP_UNPACK_RAW_4(Q,X,val)
81#define FP_UNPACK_RAW_QP(X,val) _FP_UNPACK_RAW_4_P(Q,X,val)
82#define FP_PACK_RAW_Q(val,X) _FP_PACK_RAW_4(Q,val,X)
83#define FP_PACK_RAW_QP(val,X) \
84 do { \
85 if (!FP_INHIBIT_RESULTS) \
86 _FP_PACK_RAW_4_P(Q,val,X); \
87 } while (0)
88
89#define FP_UNPACK_Q(X,val) \
90 do { \
91 _FP_UNPACK_RAW_4(Q,X,val); \
92 _FP_UNPACK_CANONICAL(Q,4,X); \
93 } while (0)
94
95#define FP_UNPACK_QP(X,val) \
96 do { \
97 _FP_UNPACK_RAW_4_P(Q,X,val); \
98 _FP_UNPACK_CANONICAL(Q,4,X); \
99 } while (0)
100
101#define FP_PACK_Q(val,X) \
102 do { \
103 _FP_PACK_CANONICAL(Q,4,X); \
104 _FP_PACK_RAW_4(Q,val,X); \
105 } while (0)
106
107#define FP_PACK_QP(val,X) \
108 do { \
109 _FP_PACK_CANONICAL(Q,4,X); \
110 if (!FP_INHIBIT_RESULTS) \
111 _FP_PACK_RAW_4_P(Q,val,X); \
112 } while (0)
113
114#define FP_ISSIGNAN_Q(X) _FP_ISSIGNAN(Q,4,X)
115#define FP_NEG_Q(R,X) _FP_NEG(Q,4,R,X)
116#define FP_ADD_Q(R,X,Y) _FP_ADD(Q,4,R,X,Y)
117#define FP_SUB_Q(R,X,Y) _FP_SUB(Q,4,R,X,Y)
118#define FP_MUL_Q(R,X,Y) _FP_MUL(Q,4,R,X,Y)
119#define FP_DIV_Q(R,X,Y) _FP_DIV(Q,4,R,X,Y)
120#define FP_SQRT_Q(R,X) _FP_SQRT(Q,4,R,X)
121#define _FP_SQRT_MEAT_Q(R,S,T,X,Q) _FP_SQRT_MEAT_4(R,S,T,X,Q)
122
123#define FP_CMP_Q(r,X,Y,un) _FP_CMP(Q,4,r,X,Y,un)
124#define FP_CMP_EQ_Q(r,X,Y) _FP_CMP_EQ(Q,4,r,X,Y)
125
126#define FP_TO_INT_Q(r,X,rsz,rsg) _FP_TO_INT(Q,4,r,X,rsz,rsg)
127#define FP_TO_INT_ROUND_Q(r,X,rsz,rsg) _FP_TO_INT_ROUND(Q,4,r,X,rsz,rsg)
128#define FP_FROM_INT_Q(X,r,rs,rt) _FP_FROM_INT(Q,4,X,r,rs,rt)
129
130#define _FP_FRAC_HIGH_Q(X) _FP_FRAC_HIGH_4(X)
131#define _FP_FRAC_HIGH_RAW_Q(X) _FP_FRAC_HIGH_4(X)
132
133#else /* not _FP_W_TYPE_SIZE < 64 */
134union _FP_UNION_Q
135{
136 long double flt /* __attribute__((mode(TF))) */ ;
137 struct {
138#if __BYTE_ORDER == __BIG_ENDIAN
139 unsigned sign : 1;
140 unsigned exp : _FP_EXPBITS_Q;
141 unsigned long frac1 : _FP_FRACBITS_Q-(_FP_IMPLBIT_Q != 0)-_FP_W_TYPE_SIZE;
142 unsigned long frac0 : _FP_W_TYPE_SIZE;
143#else
144 unsigned long frac0 : _FP_W_TYPE_SIZE;
145 unsigned long frac1 : _FP_FRACBITS_Q-(_FP_IMPLBIT_Q != 0)-_FP_W_TYPE_SIZE;
146 unsigned exp : _FP_EXPBITS_Q;
147 unsigned sign : 1;
148#endif
149 } bits;
150};
151
152#define FP_DECL_Q(X) _FP_DECL(2,X)
153#define FP_UNPACK_RAW_Q(X,val) _FP_UNPACK_RAW_2(Q,X,val)
154#define FP_UNPACK_RAW_QP(X,val) _FP_UNPACK_RAW_2_P(Q,X,val)
155#define FP_PACK_RAW_Q(val,X) _FP_PACK_RAW_2(Q,val,X)
156#define FP_PACK_RAW_QP(val,X) \
157 do { \
158 if (!FP_INHIBIT_RESULTS) \
159 _FP_PACK_RAW_2_P(Q,val,X); \
160 } while (0)
161
162#define FP_UNPACK_Q(X,val) \
163 do { \
164 _FP_UNPACK_RAW_2(Q,X,val); \
165 _FP_UNPACK_CANONICAL(Q,2,X); \
166 } while (0)
167
168#define FP_UNPACK_QP(X,val) \
169 do { \
170 _FP_UNPACK_RAW_2_P(Q,X,val); \
171 _FP_UNPACK_CANONICAL(Q,2,X); \
172 } while (0)
173
174#define FP_PACK_Q(val,X) \
175 do { \
176 _FP_PACK_CANONICAL(Q,2,X); \
177 _FP_PACK_RAW_2(Q,val,X); \
178 } while (0)
179
180#define FP_PACK_QP(val,X) \
181 do { \
182 _FP_PACK_CANONICAL(Q,2,X); \
183 if (!FP_INHIBIT_RESULTS) \
184 _FP_PACK_RAW_2_P(Q,val,X); \
185 } while (0)
186
187#define FP_ISSIGNAN_Q(X) _FP_ISSIGNAN(Q,2,X)
188#define FP_NEG_Q(R,X) _FP_NEG(Q,2,R,X)
189#define FP_ADD_Q(R,X,Y) _FP_ADD(Q,2,R,X,Y)
190#define FP_SUB_Q(R,X,Y) _FP_SUB(Q,2,R,X,Y)
191#define FP_MUL_Q(R,X,Y) _FP_MUL(Q,2,R,X,Y)
192#define FP_DIV_Q(R,X,Y) _FP_DIV(Q,2,R,X,Y)
193#define FP_SQRT_Q(R,X) _FP_SQRT(Q,2,R,X)
194#define _FP_SQRT_MEAT_Q(R,S,T,X,Q) _FP_SQRT_MEAT_2(R,S,T,X,Q)
195
196#define FP_CMP_Q(r,X,Y,un) _FP_CMP(Q,2,r,X,Y,un)
197#define FP_CMP_EQ_Q(r,X,Y) _FP_CMP_EQ(Q,2,r,X,Y)
198
199#define FP_TO_INT_Q(r,X,rsz,rsg) _FP_TO_INT(Q,2,r,X,rsz,rsg)
200#define FP_TO_INT_ROUND_Q(r,X,rsz,rsg) _FP_TO_INT_ROUND(Q,2,r,X,rsz,rsg)
201#define FP_FROM_INT_Q(X,r,rs,rt) _FP_FROM_INT(Q,2,X,r,rs,rt)
202
203#define _FP_FRAC_HIGH_Q(X) _FP_FRAC_HIGH_2(X)
204#define _FP_FRAC_HIGH_RAW_Q(X) _FP_FRAC_HIGH_2(X)
205
206#endif /* not _FP_W_TYPE_SIZE < 64 */
207
208#endif /* MATH_EMU_QUAD_H */
209

Wed 20 Feb 2013 05:38:07
>>43739311
>can't even into basic anatomy.
So how long ago did you graduate from med school, Dr. Khata?

Wed 20 Feb 2013 05:38:11
1/* Software floating-point emulation.
2 Copyright (C) 1997,1998,1999 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Richard Henderson (rth@cygnus.com),
5 Jakub Jelinek (jj@ultra.linux.cz),
6 David S. Miller (davem@redhat.com) and
7 Peter Maydell (pmaydell@chiark.greenend.org.uk).
8
9 The GNU C Library is free software; you can redistribute it and/or
10 modify it under the terms of the GNU Library General Public License as
11 published by the Free Software Foundation; either version 2 of the
12 License, or (at your option) any later version.
13
14 The GNU C Library is distributed in the hope that it will be useful,
15 but WITHOUT ANY WARRANTY; without even the implied warranty of
16 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
17 Library General Public License for more details.
18
19 You should have received a copy of the GNU Library General Public
20 License along with the GNU C Library; see the file COPYING.LIB. If
21 not, write to the Free Software Foundation, Inc.,
22 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */
23
24#ifndef MATH_EMU_SOFT_FP_H
25#define MATH_EMU_SOFT_FP_H
26
27#include <asm/sfp-machine.h>

>>43739158
Honestly, I don't understand how it can feel good when a guy sucks you off. It's just fucking disgusting. I noticed long ago that I'm aroused more by a girl's looks than by the process itself when we're having sex.

Wed 20 Feb 2013 05:42:14
LXR linux/include/crypto/algapi.h

1/*
2 * Cryptographic API for algorithms (i.e., low-level API).
3 *
4 * Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
5 *
6 * This program is free software; you can redistribute it and/or modify it
7 * under the terms of the GNU General Public License as published by the Free
8 * Software Foundation; either version 2 of the License, or (at your option)
9 * any later version.
10 *
11 */
12#ifndef _CRYPTO_ALGAPI_H
13#define _CRYPTO_ALGAPI_H
14
15#include <linux/crypto.h>
16#include <linux/list.h>
17#include <linux/kernel.h>
18#include <linux/skbuff.h>
19
20struct module;
21struct rtattr;
22struct seq_file;
23
24struct crypto_type {
25 unsigned int (*ctxsize)(struct crypto_alg *alg, u32 type, u32 mask);
26 unsigned int (*extsize)(struct crypto_alg *alg);
27 int (*init)(struct crypto_tfm *tfm, u32 type, u32 mask);
28 int (*init_tfm)(struct crypto_tfm *tfm);
29 void (*show)(struct seq_file *m, struct crypto_alg *alg);
30 int (*report)(struct sk_buff *skb, struct crypto_alg *alg);
31 struct crypto_alg *(*lookup)(const char *name, u32 type, u32 mask);
32
33 unsigned int type;
34 unsigned int maskclear;
35 unsigned int maskset;
36 unsigned int tfmsize;
37};
38
39struct crypto_instance {
40 struct crypto_alg alg;
41
42 struct crypto_template *tmpl;
43 struct hlist_node list;
44
45 void *__ctx[] CRYPTO_MINALIGN_ATTR;
46};
47
48struct crypto_template {
49 struct list_head list;
50 struct hlist_head instances;
51 struct module *module;
52
53 struct crypto_instance *(*alloc)(struct rtattr **tb);
54 void (*free)(struct crypto_instance *inst);
55 int (*create)(struct crypto_template *tmpl, struct rtattr **tb);
56
57 char name[CRYPTO_MAX_ALG_NAME];
58};
59
60struct crypto_spawn {
61 struct list_head list;
62 struct crypto_alg *alg;
63 struct crypto_instance *inst;
64 const struct crypto_type *frontend;
65 u32 mask;
66};
67
68struct crypto_queue {
69 struct list_head list;
70 struct list_head *backlog;
71
72 unsigned int qlen;
73 unsigned int max_qlen;
74};
75
76struct scatter_walk {
77 struct scatterlist *sg;
78 unsigned int offset;
79};
80
81struct blkcipher_walk {
82 union {
83 struct {
84 struct page *page;
85 unsigned long offset;
86 } phys;
87
88 struct {
89 u8 *page;
90 u8 *addr;
91 } virt;
92 } src, dst;
93
94 struct scatter_walk in;
95 unsigned int nbytes;
96
97 struct scatter_walk out;
98 unsigned int total;
99
100 void *page;
101 u8 *buffer;
102 u8 *iv;
103
104 int flags;
105 unsigned int blocksize;
106};
107
108struct ablkcipher_walk {
109 struct {
110 struct page *page;
111 unsigned int offset;
112 } src, dst;
113
114 struct scatter_walk in;
115 unsigned int nbytes;
116 struct scatter_walk out;
117 unsigned int total;
118 struct list_head buffers;
119 u8 *iv_buffer;
120 u8 *iv;
121 int flags;
122 unsigned int blocksize;
123};
124
125extern const struct crypto_type crypto_ablkcipher_type;
126extern const struct crypto_type crypto_aead_type;
127extern const struct crypto_type crypto_blkcipher_type;
128
129void crypto_mod_put(struct crypto_alg *alg);
130
131int crypto_register_template(struct crypto_template *tmpl);
132void crypto_unregister_template(struct crypto_template *tmpl);
133struct crypto_template *crypto_lookup_template(const char *name);
134
135int crypto_register_instance(struct crypto_template *tmpl,
136 struct crypto_instance *inst);
137int crypto_unregister_instance(struct crypto_alg *alg);
138
139int crypto_init_spawn(struct crypto_spawn *spawn, struct crypto_alg *alg,
140 struct crypto_instance *inst, u32 mask);
141int crypto_init_spawn2(struct crypto_spawn *spawn, struct crypto_alg *alg,
142 struct crypto_instance *inst,
143 const struct crypto_type *frontend);
144
145void crypto_drop_spawn(struct crypto_spawn *spawn);
146struct crypto_tfm *crypto_spawn_tfm(struct crypto_spawn *spawn, u32 type,
147 u32 mask);
148void *crypto_spawn_tfm2(struct crypto_spawn *spawn);
149
150static inline void crypto_set_spawn(struct crypto_spawn *spawn,
151 struct crypto_instance *inst)
152{
153 spawn->inst = inst;
154}
155
156struct crypto_attr_type *crypto_get_attr_type(struct rtattr **tb);
157int crypto_check_attr_type(struct rtattr **tb, u32 type);
158const char *crypto_attr_alg_name(struct rtattr *rta);
159struct crypto_alg *crypto_attr_alg2(struct rtattr *rta,
160 const struct crypto_type *frontend,
161 u32 type, u32 mask);
162
163static inline struct crypto_alg *crypto_attr_alg(struct rtattr *rta,
164 u32 type, u32 mask)
165{
166 return crypto_attr_alg2(rta, NULL, type, mask);
167}
168
169int crypto_attr_u32(struct rtattr *rta, u32 *num);
170void *crypto_alloc_instance2(const char *name, struct crypto_alg *alg,
171 unsigned int head);
172struct crypto_instance *crypto_alloc_instance(const char *name,
173 struct crypto_alg *alg);
174
175void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen);
176int crypto_enqueue_request(struct crypto_queue *queue,
177 struct crypto_async_request *request);
178void *__crypto_dequeue_request(struct crypto_queue *queue, unsigned int offset);
179struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue);
180int crypto_tfm_in_queue(struct crypto_queue *queue, struct crypto_tfm *tfm);
181
182/* These functions require the input/output to be aligned as u32. */
183void crypto_inc(u8 *a, unsigned int size);
184void crypto_xor(u8 *dst, const u8 *src, unsigned int size);
185
186int blkcipher_walk_done(struct blkcipher_desc *desc,
187 struct blkcipher_walk *walk, int err);
188int blkcipher_walk_virt(struct blkcipher_desc *desc,
189 struct blkcipher_walk *walk);
190int blkcipher_walk_phys(struct blkcipher_desc *desc,
191 struct blkcipher_walk *walk);
192int blkcipher_walk_virt_block(struct blkcipher_desc *desc,
193 struct blkcipher_walk *walk,
194 unsigned int blocksize);
195
196int ablkcipher_walk_done(struct ablkcipher_request *req,
197 struct ablkcipher_walk *walk, int err);
198int ablkcipher_walk_phys(struct ablkcipher_request *req,
199 struct ablkcipher_walk *walk);
200void __ablkcipher_walk_complete(struct ablkcipher_walk *walk);
201
202static inline void *crypto_tfm_ctx_aligned(struct crypto_tfm *tfm)
203{
204 return PTR_ALIGN(crypto_tfm_ctx(tfm),
205 crypto_tfm_alg_alignmask(tfm) + 1);
206}
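crypto_tfm_ctx_aligned() above uses PTR_ALIGN to round the context pointer up to a multiple of (alignmask + 1), which is always a power of two. The rounding itself, modeled on integer addresses for clarity (an illustrative stand-in, not the kernel macro):

```c
#include <assert.h>
#include <stdint.h>

/* Round an address up to the next multiple of `alignment`,
 * which must be a power of two -- the same arithmetic PTR_ALIGN
 * performs on the tfm context pointer. */
static uintptr_t ptr_align_up(uintptr_t p, uintptr_t alignment)
{
    return (p + alignment - 1) & ~(alignment - 1);
}
```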
207
208static inline struct crypto_instance *crypto_tfm_alg_instance(
209 struct crypto_tfm *tfm)
210{
211 return container_of(tfm->__crt_alg, struct crypto_instance, alg);
212}
213
214static inline void *crypto_instance_ctx(struct crypto_instance *inst)
215{
216 return inst->__ctx;
217}
218
219static inline struct ablkcipher_alg *crypto_ablkcipher_alg(
220 struct crypto_ablkcipher *tfm)
221{
222 return &crypto_ablkcipher_tfm(tfm)->__crt_alg->cra_ablkcipher;
223}
224
225static inline void *crypto_ablkcipher_ctx(struct crypto_ablkcipher *tfm)
226{
227 return crypto_tfm_ctx(&tfm->base);
228}
229
230static inline void *crypto_ablkcipher_ctx_aligned(struct crypto_ablkcipher *tfm)
231{
232 return crypto_tfm_ctx_aligned(&tfm->base);
233}
234
235static inline struct aead_alg *crypto_aead_alg(struct crypto_aead *tfm)
236{
237 return &crypto_aead_tfm(tfm)->__crt_alg->cra_aead;
238}
239
240static inline void *crypto_aead_ctx(struct crypto_aead *tfm)
241{
242 return crypto_tfm_ctx(&tfm->base);
243}
244
245static inline struct crypto_instance *crypto_aead_alg_instance(
246 struct crypto_aead *aead)
247{
248 return crypto_tfm_alg_instance(&aead->base);
249}
250
251static inline struct crypto_blkcipher *crypto_spawn_blkcipher(
252 struct crypto_spawn *spawn)
253{
254 u32 type = CRYPTO_ALG_TYPE_BLKCIPHER;
255 u32 mask = CRYPTO_ALG_TYPE_MASK;
256
257 return __crypto_blkcipher_cast(crypto_spawn_tfm(spawn, type, mask));
258}
259
260static inline void *crypto_blkcipher_ctx(struct crypto_blkcipher *tfm)
261{
262 return crypto_tfm_ctx(&tfm->base);
263}
264
265static inline void *crypto_blkcipher_ctx_aligned(struct crypto_blkcipher *tfm)
266{
267 return crypto_tfm_ctx_aligned(&tfm->base);
268}
269
270static inline struct crypto_cipher *crypto_spawn_cipher(
271 struct crypto_spawn *spawn)
272{
273 u32 type = CRYPTO_ALG_TYPE_CIPHER;
274 u32 mask = CRYPTO_ALG_TYPE_MASK;
275
276 return __crypto_cipher_cast(crypto_spawn_tfm(spawn, type, mask));
277}
278
279static inline struct cipher_alg *crypto_cipher_alg(struct crypto_cipher *tfm)
280{
281 return &crypto_cipher_tfm(tfm)->__crt_alg->cra_cipher;
282}
283
284static inline struct crypto_hash *crypto_spawn_hash(struct crypto_spawn *spawn)
285{
286 u32 type = CRYPTO_ALG_TYPE_HASH;
287 u32 mask = CRYPTO_ALG_TYPE_HASH_MASK;
288
289 return __crypto_hash_cast(crypto_spawn_tfm(spawn, type, mask));
290}
291
292static inline void *crypto_hash_ctx(struct crypto_hash *tfm)
293{
294 return crypto_tfm_ctx(&tfm->base);
295}
296
297static inline void *crypto_hash_ctx_aligned(struct crypto_hash *tfm)
298{
299 return crypto_tfm_ctx_aligned(&tfm->base);
300}
301
302static inline void blkcipher_walk_init(struct blkcipher_walk *walk,
303 struct scatterlist *dst,
304 struct scatterlist *src,
305 unsigned int nbytes)
306{
307 walk->in.sg = src;
308 walk->out.sg = dst;
309 walk->total = nbytes;
310}
311
312static inline void ablkcipher_walk_init(struct ablkcipher_walk *walk,
313 struct scatterlist *dst,
314 struct scatterlist *src,
315 unsigned int nbytes)
316{
317 walk->in.sg = src;
318 walk->out.sg = dst;
319 walk->total = nbytes;
320 INIT_LIST_HEAD(&walk->buffers);
321}
322
323static inline void ablkcipher_walk_complete(struct ablkcipher_walk *walk)
324{
325 if (unlikely(!list_empty(&walk->buffers)))
326 __ablkcipher_walk_complete(walk);
327}
328
329static inline struct crypto_async_request *crypto_get_backlog(
330 struct crypto_queue *queue)
331{
332 return queue->backlog == &queue->list ? NULL :
333 container_of(queue->backlog, struct crypto_async_request, list);
334}
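crypto_get_backlog() above relies on the usual kernel intrusive-list convention: the queue's `list` member is a sentinel head, and `backlog` pointing at that sentinel means "no backlogged request". A minimal model with simplified stand-in types (not the kernel's struct crypto_queue):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the backlog test: NULL when backlog points
 * at the sentinel list head, otherwise the backlogged node. */
struct node { struct node *next; };

struct queue {
    struct node list;      /* sentinel head */
    struct node *backlog;  /* == &list when nothing is backlogged */
};

static struct node *get_backlog(struct queue *q)
{
    return q->backlog == &q->list ? NULL : q->backlog;
}
```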
335
336static inline int ablkcipher_enqueue_request(struct crypto_queue *queue,
337 struct ablkcipher_request *request)
338{
339 return crypto_enqueue_request(queue, &request->base);
340}
341
342static inline struct ablkcipher_request *ablkcipher_dequeue_request(
343 struct crypto_queue *queue)
344{
345 return ablkcipher_request_cast(crypto_dequeue_request(queue));
346}
347
348static inline void *ablkcipher_request_ctx(struct ablkcipher_request *req)
349{
350 return req->__ctx;
351}
352
353static inline int ablkcipher_tfm_in_queue(struct crypto_queue *queue,
354 struct crypto_ablkcipher *tfm)
355{
356 return crypto_tfm_in_queue(queue, crypto_ablkcipher_tfm(tfm));
357}
358
359static inline void *aead_request_ctx(struct aead_request *req)
360{
361 return req->__ctx;
362}
363
364static inline void aead_request_complete(struct aead_request *req, int err)
365{
366 req->base.complete(&req->base, err);
367}
368
369static inline u32 aead_request_flags(struct aead_request *req)
370{
371 return req->base.flags;
372}
373
374static inline struct crypto_alg *crypto_get_attr_alg(struct rtattr **tb,
375 u32 type, u32 mask)
376{
377 return crypto_attr_alg(tb[1], type, mask);
378}
379
380/*
381 * Returns CRYPTO_ALG_ASYNC if type/mask requires the use of sync algorithms.
382 * Otherwise returns zero.
383 */
384static inline int crypto_requires_sync(u32 type, u32 mask)
385{
386 return (type ^ CRYPTO_ALG_ASYNC) &amp; mask &amp; CRYPTO_ALG_ASYNC;
387}
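The XOR-and-mask test in crypto_requires_sync() is nonzero exactly when the caller's mask selects the ASYNC bit and the type requests it clear, i.e. when a synchronous algorithm is required. A stand-alone model (CRYPTO_ALG_ASYNC is 0x80 in the kernel headers this paste comes from):

```c
#include <assert.h>
#include <stdint.h>

#define CRYPTO_ALG_ASYNC 0x00000080u

/* Nonzero iff mask cares about the ASYNC bit and type has it clear:
 * (type ^ ASYNC) sets the ASYNC bit only when type lacks it, and the
 * two ANDs keep just that bit when the mask selects it. */
static uint32_t requires_sync(uint32_t type, uint32_t mask)
{
    return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC;
}
```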
388
389#endif /* _CRYPTO_ALGAPI_H */
390

Wed 20 Feb 2013 05:44:03
>>43739397
>205#endif /* MATH_EMU_DOUBLE_H */
Fuck, I laughed like a madman.

Wed 20 Feb 2013 05:45:39
>>43739465
At what, exactly?

Wed 20 Feb 2013 05:46:49
>>43738676
>During the act I was just weeping with happiness. Literal tears falling. He seemed so utterly dear and beloved. The trust made my head spin. That I was doing such a thing with him.
>(thank god he didn't see)
Something is wrong with this fucked-up world.
Anyway, all the best to you two, if you're not green Shrek and I'm not a talking donkey.

Wed 20 Feb 2013 05:47:21
LXR linux/include/xen/xenbus.h

1/******************************************************************************
2 * xenbus.h
3 *
4 * Talks to Xen Store to figure out what devices we have.
5 *
6 * Copyright (C) 2005 Rusty Russell, IBM Corporation
7 * Copyright (C) 2005 XenSource Ltd.
8 *
9 * This program is free software; you can redistribute it and/or
10 * modify it under the terms of the GNU General Public License version 2
11 * as published by the Free Software Foundation; or, when distributed
12 * separately from the Linux kernel or incorporated into other
13 * software packages, subject to the following license:
14 *
15 * Permission is hereby granted, free of charge, to any person obtaining a copy
16 * of this source file (the "Software"), to deal in the Software without
17 * restriction, including without limitation the rights to use, copy, modify,
18 * merge, publish, distribute, sublicense, and/or sell copies of the Software,
19 * and to permit persons to whom the Software is furnished to do so, subject to
20 * the following conditions:
21 *
22 * The above copyright notice and this permission notice shall be included in
23 * all copies or substantial portions of the Software.
24 *
25 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
26 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
27 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
28 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
29 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
30 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
31 * IN THE SOFTWARE.
32 */
33
34#ifndef _XEN_XENBUS_H
35#define _XEN_XENBUS_H
36
37#include <linux/device.h>
38#include <linux/notifier.h>
39#include <linux/mutex.h>
40#include <linux/export.h>
41#include <linux/completion.h>
42#include <linux/init.h>
43#include <linux/slab.h>
44#include <xen/interface/xen.h>
45#include <xen/interface/grant_table.h>
46#include <xen/interface/io/xenbus.h>
47#include <xen/interface/io/xs_wire.h>
48
49/* Register callback to watch this node. */
50struct xenbus_watch
51{
52 struct list_head list;
53
54 /* Path being watched. */
55 const char *node;
56
57 /* Callback (executed in a process context with no locks held). */
58 void (*callback)(struct xenbus_watch *,
59 const char **vec, unsigned int len);
60};
61
62
63/* A xenbus device. */
64struct xenbus_device {
65 const char *devicetype;
66 const char *nodename;
67 const char *otherend;
68 int otherend_id;
69 struct xenbus_watch otherend_watch;
70 struct device dev;
71 enum xenbus_state state;
72 struct completion down;
73};
74
75static inline struct xenbus_device *to_xenbus_device(struct device *dev)
76{
77 return container_of(dev, struct xenbus_device, dev);
78}
79
80struct xenbus_device_id
81{
82 /* .../device/<device_type>/<identifier> */
83 char devicetype[32]; /* General class of device. */
84};
85
86/* A xenbus driver. */
87struct xenbus_driver {
88 const struct xenbus_device_id *ids;
89 int (*probe)(struct xenbus_device *dev,
90 const struct xenbus_device_id *id);
91 void (*otherend_changed)(struct xenbus_device *dev,
92 enum xenbus_state backend_state);
93 int (*remove)(struct xenbus_device *dev);
94 int (*suspend)(struct xenbus_device *dev);
95 int (*resume)(struct xenbus_device *dev);
96 int (*uevent)(struct xenbus_device *, struct kobj_uevent_env *);
97 struct device_driver driver;
98 int (*read_otherend_details)(struct xenbus_device *dev);
99 int (*is_ready)(struct xenbus_device *dev);
100};
101
102#define DEFINE_XENBUS_DRIVER(var, drvname, methods...) \
103struct xenbus_driver var ## _driver = { \
104 .driver.name = drvname + 0 ?: var ## _ids->devicetype, \
105 .driver.owner = THIS_MODULE, \
106 .ids = var ## _ids, ## methods \
107}
108
109static inline struct xenbus_driver *to_xenbus_driver(struct device_driver *drv)
110{
111 return container_of(drv, struct xenbus_driver, driver);
112}
113
114int __must_check xenbus_register_frontend(struct xenbus_driver *);
115int __must_check xenbus_register_backend(struct xenbus_driver *);
116
117void xenbus_unregister_driver(struct xenbus_driver *drv);
118
119struct xenbus_transaction
120{
121 u32 id;
122};
123
124/* Nil transaction ID. */
125#define XBT_NIL ((struct xenbus_transaction) { 0 })
126
127char **xenbus_directory(struct xenbus_transaction t,
128 const char *dir, const char *node, unsigned int *num);
129void *xenbus_read(struct xenbus_transaction t,
130 const char *dir, const char *node, unsigned int *len);
131int xenbus_write(struct xenbus_transaction t,
132 const char *dir, const char *node, const char *string);
133int xenbus_mkdir(struct xenbus_transaction t,
134 const char *dir, const char *node);
135int xenbus_exists(struct xenbus_transaction t,
136 const char *dir, const char *node);
137int xenbus_rm(struct xenbus_transaction t, const char *dir, const char *node);
138int xenbus_transaction_start(struct xenbus_transaction *t);
139int xenbus_transaction_end(struct xenbus_transaction t, int abort);
140
141/* Single read and scanf: returns -errno or num scanned if > 0. */
142__scanf(4, 5)
143int xenbus_scanf(struct xenbus_transaction t,
144 const char *dir, const char *node, const char *fmt, ...);
145
146/* Single printf and write: returns -errno or 0. */
147__printf(4, 5)
148int xenbus_printf(struct xenbus_transaction t,
149 const char *dir, const char *node, const char *fmt, ...);
150
151/* Generic read function: NULL-terminated triples of name,
152 * sprintf-style type string, and pointer. Returns 0 or errno.*/
153int xenbus_gather(struct xenbus_transaction t, const char *dir, ...);
154
155/* notifer routines for when the xenstore comes up */
156extern int xenstored_ready;
157int register_xenstore_notifier(struct notifier_block *nb);
158void unregister_xenstore_notifier(struct notifier_block *nb);
159
160int register_xenbus_watch(struct xenbus_watch *watch);
161void unregister_xenbus_watch(struct xenbus_watch *watch);
162void xs_suspend(void);
163void xs_resume(void);
164void xs_suspend_cancel(void);
165
166 /* Used by xenbus_dev to borrow kernel's store connection. */
167void *xenbus_dev_request_and_reply(struct xsd_sockmsg *msg);
168
169struct work_struct;
170
171/* Prepare for domain suspend: then resume or cancel the suspend. */
172void xenbus_suspend(void);
173void xenbus_resume(void);
174void xenbus_probe(struct work_struct *);
175void xenbus_suspend_cancel(void);
176
177#define XENBUS_IS_ERR_READ(str) ({ \
178 if (!IS_ERR(str) && strlen(str) == 0) { \
179 kfree(str); \
180 str = ERR_PTR(-ERANGE); \
181 } \
182 IS_ERR(str); \
183})
184
185#define XENBUS_EXIST_ERR(err) ((err) == -ENOENT || (err) == -ERANGE)
186
187int xenbus_watch_path(struct xenbus_device *dev, const char *path,
188 struct xenbus_watch *watch,
189 void (*callback)(struct xenbus_watch *,
190 const char **, unsigned int));
191__printf(4, 5)
192int xenbus_watch_pathfmt(struct xenbus_device *dev, struct xenbus_watch *watch,
193 void (*callback)(struct xenbus_watch *,
194 const char **, unsigned int),
195 const char *pathfmt, ...);
196
197int xenbus_switch_state(struct xenbus_device *dev, enum xenbus_state new_state);
198int xenbus_grant_ring(struct xenbus_device *dev, unsigned long ring_mfn);
199int xenbus_map_ring_valloc(struct xenbus_device *dev,
200 int gnt_ref, void **vaddr);
201int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
202 grant_handle_t *handle, void *vaddr);
203
204int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr);
205int xenbus_unmap_ring(struct xenbus_device *dev,
206 grant_handle_t handle, void *vaddr);
207
208int xenbus_alloc_evtchn(struct xenbus_device *dev, int *port);
209int xenbus_bind_evtchn(struct xenbus_device *dev, int remote_port, int *port);
210int xenbus_free_evtchn(struct xenbus_device *dev, int port);
211
212enum xenbus_state xenbus_read_driver_state(const char *path);
213
214__printf(3, 4)
215void xenbus_dev_error(struct xenbus_device *dev, int err, const char *fmt, ...);
216__printf(3, 4)
217void xenbus_dev_fatal(struct xenbus_device *dev, int err, const char *fmt, ...);
218
219const char *xenbus_strstate(enum xenbus_state state);
220int xenbus_dev_is_online(struct xenbus_device *dev);
221int xenbus_frontend_closed(struct xenbus_device *dev);
222
223#endif /* _XEN_XENBUS_H */
224

Wed 20 Feb 2013 05:49:02
LXR linux/arch/x86/Kconfig.debug

1menu "Kernel hacking"
2
3config TRACE_IRQFLAGS_SUPPORT
4 def_bool y
5
6source "lib/Kconfig.debug"
7
8config STRICT_DEVMEM
9 bool "Filter access to /dev/mem"
10 ---help---
11 If this option is disabled, you allow userspace (root) access to all
12 of memory, including kernel and userspace memory. Accidental
13 access to this is obviously disastrous, but specific access can
14 be used by people debugging the kernel. Note that with PAT support
15 enabled, even in this case there are restrictions on /dev/mem
16 use due to the cache aliasing requirements.
17
18 If this option is switched on, the /dev/mem file only allows
19 userspace access to PCI space and the BIOS code and data regions.
20 This is sufficient for dosemu and X and all common users of
21 /dev/mem.
22
23 If in doubt, say Y.
24
25config X86_VERBOSE_BOOTUP
26 bool "Enable verbose x86 bootup info messages"
27 default y
28 ---help---
29 Enables the informational output from the decompression stage
30 (e.g. bzImage) of the boot. If you disable this you will still
31 see errors. Disable this if you want silent bootup.
32
33config EARLY_PRINTK
34 bool "Early printk" if EXPERT
35 default y
36 ---help---
37 Write kernel log output directly into the VGA buffer or to a serial
38 port.
39
40 This is useful for kernel debugging when your machine crashes very
41 early before the console code is initialized. For normal operation
42 it is not recommended because it looks ugly and doesn't cooperate
43 with klogd/syslogd or the X server. You should normally say N here,
44 unless you want to debug such a crash.
45
46config EARLY_PRINTK_INTEL_MID
47 bool "Early printk for Intel MID platform support"
48 depends on EARLY_PRINTK && X86_INTEL_MID
49
50config EARLY_PRINTK_DBGP
51 bool "Early printk via EHCI debug port"
52 depends on EARLY_PRINTK && PCI
53 ---help---
54 Write kernel log output directly into the EHCI debug port.
55
56 This is useful for kernel debugging when your machine crashes very
57 early before the console code is initialized. For normal operation
58 it is not recommended because it looks ugly and doesn't cooperate
59 with klogd/syslogd or the X server. You should normally say N here,
60 unless you want to debug such a crash. You need a USB debug device.
61
62config DEBUG_STACKOVERFLOW
63 bool "Check for stack overflows"
64 depends on DEBUG_KERNEL
65 ---help---
66 Say Y here if you want to check for overflows of the kernel, IRQ
67 and exception stacks. This option will cause messages to be printed
68 when free stack space drops below a certain
69 limit.
70 If in doubt, say "N".
71
72config X86_PTDUMP
73 bool "Export kernel pagetable layout to userspace via debugfs"
74 depends on DEBUG_KERNEL
75 select DEBUG_FS
76 ---help---
77 Say Y here if you want to show the kernel pagetable layout in a
78 debugfs file. This information is only useful for kernel developers
79 who are working in architecture specific areas of the kernel.
80 It is probably not a good idea to enable this feature in a production
81 kernel.
82 If in doubt, say "N"
83
84config DEBUG_RODATA
85 bool "Write protect kernel read-only data structures"
86 default y
87 depends on DEBUG_KERNEL
88 ---help---
89 Mark the kernel read-only data as write-protected in the pagetables,
90 in order to catch accidental (and incorrect) writes to such const
91 data. This is recommended so that we can catch kernel bugs sooner.
92 If in doubt, say "Y".
93
94config DEBUG_RODATA_TEST
95 bool "Testcase for the DEBUG_RODATA feature"
96 depends on DEBUG_RODATA
97 default y
98 ---help---
99 This option enables a testcase for the DEBUG_RODATA
100 feature as well as for the change_page_attr() infrastructure.
101 If in doubt, say "N"
102
103config DEBUG_SET_MODULE_RONX
104 bool "Set loadable kernel module data as NX and text as RO"
105 depends on MODULES
106 ---help---
107 This option helps catch unintended modifications to loadable
108 kernel module's text and read-only data. It also prevents execution
109 of module data. Such protection may interfere with run-time code
110 patching and dynamic kernel tracing - and they might also protect
111 against certain classes of kernel exploits.
112 If in doubt, say "N".
113
114config DEBUG_NX_TEST
115 tristate "Testcase for the NX non-executable stack feature"
116 depends on DEBUG_KERNEL && m
117 ---help---
118 This option enables a testcase for the CPU NX capability
119 and the software setup of this feature.
120 If in doubt, say "N"
121
122config DOUBLEFAULT
123 default y
124 bool "Enable doublefault exception handler" if EXPERT
125 depends on X86_32
126 ---help---
127 This option allows trapping of rare doublefault exceptions that
128 would otherwise cause a system to silently reboot. Disabling this
129 option saves about 4k and might cause you much additional grey
130 hair.
131
132config DEBUG_TLBFLUSH
133 bool "Set upper limit of TLB entries to flush one-by-one"
134 depends on DEBUG_KERNEL && (X86_64 || X86_INVLPG)
135 ---help---
136
137 X86-only for now.
138
139 This option allows the user to tune the amount of TLB entries the
140 kernel flushes one-by-one instead of doing a full TLB flush. In
141 certain situations, the former is cheaper. This is controlled by the
142 tlb_flushall_shift knob under /sys/kernel/debug/x86. If you set it
143 to -1, the code flushes the whole TLB unconditionally. Otherwise,
144 for positive values of it, the kernel will use single TLB entry
145 invalidating instructions according to the following formula:
146
147 flush_entries <= active_tlb_entries / 2^tlb_flushall_shift
148
149 If in doubt, say "N".
150
151config IOMMU_DEBUG
152 bool "Enable IOMMU debugging"
153 depends on GART_IOMMU && DEBUG_KERNEL
154 depends on X86_64
155 ---help---
156 Force the IOMMU to on even when you have less than 4GB of
157 memory and add debugging code. On overflow always panic. And
158 allow to enable IOMMU leak tracing. Can be disabled at boot
159 time with iommu=noforce. This will also enable scatter gather
160 list merging. Currently not recommended for production
161 code. When you use it make sure you have a big enough
162 IOMMU/AGP aperture. Most of the options enabled by this can
163 be set more finegrained using the iommu= command line
164 options. See Documentation/x86/x86_64/boot-options.txt for more
165 details.
166
167config IOMMU_STRESS
168 bool "Enable IOMMU stress-test mode"
169 ---help---
170 This option disables various optimizations in IOMMU related
171 code to do real stress testing of the IOMMU code. This option
172 will cause a performance drop and should only be enabled for
173 testing.
174
175config IOMMU_LEAK
176 bool "IOMMU leak tracing"
177 depends on IOMMU_DEBUG && DMA_API_DEBUG
178 ---help---
179 Add a simple leak tracer to the IOMMU code. This is useful when you
180 are debugging a buggy device driver that leaks IOMMU mappings.
181
182config HAVE_MMIOTRACE_SUPPORT
183 def_bool y
184
185config X86_DECODER_SELFTEST
186 bool "x86 instruction decoder selftest"
187 depends on DEBUG_KERNEL && KPROBES
188 ---help---
189 Perform x86 instruction decoder selftests at build time.
190 This option is useful for checking the sanity of x86 instruction
191 decoder code.
192 If unsure, say "N".
193
194#
195# IO delay types:
196#
197
198config IO_DELAY_TYPE_0X80
199 int
200 default "0"
201
202config IO_DELAY_TYPE_0XED
203 int
204 default "1"
205
206config IO_DELAY_TYPE_UDELAY
207 int
208 default "2"
209
210config IO_DELAY_TYPE_NONE
211 int
212 default "3"
213
214choice
215 prompt "IO delay type"
216 default IO_DELAY_0X80
217
218config IO_DELAY_0X80
219 bool "port 0x80 based port-IO delay [recommended]"
220 ---help---
221 This is the traditional Linux IO delay used for in/out_p.
222 It is the most tested hence safest selection here.
223
224config IO_DELAY_0XED
225 bool "port 0xed based port-IO delay"
226 ---help---
227 Use port 0xed as the IO delay. This frees up port 0x80 which is
228 often used as a hardware-debug port.
229
230config IO_DELAY_UDELAY
231 bool "udelay based port-IO delay"
232 ---help---
233 Use udelay(2) as the IO delay method. This provides the delay
234 while not having any side-effect on the IO port space.
235
236config IO_DELAY_NONE
237 bool "no port-IO delay"
238 ---help---
239 No port-IO delay. Will break on old boxes that require port-IO
240 delay for certain operations. Should work on most new machines.
241
242endchoice
243
244if IO_DELAY_0X80
245config DEFAULT_IO_DELAY_TYPE
246 int
247 default IO_DELAY_TYPE_0X80
248endif
249
250if IO_DELAY_0XED
251config DEFAULT_IO_DELAY_TYPE
252 int
253 default IO_DELAY_TYPE_0XED
254endif
255
256if IO_DELAY_UDELAY
257config DEFAULT_IO_DELAY_TYPE
258 int
259 default IO_DELAY_TYPE_UDELAY
260endif
261
262if IO_DELAY_NONE
263config DEFAULT_IO_DELAY_TYPE
264 int
265 default IO_DELAY_TYPE_NONE
266endif
267
268config DEBUG_BOOT_PARAMS
269 bool "Debug boot parameters"
270 depends on DEBUG_KERNEL
271 depends on DEBUG_FS
272 ---help---
273 This option will cause struct boot_params to be exported via debugfs.
274
275config CPA_DEBUG
276 bool "CPA self-test code"
277 depends on DEBUG_KERNEL
278 ---help---
279 Do change_page_attr() self-tests every 30 seconds.
280
281config OPTIMIZE_INLINING
282 bool "Allow gcc to uninline functions marked 'inline'"
283 ---help---
284 This option determines if the kernel forces gcc to inline the functions
285 developers have marked 'inline'. Doing so takes away freedom from gcc to
286 do what it thinks is best, which is desirable for the gcc 3.x series of
287 compilers. The gcc 4.x series have a rewritten inlining algorithm and
288 enabling this option will generate a smaller kernel there. Hopefully
289 this algorithm is so good that allowing gcc 4.x and above to make the
290 decision will become the default in the future. Until then this option
291 is there to test gcc for this.
292
293 If unsure, say N.
294
295config DEBUG_STRICT_USER_COPY_CHECKS
296 bool "Strict copy size checks"
297 depends on DEBUG_KERNEL && !TRACE_BRANCH_PROFILING
298 ---help---
299 Enabling this option turns a certain set of sanity checks for user
300 copy operations into compile time failures.
301
302 The copy_from_user() etc checks are there to help test if there
303 are sufficient security checks on the length argument of
304 the copy operation, by having gcc prove that the argument is
305 within bounds.
306
307 If unsure, or if you run an older (pre 4.4) gcc, say N.
308
309config DEBUG_NMI_SELFTEST
310 bool "NMI Selftest"
311 depends on DEBUG_KERNEL && X86_LOCAL_APIC
312 ---help---
313 Enabling this option turns on a quick NMI selftest to verify
314 that the NMI behaves correctly.
315
316 This might help diagnose strange hangs that rely on NMI to
317 function properly.
318
319 If unsure, say N.
320
321endmenu
322

Wed 20 Feb 2013 05:49:50
>>43739496
I'll tell you another secret. There are several different phrases like that in there, all about МАТЬ_ЕМУ_ЧТОТОТАМ_Х.
Fuck, and I'm not even from Omsk. HONESTLY.

Wed 20 Feb 2013 05:50:53
LXR linux/arch/x86/Makefile

1# Unified Makefile for i386 and x86_64
2
3# select defconfig based on actual architecture
4ifeq ($(ARCH),x86)
5 KBUILD_DEFCONFIG := i386_defconfig
6else
7 KBUILD_DEFCONFIG := $(ARCH)_defconfig
8endif
9
10# BITS is used as extension for files which are available in a 32 bit
11# and a 64 bit version to simplify shared Makefiles.
12# e.g.: obj-y += foo_$(BITS).o
13export BITS
14
15ifeq ($(CONFIG_X86_32),y)
16 BITS := 32
17 UTS_MACHINE := i386
18 CHECKFLAGS += -D__i386__
19
20 biarch := $(call cc-option,-m32)
21 KBUILD_AFLAGS += $(biarch)
22 KBUILD_CFLAGS += $(biarch)
23
24 ifdef CONFIG_RELOCATABLE
25 LDFLAGS_vmlinux := --emit-relocs
26 endif
27
28 KBUILD_CFLAGS += -msoft-float -mregparm=3 -freg-struct-return
29
30 # Never want PIC in a 32-bit kernel, prevent breakage with GCC built
31 # with nonstandard options
32 KBUILD_CFLAGS += -fno-pic
33
34 # prevent gcc from keeping the stack 16 byte aligned
35 KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
36
37 # Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use
38 # a lot more stack due to the lack of sharing of stacklots:
39 KBUILD_CFLAGS += $(call cc-ifversion, -lt, 0400, \
40 $(call cc-option,-fno-unit-at-a-time))
41
42 # CPU-specific tuning. Anything which can be shared with UML should go here.
43 include $(srctree)/arch/x86/Makefile_32.cpu
44 KBUILD_CFLAGS += $(cflags-y)
45
46 # temporary until string.h is fixed
47 KBUILD_CFLAGS += -ffreestanding
48else
49 BITS := 64
50 UTS_MACHINE := x86_64
51 CHECKFLAGS += -D__x86_64__ -m64
52
53 KBUILD_AFLAGS += -m64
54 KBUILD_CFLAGS += -m64
55
56 # Use -mpreferred-stack-boundary=3 if supported.
57 KBUILD_CFLAGS += $(call cc-option,-mno-sse -mpreferred-stack-boundary=3)
58
59 # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
60 cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
61 cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
62
63 cflags-$(CONFIG_MCORE2) += \
64 $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
65 cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
66 $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
67 cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
68 KBUILD_CFLAGS += $(cflags-y)
69
70 KBUILD_CFLAGS += -mno-red-zone
71 KBUILD_CFLAGS += -mcmodel=kernel
72
73 # -funit-at-a-time shrinks the kernel .text considerably
74 # unfortunately it makes reading oopses harder.
75 KBUILD_CFLAGS += $(call cc-option,-funit-at-a-time)
76
77 # this works around some issues with generating unwind tables in older gccs
78 # newer gccs do it by default
79 KBUILD_CFLAGS += -maccumulate-outgoing-args
80endif
81
82ifdef CONFIG_CC_STACKPROTECTOR
83 cc_has_sp := $(srctree)/scripts/gcc-x86_$(BITS)-has-stack-protector.sh
84 ifeq ($(shell $(CONFIG_SHELL) $(cc_has_sp) $(CC) $(KBUILD_CPPFLAGS) $(biarch)),y)
85 stackp-y := -fstack-protector
86 KBUILD_CFLAGS += $(stackp-y)
87 else
88 $(warning stack protector enabled but no compiler support)
89 endif
90endif
91
92ifdef CONFIG_X86_X32
93 x32_ld_ok := $(call try-run,\
94 /bin/echo -e '1: .quad 1b' | \
95 $(CC) $(KBUILD_AFLAGS) -c -x assembler -o "$$TMP" - && \
96 $(OBJCOPY) -O elf32-x86-64 "$$TMP" "$$TMPO" && \
97 $(LD) -m elf32_x86_64 "$$TMPO" -o "$$TMP",y,n)
98 ifeq ($(x32_ld_ok),y)
99 CONFIG_X86_X32_ABI := y
100 KBUILD_AFLAGS += -DCONFIG_X86_X32_ABI
101 KBUILD_CFLAGS += -DCONFIG_X86_X32_ABI
102 else
103 $(warning CONFIG_X86_X32 enabled but no binutils support)
104 endif
105endif
106export CONFIG_X86_X32_ABI
107
108# Don't unroll struct assignments with kmemcheck enabled
109ifeq ($(CONFIG_KMEMCHECK),y)
110 KBUILD_CFLAGS += $(call cc-option,-fno-builtin-memcpy)
111endif
112
113# Stack pointer is addressed differently for 32 bit and 64 bit x86
114sp-$(CONFIG_X86_32) := esp
115sp-$(CONFIG_X86_64) := rsp
116
117# do binutils support CFI?
118cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
119# is .cfi_signal_frame supported too?
120cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
121cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
122
123# does binutils support specific instructions?
124asinstr := $(call as-instr,fxsaveq (%rax),-DCONFIG_AS_FXSAVEQ=1)
125avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
126avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
127
128KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
129KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
130
131LDFLAGS := -m elf_$(UTS_MACHINE)
132
133# Speed up the build
134KBUILD_CFLAGS += -pipe
135# Workaround for a gcc prerelease that unfortunately was shipped in a suse release
136KBUILD_CFLAGS += -Wno-sign-compare
137#
138KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
139# prevent gcc from generating any FP code by mistake
140KBUILD_CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
141KBUILD_CFLAGS += $(call cc-option,-mno-avx,)
142
143KBUILD_CFLAGS += $(mflags-y)
144KBUILD_AFLAGS += $(mflags-y)
145
146archscripts: scripts_basic
147 $(Q)$(MAKE) $(build)=arch/x86/tools relocs
148
149###
150# Syscall table generation
151
152archheaders:
153 $(Q)$(MAKE) $(build)=arch/x86/syscalls all
154
155###
156# Kernel objects
157
158head-y := arch/x86/kernel/head_$(BITS).o
159head-y += arch/x86/kernel/head$(BITS).o
160head-y += arch/x86/kernel/head.o
161
162libs-y += arch/x86/lib/
163
164# See arch/x86/Kbuild for content of core part of the kernel
165core-y += arch/x86/
166
167# drivers-y are linked after core-y
168drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
169drivers-$(CONFIG_PCI) += arch/x86/pci/
170
171# must be linked after kernel/
172drivers-$(CONFIG_OPROFILE) += arch/x86/oprofile/
173
174# suspend and hibernation support
175drivers-$(CONFIG_PM) += arch/x86/power/
176
177drivers-$(CONFIG_FB) += arch/x86/video/
178
179####
180# boot loader support. Several targets are kept for legacy purposes
181
182boot := arch/x86/boot
183
184BOOT_TARGETS = bzlilo bzdisk fdimage fdimage144 fdimage288 isoimage
185
186PHONY += bzImage $(BOOT_TARGETS)
187
188# Default kernel to build
189all: bzImage
190
191# KBUILD_IMAGE specify target image being built
192KBUILD_IMAGE := $(boot)/bzImage
193
194bzImage: vmlinux
195ifeq ($(CONFIG_X86_DECODER_SELFTEST),y)
196 $(Q)$(MAKE) $(build)=arch/x86/tools posttest
197endif
198 $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
199 $(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
200 $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
201
202$(BOOT_TARGETS): vmlinux
203 $(Q)$(MAKE) $(build)=$(boot) $@
204
205PHONY += install
206install:
207 $(Q)$(MAKE) $(build)=$(boot) $@
208
209PHONY += vdso_install
210vdso_install:
211 $(Q)$(MAKE) $(build)=arch/x86/vdso $@
212
213archclean:
214 $(Q)rm -rf $(objtree)/arch/i386
215 $(Q)rm -rf $(objtree)/arch/x86_64
216 $(Q)$(MAKE) $(clean)=$(boot)
217 $(Q)$(MAKE) $(clean)=arch/x86/tools
218
219define archhelp
220 echo '* bzImage - Compressed kernel image (arch/x86/boot/bzImage)'
221 echo ' install - Install kernel using'
222 echo ' (your) ~/bin/$(INSTALLKERNEL) or'
223 echo ' (distribution) /sbin/$(INSTALLKERNEL) or'
224 echo ' install to $$(INSTALL_PATH) and run lilo'
225 echo ' fdimage - Create 1.4MB boot floppy image (arch/x86/boot/fdimage)'
226 echo ' fdimage144 - Create 1.4MB boot floppy image (arch/x86/boot/fdimage)'
227 echo ' fdimage288 - Create 2.8MB boot floppy image (arch/x86/boot/fdimage)'
228 echo ' isoimage - Create a boot CD-ROM image (arch/x86/boot/image.iso)'
229 echo ' bzdisk/fdimage*/isoimage also accept:'
230 echo ' FDARGS="..." arguments for the booted kernel'
231 echo ' FDINITRD=file initrd for the booted kernel'
232endef
233

Wed 20 Feb 2013 05:51:37
>>43739570
You're an ARCH user on top of that. I'm about to start crying like OP.

Wed 20 Feb 2013 05:54:59
LXR linux/arch/x86/pci/sta2x11-fixup.c

1/*
2 * arch/x86/pci/sta2x11-fixup.c
3 * glue code for lib/swiotlb.c and DMA translation between STA2x11
4 * AMBA memory mapping and the X86 memory mapping
5 *
6 * ST Microelectronics ConneXt (STA2X11/STA2X10)
7 *
8 * Copyright (c) 2010-2011 Wind River Systems, Inc.
9 *
10 * This program is free software; you can redistribute it and/or modify
11 * it under the terms of the GNU General Public License version 2 as
12 * published by the Free Software Foundation.
13 *
14 * This program is distributed in the hope that it will be useful,
15 * but WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
17 * See the GNU General Public License for more details.
18 *
19 * You should have received a copy of the GNU General Public License
20 * along with this program; if not, write to the Free Software
21 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
22 *
23 */
24
25#include <linux/pci.h>
26#include <linux/pci_ids.h>
27#include <linux/export.h>
28#include <linux/list.h>
29
30#define STA2X11_SWIOTLB_SIZE (4*1024*1024)
31extern int swiotlb_late_init_with_default_size(size_t default_size);
32
33/*
34 * We build a list of bus numbers that are under the ConneXt. The
35 * main bridge hosts 4 busses, which are the 4 endpoints, in order.
36 */
37#define STA2X11_NR_EP 4 /* 0..3 included */
38#define STA2X11_NR_FUNCS 8 /* 0..7 included */
39#define STA2X11_AMBA_SIZE (512 << 20)
40
41struct sta2x11_ahb_regs { /* saved during suspend */
42 u32 base, pexlbase, pexhbase, crw;
43};
44
45struct sta2x11_mapping {
46 u32 amba_base;
47 int is_suspended;
48 struct sta2x11_ahb_regs regs[STA2X11_NR_FUNCS];
49};
50
51struct sta2x11_instance {
52 struct list_head list;
53 int bus0;
54 struct sta2x11_mapping map[STA2X11_NR_EP];
55};
56
57static LIST_HEAD(sta2x11_instance_list);
58
59/* At probe time, record new instances of this bridge (likely one only) */
60static void sta2x11_new_instance(struct pci_dev *pdev)
61{
62 struct sta2x11_instance *instance;
63
64 instance = kzalloc(sizeof(*instance), GFP_ATOMIC);
65 if (!instance)
66 return;
67 /* This has a subordinate bridge, with 4 more-subordinate ones */
68 instance->bus0 = pdev->subordinate->number + 1;
69
70 if (list_empty(&sta2x11_instance_list)) {
71 int size = STA2X11_SWIOTLB_SIZE;
72 /* First instance: register your own swiotlb area */
73 dev_info(&pdev->dev, "Using SWIOTLB (size %i)\n", size);
74 if (swiotlb_late_init_with_default_size(size))
75 dev_emerg(&pdev->dev, "init swiotlb failed\n");
76 }
77 list_add(&instance->list, &sta2x11_instance_list);
78}
79DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_STMICRO, 0xcc17, sta2x11_new_instance);
80
81/*
82 * Utility functions used in this file from below
83 */
84static struct sta2x11_instance *sta2x11_pdev_to_instance(struct pci_dev *pdev)
85{
86 struct sta2x11_instance *instance;
87 int ep;
88
89 list_for_each_entry(instance, &sta2x11_instance_list, list) {
90 ep = pdev->bus->number - instance->bus0;
91 if (ep >= 0 && ep < STA2X11_NR_EP)
92 return instance;
93 }
94 return NULL;
95}
96
97static int sta2x11_pdev_to_ep(struct pci_dev *pdev)
98{
99 struct sta2x11_instance *instance;
100
101 instance = sta2x11_pdev_to_instance(pdev);
102 if (!instance)
103 return -1;
104
105 return pdev->bus->number - instance->bus0;
106}
107
108static struct sta2x11_mapping *sta2x11_pdev_to_mapping(struct pci_dev *pdev)
109{
110 struct sta2x11_instance *instance;
111 int ep;
112
113 instance = sta2x11_pdev_to_instance(pdev);
114 if (!instance)
115 return NULL;
116 ep = sta2x11_pdev_to_ep(pdev);
117 return instance->map + ep;
118}
119
120/* This is exported, as some devices need to access the MFD registers */
121struct sta2x11_instance *sta2x11_get_instance(struct pci_dev *pdev)
122{
123 return sta2x11_pdev_to_instance(pdev);
124}
125EXPORT_SYMBOL(sta2x11_get_instance);
126
127
128/**
129 * p2a - Translate physical address to STA2x11 AMBA address,
130 * used for DMA transfers to STA2x11
131 * @p: Physical address
132 * @pdev: PCI device (must be hosted within the connext)
133 */
134static dma_addr_t p2a(dma_addr_t p, struct pci_dev *pdev)
135{
136 struct sta2x11_mapping *map;
137 dma_addr_t a;
138
139 map = sta2x11_pdev_to_mapping(pdev);
140 a = p + map->amba_base;
141 return a;
142}
143
144/**
145 * a2p - Translate STA2x11 AMBA address to physical address
146 * used for DMA transfers from STA2x11
147 * @a: STA2x11 AMBA address
148 * @pdev: PCI device (must be hosted within the connext)
149 */
150static dma_addr_t a2p(dma_addr_t a, struct pci_dev *pdev)
151{
152 struct sta2x11_mapping *map;
153 dma_addr_t p;
154
155 map = sta2x11_pdev_to_mapping(pdev);
156 p = a - map->amba_base;
157 return p;
158}
159
160/**
161 * sta2x11_swiotlb_alloc_coherent - Allocate swiotlb bounce buffers
162 * returns virtual address. This is the only "special" function here.
163 * @dev: PCI device
164 * @size: Size of the buffer
165 * @dma_handle: DMA address
166 * @flags: memory flags
167 */
168static void *sta2x11_swiotlb_alloc_coherent(struct device *dev,
169 size_t size,
170 dma_addr_t *dma_handle,
171 gfp_t flags,
172 struct dma_attrs *attrs)
173{
174 void *vaddr;
175
176 vaddr = dma_generic_alloc_coherent(dev, size, dma_handle, flags, attrs);
177 if (!vaddr)
178 vaddr = swiotlb_alloc_coherent(dev, size, dma_handle, flags);
179 *dma_handle = p2a(*dma_handle, to_pci_dev(dev));
180 return vaddr;
181}
182
183/* We have our own dma_ops: the same as swiotlb but from alloc (above) */
184static struct dma_map_ops sta2x11_dma_ops = {
185 .alloc = sta2x11_swiotlb_alloc_coherent,
186 .free = swiotlb_free_coherent,
187 .map_page = swiotlb_map_page,
188 .unmap_page = swiotlb_unmap_page,
189 .map_sg = swiotlb_map_sg_attrs,
190 .unmap_sg = swiotlb_unmap_sg_attrs,
191 .sync_single_for_cpu = swiotlb_sync_single_for_cpu,
192 .sync_single_for_device = swiotlb_sync_single_for_device,
193 .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
194 .sync_sg_for_device = swiotlb_sync_sg_for_device,
195 .mapping_error = swiotlb_dma_mapping_error,
196 .dma_supported = NULL, /* FIXME: we should use this instead! */
197};
198
199/* At setup time, we use our own ops if the device is a ConneXt one */
200static void sta2x11_setup_pdev(struct pci_dev *pdev)
201{
202 struct sta2x11_instance *instance = sta2x11_pdev_to_instance(pdev);
203
204 if (!instance) /* either a sta2x11 bridge or another ST device */
205 return;
206 pci_set_consistent_dma_mask(pdev, STA2X11_AMBA_SIZE - 1);
207 pci_set_dma_mask(pdev, STA2X11_AMBA_SIZE - 1);
208 pdev->dev.archdata.dma_ops = &sta2x11_dma_ops;
209
210 /* We must enable all devices as master, for audio DMA to work */
211 pci_set_master(pdev);
212}
213DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_STMICRO, PCI_ANY_ID, sta2x11_setup_pdev);
214
215/*
216 * The following three functions are exported (used in swiotlb: FIXME)
217 */
218/**
219 * dma_capable - Check if device can manage DMA transfers (FIXME: kill it)
220 * @dev: device for a PCI device
221 * @addr: DMA address
222 * @size: DMA size
223 */
224bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
225{
226 struct sta2x11_mapping *map;
227
228 if (dev->archdata.dma_ops != &sta2x11_dma_ops) {
229 if (!dev->dma_mask)
230 return false;
231 return addr + size - 1 <= *dev->dma_mask;
232 }
233
234 map = sta2x11_pdev_to_mapping(to_pci_dev(dev));
235
236 if (!map || (addr < map->amba_base))
237 return false;
238 if (addr + size >= map->amba_base + STA2X11_AMBA_SIZE) {
239 return false;
240 }
241
242 return true;
243}
244
245/**
246 * phys_to_dma - Return the DMA AMBA address used for this STA2x11 device
247 * @dev: device for a PCI device
248 * @paddr: Physical address
249 */
250dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
251{
252	if (dev->archdata.dma_ops != &sta2x11_dma_ops)
253 return paddr;
254 return p2a(paddr, to_pci_dev(dev));
255}
256
257/**
258 * dma_to_phys - Return the physical address used for this STA2x11 DMA address
259 * @dev: device for a PCI device
260 * @daddr: STA2x11 AMBA DMA address
261 */
262phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
263{
264	if (dev->archdata.dma_ops != &sta2x11_dma_ops)
265 return daddr;
266 return a2p(daddr, to_pci_dev(dev));
267}
268
269
270/*
271 * At boot we must set up the mappings for the pcie-to-amba bridge.
272 * It involves device access, and the same happens at suspend/resume time
273 */
274
275#define AHB_MAPB 0xCA4
276#define AHB_CRW(i) (AHB_MAPB + 0 + (i) * 0x10)
277#define AHB_CRW_SZMASK 0xfffffc00UL
278#define AHB_CRW_ENABLE (1 << 0)
279#define AHB_CRW_WTYPE_MEM (2 << 1)
280#define AHB_CRW_ROE (1UL << 3) /* Relax Order Ena */
281#define AHB_CRW_NSE (1UL << 4) /* No Snoop Enable */
282#define AHB_BASE(i) (AHB_MAPB + 4 + (i) * 0x10)
283#define AHB_PEXLBASE(i) (AHB_MAPB + 8 + (i) * 0x10)
284#define AHB_PEXHBASE(i) (AHB_MAPB + 12 + (i) * 0x10)
285
286/* At probe time, enable mapping for each endpoint, using the pdev */
287static void sta2x11_map_ep(struct pci_dev *pdev)
288{
289 struct sta2x11_mapping *map = sta2x11_pdev_to_mapping(pdev);
290 int i;
291
292 if (!map)
293 return;
294	pci_read_config_dword(pdev, AHB_BASE(0), &map->amba_base);
295
296 /* Configure AHB mapping */
297 pci_write_config_dword(pdev, AHB_PEXLBASE(0), 0);
298 pci_write_config_dword(pdev, AHB_PEXHBASE(0), 0);
299	pci_write_config_dword(pdev, AHB_CRW(0), STA2X11_AMBA_SIZE |
300			AHB_CRW_WTYPE_MEM | AHB_CRW_ENABLE);
301
302 /* Disable all the other windows */
303 for (i = 1; i < STA2X11_NR_FUNCS; i++)
304 pci_write_config_dword(pdev, AHB_CRW(i), 0);
305
306	dev_info(&pdev->dev,
307 "sta2x11: Map EP %i: AMBA address %#8x-%#8x\n",
308 sta2x11_pdev_to_ep(pdev), map->amba_base,
309 map->amba_base + STA2X11_AMBA_SIZE - 1);
310}
311DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_STMICRO, PCI_ANY_ID, sta2x11_map_ep);
312
313#ifdef CONFIG_PM /* Some register values must be saved and restored */
314
315static void suspend_mapping(struct pci_dev *pdev)
316{
317 struct sta2x11_mapping *map = sta2x11_pdev_to_mapping(pdev);
318 int i;
319
320 if (!map)
321 return;
322
323 if (map->is_suspended)
324 return;
325 map->is_suspended = 1;
326
327 /* Save all window configs */
328 for (i = 0; i < STA2X11_NR_FUNCS; i++) {
329 struct sta2x11_ahb_regs *regs = map->regs + i;
330
331		pci_read_config_dword(pdev, AHB_BASE(i), &regs->base);
332		pci_read_config_dword(pdev, AHB_PEXLBASE(i), &regs->pexlbase);
333		pci_read_config_dword(pdev, AHB_PEXHBASE(i), &regs->pexhbase);
334		pci_read_config_dword(pdev, AHB_CRW(i), &regs->crw);
335 }
336}
337DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_STMICRO, PCI_ANY_ID, suspend_mapping);
338
339static void resume_mapping(struct pci_dev *pdev)
340{
341 struct sta2x11_mapping *map = sta2x11_pdev_to_mapping(pdev);
342 int i;
343
344 if (!map)
345 return;
346
347
348 if (!map->is_suspended)
349 goto out;
350 map->is_suspended = 0;
351
352 /* Restore all window configs */
353 for (i = 0; i < STA2X11_NR_FUNCS; i++) {
354 struct sta2x11_ahb_regs *regs = map->regs + i;
355
356 pci_write_config_dword(pdev, AHB_BASE(i), regs->base);
357 pci_write_config_dword(pdev, AHB_PEXLBASE(i), regs->pexlbase);
358 pci_write_config_dword(pdev, AHB_PEXHBASE(i), regs->pexhbase);
359 pci_write_config_dword(pdev, AHB_CRW(i), regs->crw);
360 }
361out:
362 pci_set_master(pdev); /* Like at boot, enable master on all devices */
363}
364DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_STMICRO, PCI_ANY_ID, resume_mapping);
365
366#endif /* CONFIG_PM */
367
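The window check in dma_capable() above boils down to a range-containment test: an address range is usable only if it fits entirely inside the device's AMBA window. A minimal user-space sketch of the same logic (the struct name, function name, and 512 MB window size are assumptions for illustration, not the kernel API):

```c
#include <stdbool.h>
#include <stdint.h>

#define AMBA_SIZE (512u << 20) /* assumed AMBA window size */

struct mapping {
    uint64_t amba_base; /* start of the device's AMBA window */
};

/* Accept a range only if it lies entirely inside the window,
 * mirroring the two checks in dma_capable() above. */
static bool range_in_window(const struct mapping *map,
                            uint64_t addr, uint64_t size)
{
    if (!map || addr < map->amba_base)
        return false;
    if (addr + size >= map->amba_base + AMBA_SIZE)
        return false;
    return true;
}
```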

Wed 20 Feb 2013 05:55:05
>>43739616
>Arch means architecture
Made a fool of myself. Calmed down. Realized it's time to sleep. Thanks.

Wed 20 Feb 2013 05:58:04
>>43739030
After long enough nofap you can finish from the mere thought of sex (for example, it's not uncommon to finish in your sleep).

Now tell us all how the penis absolutely has to be "stimulated" directly.

Wed 20 Feb 2013 05:58:58
The wiper is a sweetie, I'll keep bumping so he doesn't get bored

Wed 20 Feb 2013 05:59:06
LXR linux/samples/seccomp/bpf-helper.h

1/*
2 * Example wrapper around BPF macros.
3 *
4 * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
5 * Author: Will Drewry <wad@chromium.org>
6 *
7 * The code may be used by anyone for any purpose,
8 * and can serve as a starting point for developing
9 * applications using prctl(PR_SET_SECCOMP, 2, ...).
10 *
11 * No guarantees are provided with respect to the correctness
12 * or functionality of this code.
13 */
14#ifndef BPF_HELPER_H
15#define BPF_HELPER_H
16
17#include

>>43738676
Ah, you lustful animal.

Wed 20 Feb 2013 06:03:36
I'll sage and bump both. More colored ponies in the thread!

LXR linux/fs/btrfs/dir-item.c

1/*
2 * Copyright (C) 2007 Oracle. All rights reserved.
3 *
4 * This program is free software; you can redistribute it and/or
5 * modify it under the terms of the GNU General Public
6 * License v2 as published by the Free Software Foundation.
7 *
8 * This program is distributed in the hope that it will be useful,
9 * but WITHOUT ANY WARRANTY; without even the implied warranty of
10 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
11 * General Public License for more details.
12 *
13 * You should have received a copy of the GNU General Public
14 * License along with this program; if not, write to the
15 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
16 * Boston, MA 021110-1307, USA.
17 */
18
19#include "ctree.h"
20#include "disk-io.h"
21#include "hash.h"
22#include "transaction.h"
23
24/*
25 * insert a name into a directory, doing overflow properly if there is a hash
26 * collision. data_size indicates how big the item inserted should be. On
27 * success a struct btrfs_dir_item pointer is returned, otherwise it is
28 * an ERR_PTR.
29 *
30 * The name is not copied into the dir item, you have to do that yourself.
31 */
32static struct btrfs_dir_item *insert_with_overflow(struct btrfs_trans_handle
33 *trans,
34 struct btrfs_root *root,
35 struct btrfs_path *path,
36 struct btrfs_key *cpu_key,
37 u32 data_size,
38 const char *name,
39 int name_len)
40{
41 int ret;
42 char *ptr;
43 struct btrfs_item *item;
44 struct extent_buffer *leaf;
45
46 ret = btrfs_insert_empty_item(trans, root, path, cpu_key, data_size);
47 if (ret == -EEXIST) {
48 struct btrfs_dir_item *di;
49 di = btrfs_match_dir_item_name(root, path, name, name_len);
50 if (di)
51 return ERR_PTR(-EEXIST);
52 btrfs_extend_item(trans, root, path, data_size);
53 } else if (ret < 0)
54 return ERR_PTR(ret);
55 WARN_ON(ret > 0);
56 leaf = path->nodes[0];
57 item = btrfs_item_nr(leaf, path->slots[0]);
58 ptr = btrfs_item_ptr(leaf, path->slots[0], char);
59 BUG_ON(data_size > btrfs_item_size(leaf, item));
60 ptr += btrfs_item_size(leaf, item) - data_size;
61 return (struct btrfs_dir_item *)ptr;
62}
63
64/*
65 * xattrs work a lot like directories, this inserts an xattr item
66 * into the tree
67 */
68int btrfs_insert_xattr_item(struct btrfs_trans_handle *trans,
69 struct btrfs_root *root,
70 struct btrfs_path *path, u64 objectid,
71 const char *name, u16 name_len,
72 const void *data, u16 data_len)
73{
74 int ret = 0;
75 struct btrfs_dir_item *dir_item;
76 unsigned long name_ptr, data_ptr;
77 struct btrfs_key key, location;
78 struct btrfs_disk_key disk_key;
79 struct extent_buffer *leaf;
80 u32 data_size;
81
82 BUG_ON(name_len + data_len > BTRFS_MAX_XATTR_SIZE(root));
83
84 key.objectid = objectid;
85	btrfs_set_key_type(&key, BTRFS_XATTR_ITEM_KEY);
86 key.offset = btrfs_name_hash(name, name_len);
87
88 data_size = sizeof(*dir_item) + name_len + data_len;
89	dir_item = insert_with_overflow(trans, root, path, &key, data_size,
90 name, name_len);
91 if (IS_ERR(dir_item))
92 return PTR_ERR(dir_item);
93	memset(&location, 0, sizeof(location));
94
95 leaf = path->nodes[0];
96	btrfs_cpu_key_to_disk(&disk_key, &location);
97	btrfs_set_dir_item_key(leaf, dir_item, &disk_key);
98 btrfs_set_dir_type(leaf, dir_item, BTRFS_FT_XATTR);
99 btrfs_set_dir_name_len(leaf, dir_item, name_len);
100 btrfs_set_dir_transid(leaf, dir_item, trans->transid);
101 btrfs_set_dir_data_len(leaf, dir_item, data_len);
102 name_ptr = (unsigned long)(dir_item + 1);
103 data_ptr = (unsigned long)((char *)name_ptr + name_len);
104
105 write_extent_buffer(leaf, name, name_ptr, name_len);
106 write_extent_buffer(leaf, data, data_ptr, data_len);
107 btrfs_mark_buffer_dirty(path->nodes[0]);
108
109 return ret;
110}
111
112/*
113 * insert a directory item in the tree, doing all the magic for
114 * both indexes. 'dir' indicates which objectid to insert it into,
115 * 'location' is the key to stuff into the directory item, 'type' is the
116 * type of the inode we're pointing to, and 'index' is the sequence number
117 * to use for the second index (if one is created).
118 * Will return 0 or -ENOMEM
119 */
120int btrfs_insert_dir_item(struct btrfs_trans_handle *trans, struct btrfs_root
121 *root, const char *name, int name_len,
122 struct inode *dir, struct btrfs_key *location,
123 u8 type, u64 index)
124{
125 int ret = 0;
126 int ret2 = 0;
127 struct btrfs_path *path;
128 struct btrfs_dir_item *dir_item;
129 struct extent_buffer *leaf;
130 unsigned long name_ptr;
131 struct btrfs_key key;
132 struct btrfs_disk_key disk_key;
133 u32 data_size;
134
135 key.objectid = btrfs_ino(dir);
136	btrfs_set_key_type(&key, BTRFS_DIR_ITEM_KEY);
137 key.offset = btrfs_name_hash(name, name_len);
138
139 path = btrfs_alloc_path();
140 if (!path)
141 return -ENOMEM;
142 path->leave_spinning = 1;
143
144	btrfs_cpu_key_to_disk(&disk_key, location);
145
146 data_size = sizeof(*dir_item) + name_len;
147	dir_item = insert_with_overflow(trans, root, path, &key, data_size,
148 name, name_len);
149 if (IS_ERR(dir_item)) {
150 ret = PTR_ERR(dir_item);
151 if (ret == -EEXIST)
152 goto second_insert;
153 goto out_free;
154 }
155
156 leaf = path->nodes[0];
157	btrfs_set_dir_item_key(leaf, dir_item, &disk_key);
158 btrfs_set_dir_type(leaf, dir_item, type);
159 btrfs_set_dir_data_len(leaf, dir_item, 0);
160 btrfs_set_dir_name_len(leaf, dir_item, name_len);
161 btrfs_set_dir_transid(leaf, dir_item, trans->transid);
162 name_ptr = (unsigned long)(dir_item + 1);
163
164 write_extent_buffer(leaf, name, name_ptr, name_len);
165 btrfs_mark_buffer_dirty(leaf);
166
167second_insert:
168 /* FIXME, use some real flag for selecting the extra index */
169 if (root == root->fs_info->tree_root) {
170 ret = 0;
171 goto out_free;
172 }
173 btrfs_release_path(path);
174
175 ret2 = btrfs_insert_delayed_dir_index(trans, root, name, name_len, dir,
176					&disk_key, type, index);
177out_free:
178 btrfs_free_path(path);
179 if (ret)
180 return ret;
181 if (ret2)
182 return ret2;
183 return 0;
184}
185
186/*
187 * lookup a directory item based on name. 'dir' is the objectid
188 * we're searching in, and 'mod' tells us if you plan on deleting the
189 * item (use mod < 0) or changing the options (use mod > 0)
190 */
191struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans,
192 struct btrfs_root *root,
193 struct btrfs_path *path, u64 dir,
194 const char *name, int name_len,
195 int mod)
196{
197 int ret;
198 struct btrfs_key key;
199 int ins_len = mod < 0 ? -1 : 0;
200 int cow = mod != 0;
201
202 key.objectid = dir;
203	btrfs_set_key_type(&key, BTRFS_DIR_ITEM_KEY);
204
205 key.offset = btrfs_name_hash(name, name_len);
206
207	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
208 if (ret < 0)
209 return ERR_PTR(ret);
210 if (ret > 0)
211 return NULL;
212
213 return btrfs_match_dir_item_name(root, path, name, name_len);
214}
215
216int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
217 const char *name, int name_len)
218{
219 int ret;
220 struct btrfs_key key;
221 struct btrfs_dir_item *di;
222 int data_size;
223 struct extent_buffer *leaf;
224 int slot;
225 struct btrfs_path *path;
226
227
228 path = btrfs_alloc_path();
229 if (!path)
230 return -ENOMEM;
231
232 key.objectid = dir;
233	btrfs_set_key_type(&key, BTRFS_DIR_ITEM_KEY);
234 key.offset = btrfs_name_hash(name, name_len);
235
236	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
237
238 /* return back any errors */
239 if (ret < 0)
240 goto out;
241
242	/* nothing found, we're safe */
243 if (ret > 0) {
244 ret = 0;
245 goto out;
246 }
247
248 /* we found an item, look for our name in the item */
249 di = btrfs_match_dir_item_name(root, path, name, name_len);
250 if (di) {
251 /* our exact name was found */
252 ret = -EEXIST;
253 goto out;
254 }
255
256 /*
257 * see if there is room in the item to insert this
258 * name
259 */
260 data_size = sizeof(*di) + name_len + sizeof(struct btrfs_item);
261 leaf = path->nodes[0];
262 slot = path->slots[0];
263 if (data_size + btrfs_item_size_nr(leaf, slot) +
264 sizeof(struct btrfs_item) > BTRFS_LEAF_DATA_SIZE(root)) {
265 ret = -EOVERFLOW;
266 } else {
267 /* plenty of insertion room */
268 ret = 0;
269 }
270out:
271 btrfs_free_path(path);
272 return ret;
273}
274
275/*
276 * lookup a directory item based on index. 'dir' is the objectid
277 * we're searching in, and 'mod' tells us if you plan on deleting the
278 * item (use mod < 0) or changing the options (use mod > 0)
279 *
280 * The name is used to make sure the index really points to the name you were
281 * looking for.
282 */
283struct btrfs_dir_item *
284btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
285 struct btrfs_root *root,
286 struct btrfs_path *path, u64 dir,
287 u64 objectid, const char *name, int name_len,
288 int mod)
289{
290 int ret;
291 struct btrfs_key key;
292 int ins_len = mod < 0 ? -1 : 0;
293 int cow = mod != 0;
294
295 key.objectid = dir;
296	btrfs_set_key_type(&key, BTRFS_DIR_INDEX_KEY);
297 key.offset = objectid;
298
299	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
300 if (ret < 0)
301 return ERR_PTR(ret);
302 if (ret > 0)
303 return ERR_PTR(-ENOENT);
304 return btrfs_match_dir_item_name(root, path, name, name_len);
305}
306
307struct btrfs_dir_item *
308btrfs_search_dir_index_item(struct btrfs_root *root,
309 struct btrfs_path *path, u64 dirid,
310 const char *name, int name_len)
311{
312 struct extent_buffer *leaf;
313 struct btrfs_dir_item *di;
314 struct btrfs_key key;
315 u32 nritems;
316 int ret;
317
318 key.objectid = dirid;
319 key.type = BTRFS_DIR_INDEX_KEY;
320 key.offset = 0;
321
322	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
323 if (ret < 0)
324 return ERR_PTR(ret);
325
326 leaf = path->nodes[0];
327 nritems = btrfs_header_nritems(leaf);
328
329 while (1) {
330 if (path->slots[0] >= nritems) {
331 ret = btrfs_next_leaf(root, path);
332 if (ret < 0)
333 return ERR_PTR(ret);
334 if (ret > 0)
335 break;
336 leaf = path->nodes[0];
337 nritems = btrfs_header_nritems(leaf);
338 continue;
339 }
340
341		btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
342		if (key.objectid != dirid || key.type != BTRFS_DIR_INDEX_KEY)
343 break;
344
345 di = btrfs_match_dir_item_name(root, path, name, name_len);
346 if (di)
347 return di;
348
349 path->slots[0]++;
350 }
351 return NULL;
352}
353
354struct btrfs_dir_item *btrfs_lookup_xattr(struct btrfs_trans_handle *trans,
355 struct btrfs_root *root,
356 struct btrfs_path *path, u64 dir,
357 const char *name, u16 name_len,
358 int mod)
359{
360 int ret;
361 struct btrfs_key key;
362 int ins_len = mod < 0 ? -1 : 0;
363 int cow = mod != 0;
364
365 key.objectid = dir;
366	btrfs_set_key_type(&key, BTRFS_XATTR_ITEM_KEY);
367	key.offset = btrfs_name_hash(name, name_len);
368	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
369 if (ret < 0)
370 return ERR_PTR(ret);
371 if (ret > 0)
372 return NULL;
373
374 return btrfs_match_dir_item_name(root, path, name, name_len);
375}
376
377/*
378 * helper function to look at the directory item pointed to by 'path'
379 * this walks through all the entries in a dir item and finds one
380 * for a specific name.
381 */
382struct btrfs_dir_it
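insert_with_overflow() and the lookup helpers above return errors through the pointer itself: a failed call yields a negative errno encoded as a pointer, which callers unwrap with IS_ERR()/PTR_ERR(). The real helpers live in the kernel's include/linux/err.h; the following is a simplified user-space stand-in for the convention, not the kernel implementation:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_ERRNO 4095 /* the last page of the address space is reserved */

/* Encode a negative errno as a pointer value. */
static inline void *err_ptr(long error) { return (void *)error; }

/* Recover the errno from an error pointer. */
static inline long ptr_err(const void *ptr) { return (long)ptr; }

/* True if the pointer actually encodes an error code. */
static inline bool is_err(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Callers then follow the pattern seen in btrfs_insert_dir_item() above: `if (IS_ERR(dir_item)) ret = PTR_ERR(dir_item);`, so one return value carries either a valid pointer or an error.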

Wed 20 Feb 2013 06:04:47
LXR linux/fs/btrfs/volumes.h

1/*
2 * Copyright (C) 2007 Oracle. All rights reserved.
3 *
4 * This program is free software; you can redistribute it and/or
5 * modify it under the terms of the GNU General Public
6 * License v2 as published by the Free Software Foundation.
7 *
8 * This program is distributed in the hope that it will be useful,
9 * but WITHOUT ANY WARRANTY; without even the implied warranty of
10 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
11 * General Public License for more details.
12 *
13 * You should have received a copy of the GNU General Public
14 * License along with this program; if not, write to the
15 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
16 * Boston, MA 021110-1307, USA.
17 */
18
19#ifndef __BTRFS_VOLUMES_
20#define __BTRFS_VOLUMES_
21
22#include <linux/bio.h>
23#include <linux/sort.h>
24#include "async-thread.h"
25#include "ioctl.h"
26
27#define BTRFS_STRIPE_LEN (64 * 1024)
28
29struct buffer_head;
30struct btrfs_pending_bios {
31 struct bio *head;
32 struct bio *tail;
33};
34
35struct btrfs_device {
36 struct list_head dev_list;
37 struct list_head dev_alloc_list;
38 struct btrfs_fs_devices *fs_devices;
39 struct btrfs_root *dev_root;
40
41 /* regular prio bios */
42 struct btrfs_pending_bios pending_bios;
43 /* WRITE_SYNC bios */
44 struct btrfs_pending_bios pending_sync_bios;
45
46 int running_pending;
47 u64 generation;
48
49 int writeable;
50 int in_fs_metadata;
51 int missing;
52 int can_discard;
53 int is_tgtdev_for_dev_replace;
54
55 spinlock_t io_lock;
56
57 struct block_device *bdev;
58
59 /* the mode sent to blkdev_get */
60 fmode_t mode;
61
62 struct rcu_string *name;
63
64 /* the internal btrfs device id */
65 u64 devid;
66
67 /* size of the device */
68 u64 total_bytes;
69
70 /* size of the disk */
71 u64 disk_total_bytes;
72
73 /* bytes used */
74 u64 bytes_used;
75
76 /* optimal io alignment for this device */
77 u32 io_align;
78
79 /* optimal io width for this device */
80 u32 io_width;
81
82 /* minimal io size for this device */
83 u32 sector_size;
84
85 /* type and info about this device */
86 u64 type;
87
88 /* physical drive uuid (or lvm uuid) */
89 u8 uuid[BTRFS_UUID_SIZE];
90
91 /* per-device scrub information */
92 struct scrub_ctx *scrub_device;
93
94 struct btrfs_work work;
95 struct rcu_head rcu;
96 struct work_struct rcu_work;
97
98 /* readahead state */
99 spinlock_t reada_lock;
100 atomic_t reada_in_flight;
101 u64 reada_next;
102 struct reada_zone *reada_curr_zone;
103 struct radix_tree_root reada_zones;
104 struct radix_tree_root reada_extents;
105
106 /* for sending down flush barriers */
107 struct bio *flush_bio;
108 struct completion flush_wait;
109 int nobarriers;
110
111 /* disk I/O failure stats. For detailed description refer to
112 * enum btrfs_dev_stat_values in ioctl.h */
113 int dev_stats_valid;
114 int dev_stats_dirty; /* counters need to be written to disk */
115 atomic_t dev_stat_values[BTRFS_DEV_STAT_VALUES_MAX];
116};
117
118struct btrfs_fs_devices {
119 u8 fsid[BTRFS_FSID_SIZE]; /* FS specific uuid */
120
121 /* the device with this id has the most recent copy of the super */
122 u64 latest_devid;
123 u64 latest_trans;
124 u64 num_devices;
125 u64 open_devices;
126 u64 rw_devices;
127 u64 missing_devices;
128 u64 total_rw_bytes;
129 u64 num_can_discard;
130 u64 total_devices;
131 struct block_device *latest_bdev;
132
133 /* all of the devices in the FS, protected by a mutex
134 * so we can safely walk it to write out the supers without
135 * worrying about add/remove by the multi-device code
136 */
137 struct mutex device_list_mutex;
138 struct list_head devices;
139
140 /* devices not currently being allocated */
141 struct list_head alloc_list;
142 struct list_head list;
143
144 struct btrfs_fs_devices *seed;
145 int seeding;
146
147 int opened;
148
149	/* set when we find or add a device that doesn't have the
150 * nonrot flag set
151 */
152 int rotating;
153};
154
155struct btrfs_bio_stripe {
156 struct btrfs_device *dev;
157 u64 physical;
158 u64 length; /* only used for discard mappings */
159};
160
161struct btrfs_bio;
162typedef void (btrfs_bio_end_io_t) (struct btrfs_bio *bio, int err);
163
164struct btrfs_bio {
165 atomic_t stripes_pending;
166 bio_end_io_t *end_io;
167 struct bio *orig_bio;
168 void *private;
169 atomic_t error;
170 int max_errors;
171 int num_stripes;
172 int mirror_num;
173 struct btrfs_bio_stripe stripes[];
174};
175
176struct btrfs_device_info {
177 struct btrfs_device *dev;
178 u64 dev_offset;
179 u64 max_avail;
180 u64 total_avail;
181};
182
183struct btrfs_raid_attr {
184 int sub_stripes; /* sub_stripes info for map */
185 int dev_stripes; /* stripes per dev */
186 int devs_max; /* max devs to use */
187 int devs_min; /* min devs needed */
188 int devs_increment; /* ndevs has to be a multiple of this */
189	int ncopies;		/* how many copies the data has */
190};
191
192struct map_lookup {
193 u64 type;
194 int io_align;
195 int io_width;
196 int stripe_len;
197 int sector_size;
198 int num_stripes;
199 int sub_stripes;
200 struct btrfs_bio_stripe stripes[];
201};
202
203#define map_lookup_size(n) (sizeof(struct map_lookup) + \
204 (sizeof(struct btrfs_bio_stripe) * (n)))
205
206/*
207 * Restriper's general type filter
208 */
209#define BTRFS_BALANCE_DATA (1ULL << 0)
210#define BTRFS_BALANCE_SYSTEM (1ULL << 1)
211#define BTRFS_BALANCE_METADATA (1ULL << 2)
212
213#define BTRFS_BALANCE_TYPE_MASK	(BTRFS_BALANCE_DATA |	    \
214					 BTRFS_BALANCE_SYSTEM |	    \
215					 BTRFS_BALANCE_METADATA)
216
217#define BTRFS_BALANCE_FORCE (1ULL << 3)
218#define BTRFS_BALANCE_RESUME (1ULL << 4)
219
220/*
221 * Balance filters
222 */
223#define BTRFS_BALANCE_ARGS_PROFILES (1ULL << 0)
224#define BTRFS_BALANCE_ARGS_USAGE (1ULL << 1)
225#define BTRFS_BALANCE_ARGS_DEVID (1ULL << 2)
226#define BTRFS_BALANCE_ARGS_DRANGE (1ULL << 3)
227#define BTRFS_BALANCE_ARGS_VRANGE (1ULL << 4)
228
229/*
230 * Profile changing flags. When SOFT is set we won't relocate chunk if
231 * it already has the target profile (even though it may be
232 * half-filled).
233 */
234#define BTRFS_BALANCE_ARGS_CONVERT (1ULL << 8)
235#define BTRFS_BALANCE_ARGS_SOFT (1ULL << 9)
236
237struct btrfs_balance_args;
238struct btrfs_balance_progress;
239struct btrfs_balance_control {
240 struct btrfs_fs_info *fs_info;
241
242 struct btrfs_balance_args data;
243 struct btrfs_balance_args meta;
244 struct btrfs_balance_args sys;
245
246 u64 flags;
247
248 struct btrfs_balance_progress stat;
249};
250
251int btrfs_account_dev_extents_size(struct btrfs_device *device, u64 start,
252 u64 end, u64 *length);
253
254#define btrfs_bio_size(n) (sizeof(struct btrfs_bio) + \
255 (sizeof(struct btrfs_bio_stripe) * (n)))
256
257int btrfs_alloc_dev_extent(struct btrfs_trans_handle *trans,
258 struct btrfs_device *device,
259 u64 chunk_tree, u64 chunk_objectid,
260 u64 chunk_offset, u64 start, u64 num_bytes);
261int btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
262 u64 logical, u64 *length,
263 struct btrfs_bio **bbio_ret, int mirror_num);
264int btrfs_rmap_block(struct btrfs_mapping_tree *map_tree,
265 u64 chunk_start, u64 physical, u64 devid,
266 u64 **logical, int *naddrs, int *stripe_len);
267int btrfs_read_sys_array(struct btrfs_root *root);
268int btrfs_read_chunk_tree(struct btrfs_root *root);
269int btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
270 struct btrfs_root *extent_root, u64 type);
271void btrfs_mapping_init(struct btrfs_mapping_tree *tree);
272void btrfs_mapping_tree_free(struct btrfs_mapping_tree *tree);
273int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
274 int mirror_num, int async_submit);
275int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
276 fmode_t flags, void *holder);
277int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
278 struct btrfs_fs_devices **fs_devices_ret);
279int btrfs_close_devices(struct btrfs_fs_devices *fs_devices);
280void btrfs_close_extra_devices(struct btrfs_fs_info *fs_info,
281 struct btrfs_fs_devices *fs_devices, int step);
282int btrfs_find_device_missing_or_by_path(struct btrfs_root *root,
283 char *device_path,
284 struct btrfs_device **device);
285int btrfs_find_device_by_path(struct btrfs_root *root, char *device_path,
286 struct btrfs_device **device);
287int btrfs_add_device(struct btrfs_trans_handle *trans,
288 struct btrfs_root *root,
289 struct btrfs_device *device);
290int btrfs_rm_device(struct btrfs_root *root, char *device_path);
291void btrfs_cleanup_fs_uuids(void);
292int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len);
293int btrfs_grow_device(struct btrfs_trans_handle *trans,
294 struct btrfs_device *device, u64 new_size);
295struct btrfs_device *btrfs_find_device(struct btrfs_fs_info *fs_info, u64 devid,
296 u8 *uuid, u8 *fsid);
297int btrfs_shrink_device(struct btrfs_device *device, u64 new_size);
298int btrfs_init_new_device(struct btrfs_root *root, char *path);
299int btrfs_init_dev_replace_tgtdev(struct btrfs_root *root, char *device_path,
300 struct btrfs_device **device_out);
301int btrfs_balance(struct btrfs_balance_control *bctl,
302 struct btrfs_ioctl_balance_args *bargs);
303int btrfs_resume_balance_async(struct btrfs_fs_info *fs_info);
304int btrfs_recover_balance(struct btrfs_fs_info *fs_info);
305int btrfs_pause_balance(struct btrfs_fs_info *fs_info);
306int btrfs_cancel_balance(struct btrfs_fs_info *fs_info);
307int btrfs_chunk_readonly(struct btrfs_root *root, u64 chunk_offset);
308int find_free_dev_extent(struct btrfs_device *device, u64 num_bytes,
309 u64 *start, u64 *max_avail);
310void btrfs_dev_stat_print_on_error(struct btrfs_device *device);
311void btrfs_dev_stat_inc_and_print(struct btrfs_device *dev, int index);
312int btrfs_get_dev_stats(struct btrfs_root *root,
313 struct btrfs_ioctl_get_dev_stats *stats);
314int btrfs_init_dev_stats(struct btrfs_fs_info *fs_info);
315int btrfs_run_dev_stats(struct btrfs_trans_handle *trans,
316 struct btrfs_fs_info *fs_info);
317void btrfs_rm_dev_replace_srcdev(struct btrfs_fs_info *fs_info,
318 struct btrfs_device *srcdev);
319void btrfs_destroy_dev_replace_tgtdev(struct btrfs_fs_info *fs_info,
320 struct btrfs_device *tgtdev);
321void btrfs_init_dev_replace_tgtdev_for_resume(struct btrfs_fs_info *fs_info,
322 struct btrfs_device *tgtdev);
323int btrfs_scratch_superblock(struct btrfs_device *device);
324
325static inline void btrfs_dev_stat_inc(struct btrfs_device *dev,
326 int index)
327{
328 atomic_inc(dev->dev_stat_values + index);
329 dev->dev_stats_dirty = 1;
330}
331
332static inline int btrfs_dev_stat_read(struct btrfs_device *dev,
333 int index)
334{
335 return atomic_read(dev->dev_stat_values + index);
336}
337
338static inline int btrfs_dev_stat_read_and_reset(struct btrfs_device *dev,
339 int index)
340{
341 int ret;
342
343 ret = atomic_xchg(dev->dev_stat_values + index, 0);
344 dev->dev_stats_dirty = 1;
345 return ret;
346}
347
348static inline void btrfs_dev_stat_set(struct btrfs_device *dev,
349 int index, unsigned long val)
350{
351 atomic_set(dev->dev_stat_values + index, val);
352 dev->dev_stats_dirty = 1;
353}
354
355static inline void btrfs_dev_stat_reset(struct btrfs_device *dev,
356 int index)
357{
358 btrfs_dev_stat_set(dev, index, 0);
359}
360#endif
361
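The btrfs_dev_stat_* inline helpers at the bottom of the header pair an atomic counter array with a dirty flag, so a later writer can tell whether the values need to be persisted. A rough C11 user-space analogue of the pattern (names are illustrative, not the btrfs API; the kernel uses its own atomic_t, not stdatomic):

```c
#include <stdatomic.h>

enum { STAT_WRITE_ERRS, STAT_READ_ERRS, STAT_MAX };

struct device_stats {
    atomic_int values[STAT_MAX];
    int dirty; /* set whenever a counter changes, cleared by the writer */
};

static void stat_inc(struct device_stats *s, int index)
{
    atomic_fetch_add(&s->values[index], 1);
    s->dirty = 1;
}

/* Read and zero a counter in one atomic step, as
 * btrfs_dev_stat_read_and_reset() does with atomic_xchg(). */
static int stat_read_and_reset(struct device_stats *s, int index)
{
    s->dirty = 1;
    return atomic_exchange(&s->values[index], 0);
}
```

The exchange matters: a plain read followed by a store could lose increments that land between the two operations.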

Wed 20 Feb 2013 06:06:00
LXR linux/fs/btrfs/zlib.c

1/*
2 * Copyright (C) 2008 Oracle. All rights reserved.
3 *
4 * This program is free software; you can redistribute it and/or
5 * modify it under the terms of the GNU General Public
6 * License v2 as published by the Free Software Foundation.
7 *
8 * This program is distributed in the hope that it will be useful,
9 * but WITHOUT ANY WARRANTY; without even the implied warranty of
10 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
11 * General Public License for more details.
12 *
13 * You should have received a copy of the GNU General Public
14 * License along with this program; if not, write to the
15 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
16 * Boston, MA 021110-1307, USA.
17 *
18 * Based on jffs2 zlib code:
19 * Copyright (c) 2001-2007 Red Hat, Inc.
20 * Created by David Woodhouse <dwmw2@infradead.org>
21 */
22
23#include <linux/kernel.h>
24#include <linux/slab.h>
25#include <linux/zlib.h>
26#include <linux/zutil.h>
27#include <linux/vmalloc.h>
28#include <linux/init.h>
29#include <linux/err.h>
30#include <linux/sched.h>
31#include <linux/pagemap.h>
32#include <linux/bio.h>
33#include "compression.h"
34
35struct workspace {
36 z_stream inf_strm;
37 z_stream def_strm;
38 char *buf;
39 struct list_head list;
40};
41
42static void zlib_free_workspace(struct list_head *ws)
43{
44 struct workspace *workspace = list_entry(ws, struct workspace, list);
45
46 vfree(workspace->def_strm.workspace);
47 vfree(workspace->inf_strm.workspace);
48 kfree(workspace->buf);
49 kfree(workspace);
50}
51
52static struct list_head *zlib_alloc_workspace(void)
53{
54 struct workspace *workspace;
55
56 workspace = kzalloc(sizeof(*workspace), GFP_NOFS);
57 if (!workspace)
58 return ERR_PTR(-ENOMEM);
59
60 workspace->def_strm.workspace = vmalloc(zlib_deflate_workspacesize(
61 MAX_WBITS, MAX_MEM_LEVEL));
62 workspace->inf_strm.workspace = vmalloc(zlib_inflate_workspacesize());
63 workspace->buf = kmalloc(PAGE_CACHE_SIZE, GFP_NOFS);
64	if (!workspace->def_strm.workspace ||
65	    !workspace->inf_strm.workspace || !workspace->buf)
66 goto fail;
67
68	INIT_LIST_HEAD(&workspace->list);
69
70	return &workspace->list;
71fail:
72	zlib_free_workspace(&workspace->list);
73 return ERR_PTR(-ENOMEM);
74}
75
76static int zlib_compress_pages(struct list_head *ws,
77 struct address_space *mapping,
78 u64 start, unsigned long len,
79 struct page **pages,
80 unsigned long nr_dest_pages,
81 unsigned long *out_pages,
82 unsigned long *total_in,
83 unsigned long *total_out,
84 unsigned long max_out)
85{
86 struct workspace *workspace = list_entry(ws, struct workspace, list);
87 int ret;
88 char *data_in;
89 char *cpage_out;
90 int nr_pages = 0;
91 struct page *in_page = NULL;
92 struct page *out_page = NULL;
93 unsigned long bytes_left;
94
95 *out_pages = 0;
96 *total_out = 0;
97 *total_in = 0;
98
99	if (Z_OK != zlib_deflateInit(&workspace->def_strm, 3)) {
100 printk(KERN_WARNING "btrfs: deflateInit failed\n");
101 ret = -1;
102 goto out;
103 }
104
105 workspace->def_strm.total_in = 0;
106 workspace->def_strm.total_out = 0;
107
108 in_page = find_get_page(mapping, start >> PAGE_CACHE_SHIFT);
109 data_in = kmap(in_page);
110
111	out_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
112 if (out_page == NULL) {
113 ret = -1;
114 goto out;
115 }
116 cpage_out = kmap(out_page);
117 pages[0] = out_page;
118 nr_pages = 1;
119
120 workspace->def_strm.next_in = data_in;
121 workspace->def_strm.next_out = cpage_out;
122 workspace->def_strm.avail_out = PAGE_CACHE_SIZE;
123 workspace->def_strm.avail_in = min(len, PAGE_CACHE_SIZE);
124
125 while (workspace->def_strm.total_in < len) {
126		ret = zlib_deflate(&workspace->def_strm, Z_SYNC_FLUSH);
127 if (ret != Z_OK) {
128 printk(KERN_DEBUG "btrfs: deflate in loop returned %d\n",
129 ret);
130			zlib_deflateEnd(&workspace->def_strm);
131 ret = -1;
132 goto out;
133 }
134
135		/* we're making it bigger, give up */
136		if (workspace->def_strm.total_in > 8192 &&
137 workspace->def_strm.total_in <
138 workspace->def_strm.total_out) {
139 ret = -1;
140 goto out;
141 }
142 /* we need another page for writing out. Test this
143 * before the total_in so we will pull in a new page for
144 * the stream end if required
145 */
146 if (workspace->def_strm.avail_out == 0) {
147 kunmap(out_page);
148 if (nr_pages == nr_dest_pages) {
149 out_page = NULL;
150 ret = -1;
151 goto out;
152 }
153 out_page = alloc_page(GFP_NOFS __GFP_HIGHMEM);
154 if (out_page == NULL) {
155 ret = -1;
156 goto out;
157 }
158 cpage_out = kmap(out_page);
159 pages[nr_pages] = out_page;
160 nr_pages++;
161 workspace->def_strm.avail_out = PAGE_CACHE_SIZE;
162 workspace->def_strm.next_out = cpage_out;
163 }
164 /* we&amp;#39;re all done */
165 if (workspace->def_strm.total_in >= len)
166 break;
167
168 /* we&amp;#39;ve read in a full page, get a new one */
169 if (workspace->def_strm.avail_in == 0) {
170 if (workspace->def_strm.total_out > max_out)
171 break;
172
173 bytes_left = len - workspace->def_strm.total_in;
174 kunmap(in_page);
175 page_cache_release(in_page);
176
177 start += PAGE_CACHE_SIZE;
178 in_page = find_get_page(mapping,
179 start >> PAGE_CACHE_SHIFT);
180 data_in = kmap(in_page);
181 workspace->def_strm.avail_in = min(bytes_left,
182 PAGE_CACHE_SIZE);
183 workspace->def_strm.next_in = data_in;
184 }
185 }
186 workspace->def_strm.avail_in = 0;
187 ret = zlib_deflate(&amp;workspace->def_strm, Z_FINISH);
188 zlib_deflateEnd(&amp;workspace->def_strm);
189
190 if (ret != Z_STREAM_END) {
191 ret = -1;
192 goto out;
193 }
194
195 if (workspace->def_strm.total_out >= workspace->def_strm.total_in) {
196 ret = -1;
197 goto out;
198 }
199
200 ret = 0;
201 *total_out = workspace->def_strm.total_out;
202 *total_in = workspace->def_strm.total_in;
203out:
204 *out_pages = nr_pages;
205 if (out_page)
206 kunmap(out_page);
207
208 if (in_page) {
209 kunmap(in_page);
210 page_cache_release(in_page);
211 }
212 return ret;
213}
214
static int zlib_decompress_biovec(struct list_head *ws, struct page **pages_in,
                                  u64 disk_start,
                                  struct bio_vec *bvec,
                                  int vcnt,
                                  size_t srclen)
{
        struct workspace *workspace = list_entry(ws, struct workspace, list);
        int ret = 0, ret2;
        int wbits = MAX_WBITS;
        char *data_in;
        size_t total_out = 0;
        unsigned long page_in_index = 0;
        unsigned long page_out_index = 0;
        unsigned long total_pages_in = (srclen + PAGE_CACHE_SIZE - 1) /
                                       PAGE_CACHE_SIZE;
        unsigned long buf_start;
        unsigned long pg_offset;

        data_in = kmap(pages_in[page_in_index]);
        workspace->inf_strm.next_in = data_in;
        workspace->inf_strm.avail_in = min_t(size_t, srclen, PAGE_CACHE_SIZE);
        workspace->inf_strm.total_in = 0;

        workspace->inf_strm.total_out = 0;
        workspace->inf_strm.next_out = workspace->buf;
        workspace->inf_strm.avail_out = PAGE_CACHE_SIZE;
        pg_offset = 0;

        /* If it's deflate, and it's got no preset dictionary, then
           we can tell zlib to skip the adler32 check. */
        if (srclen > 2 && !(data_in[1] & PRESET_DICT) &&
            ((data_in[0] & 0x0f) == Z_DEFLATED) &&
            !(((data_in[0] << 8) + data_in[1]) % 31)) {

                wbits = -((data_in[0] >> 4) + 8);
                workspace->inf_strm.next_in += 2;
                workspace->inf_strm.avail_in -= 2;
        }

        if (Z_OK != zlib_inflateInit2(&workspace->inf_strm, wbits)) {
                printk(KERN_WARNING "btrfs: inflateInit failed\n");
                return -1;
        }
        while (workspace->inf_strm.total_in < srclen) {
                ret = zlib_inflate(&workspace->inf_strm, Z_NO_FLUSH);
                if (ret != Z_OK && ret != Z_STREAM_END)
                        break;

                buf_start = total_out;
                total_out = workspace->inf_strm.total_out;

                /* we didn't make progress in this inflate call, we're done */
                if (buf_start == total_out)
                        break;

                ret2 = btrfs_decompress_buf2page(workspace->buf, buf_start,
                                                 total_out, disk_start,
                                                 bvec, vcnt,
                                                 &page_out_index, &pg_offset);
                if (ret2 == 0) {
                        ret = 0;
                        goto done;
                }

                workspace->inf_strm.next_out = workspace->buf;
                workspace->inf_strm.avail_out = PAGE_CACHE_SIZE;

                if (workspace->inf_strm.avail_in == 0) {
                        unsigned long tmp;
                        kunmap(pages_in[page_in_index]);
                        page_in_index++;
                        if (page_in_index >= total_pages_in) {
                                data_in = NULL;
                                break;
                        }
                        data_in = kmap(pages_in[page_in_index]);
                        workspace->inf_strm.next_in = data_in;
                        tmp = srclen - workspace->inf_strm.total_in;
                        workspace->inf_strm.avail_in = min(tmp,
                                                           PAGE_CACHE_SIZE);
                }
        }
        if (ret != Z_STREAM_END)
                ret = -1;
        else
                ret = 0;
done:
        zlib_inflateEnd(&workspace->inf_strm);
        if (data_in)
                kunmap(pages_in[page_in_index]);
        return ret;
}
307
static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
                           struct page *dest_page,
                           unsigned long start_byte,
                           size_t srclen, size_t destlen)
{
        struct workspace *workspace = list_entry(ws, struct workspace, list);
        int ret = 0;
        int wbits = MAX_WBITS;
        unsigned long bytes_left = destlen;
        unsigned long total_out = 0;
        char *kaddr;

        workspace->inf_strm.next_in = data_in;
        workspace->inf_strm.avail_in = srclen;
        workspace->inf_strm.total_in = 0;

        workspace->inf_strm.next_out = workspace->buf;
        workspace->inf_strm.avail_out = PAGE_CACHE_SIZE;
        workspace->inf_strm.total_out = 0;
        /* If it's deflate, and it's got no preset dictionary, then
           we can tell zlib to skip the adler32 check. */
        if (srclen > 2 && !(data_in[1] & PRESET_DICT) &&
            ((data_in[0] & 0x0f) == Z_DEFLATED) &&
            !(((data_in[0] << 8) + data_in[1]) % 31)) {

                wbits = -((data_in[0] >> 4) + 8);
                workspace->inf_strm.next_in += 2;
                workspace->inf_strm.avail_in -= 2;
        }

        if (Z_OK != zlib_inflateInit2(&workspace->inf_strm, wbits)) {
                printk(KERN_WARNING "btrfs: inflateInit failed\n");
                return -1;
        }

        while (bytes_left > 0) {
                unsigned long buf_start;
                unsigned long buf_offset;
                unsigned long bytes;
                unsigned long pg_offset = 0;

                ret = zlib_inflate(&workspace->inf_strm, Z_NO_FLUSH);
                if (ret != Z_OK && ret != Z_STREAM_END)
                        break;

                buf_start = total_out;
                total_out = workspace->inf_strm.total_out;

                if (total_out == buf_start) {
                        ret = -1;
                        break;
                }

                if (total_out <= start_byte)
                        goto next;

                if (total_out > start_byte && buf_start < start_byte)
                        buf_offset = start_byte -

Wed 20 Feb 2013 06:12:10
>>43739849
Post the original.

Wed 20 Feb 2013 06:14:55
LXR linux/fs/jffs2/acl.c

/*
 * JFFS2 -- Journalling Flash File System, Version 2.
 *
 * Copyright © 2006 NEC Corporation
 *
 * Created by KaiGai Kohei <kaigai@ak.jp.nec.com>
 *
 * For licensing information, see the file 'LICENCE' in this directory.
 *
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/time.h>
#include <linux/crc32.h>
#include <linux/jffs2.h>
#include <linux/xattr.h>
#include <linux/posix_acl_xattr.h>
#include <linux/mtd/mtd.h>
#include "nodelist.h"

static size_t jffs2_acl_size(int count)
{
        if (count <= 4) {
                return sizeof(struct jffs2_acl_header)
                       + count * sizeof(struct jffs2_acl_entry_short);
        } else {
                return sizeof(struct jffs2_acl_header)
                       + 4 * sizeof(struct jffs2_acl_entry_short)
                       + (count - 4) * sizeof(struct jffs2_acl_entry);
        }
}

static int jffs2_acl_count(size_t size)
{
        size_t s;

        size -= sizeof(struct jffs2_acl_header);
        if (size < 4 * sizeof(struct jffs2_acl_entry_short)) {
                if (size % sizeof(struct jffs2_acl_entry_short))
                        return -1;
                return size / sizeof(struct jffs2_acl_entry_short);
        } else {
                s = size - 4 * sizeof(struct jffs2_acl_entry_short);
                if (s % sizeof(struct jffs2_acl_entry))
                        return -1;
                return s / sizeof(struct jffs2_acl_entry) + 4;
        }
}
54
static struct posix_acl *jffs2_acl_from_medium(void *value, size_t size)
{
        void *end = value + size;
        struct jffs2_acl_header *header = value;
        struct jffs2_acl_entry *entry;
        struct posix_acl *acl;
        uint32_t ver;
        int i, count;

        if (!value)
                return NULL;
        if (size < sizeof(struct jffs2_acl_header))
                return ERR_PTR(-EINVAL);
        ver = je32_to_cpu(header->a_version);
        if (ver != JFFS2_ACL_VERSION) {
                JFFS2_WARNING("Invalid ACL version. (=%u)\n", ver);
                return ERR_PTR(-EINVAL);
        }

        value += sizeof(struct jffs2_acl_header);
        count = jffs2_acl_count(size);
        if (count < 0)
                return ERR_PTR(-EINVAL);
        if (count == 0)
                return NULL;

        acl = posix_acl_alloc(count, GFP_KERNEL);
        if (!acl)
                return ERR_PTR(-ENOMEM);

        for (i = 0; i < count; i++) {
                entry = value;
                if (value + sizeof(struct jffs2_acl_entry_short) > end)
                        goto fail;
                acl->a_entries[i].e_tag = je16_to_cpu(entry->e_tag);
                acl->a_entries[i].e_perm = je16_to_cpu(entry->e_perm);
                switch (acl->a_entries[i].e_tag) {
                case ACL_USER_OBJ:
                case ACL_GROUP_OBJ:
                case ACL_MASK:
                case ACL_OTHER:
                        value += sizeof(struct jffs2_acl_entry_short);
                        break;

                case ACL_USER:
                        value += sizeof(struct jffs2_acl_entry);
                        if (value > end)
                                goto fail;
                        acl->a_entries[i].e_uid =
                                make_kuid(&init_user_ns,
                                          je32_to_cpu(entry->e_id));
                        break;
                case ACL_GROUP:
                        value += sizeof(struct jffs2_acl_entry);
                        if (value > end)
                                goto fail;
                        acl->a_entries[i].e_gid =
                                make_kgid(&init_user_ns,
                                          je32_to_cpu(entry->e_id));
                        break;

                default:
                        goto fail;
                }
        }
        if (value != end)
                goto fail;
        return acl;
 fail:
        posix_acl_release(acl);
        return ERR_PTR(-EINVAL);
}
127
static void *jffs2_acl_to_medium(const struct posix_acl *acl, size_t *size)
{
        struct jffs2_acl_header *header;
        struct jffs2_acl_entry *entry;
        void *e;
        size_t i;

        *size = jffs2_acl_size(acl->a_count);
        header = kmalloc(sizeof(*header) + acl->a_count * sizeof(*entry), GFP_KERNEL);
        if (!header)
                return ERR_PTR(-ENOMEM);
        header->a_version = cpu_to_je32(JFFS2_ACL_VERSION);
        e = header + 1;
        for (i = 0; i < acl->a_count; i++) {
                const struct posix_acl_entry *acl_e = &acl->a_entries[i];
                entry = e;
                entry->e_tag = cpu_to_je16(acl_e->e_tag);
                entry->e_perm = cpu_to_je16(acl_e->e_perm);
                switch (acl_e->e_tag) {
                case ACL_USER:
                        entry->e_id = cpu_to_je32(
                                from_kuid(&init_user_ns, acl_e->e_uid));
                        e += sizeof(struct jffs2_acl_entry);
                        break;
                case ACL_GROUP:
                        entry->e_id = cpu_to_je32(
                                from_kgid(&init_user_ns, acl_e->e_gid));
                        e += sizeof(struct jffs2_acl_entry);
                        break;

                case ACL_USER_OBJ:
                case ACL_GROUP_OBJ:
                case ACL_MASK:
                case ACL_OTHER:
                        e += sizeof(struct jffs2_acl_entry_short);
                        break;

                default:
                        goto fail;
                }
        }
        return header;
 fail:
        kfree(header);
        return ERR_PTR(-EINVAL);
}
174
struct posix_acl *jffs2_get_acl(struct inode *inode, int type)
{
        struct posix_acl *acl;
        char *value = NULL;
        int rc, xprefix;

        acl = get_cached_acl(inode, type);
        if (acl != ACL_NOT_CACHED)
                return acl;

        switch (type) {
        case ACL_TYPE_ACCESS:
                xprefix = JFFS2_XPREFIX_ACL_ACCESS;
                break;
        case ACL_TYPE_DEFAULT:
                xprefix = JFFS2_XPREFIX_ACL_DEFAULT;
                break;
        default:
                BUG();
        }
        rc = do_jffs2_getxattr(inode, xprefix, "", NULL, 0);
        if (rc > 0) {
                value = kmalloc(rc, GFP_KERNEL);
                if (!value)
                        return ERR_PTR(-ENOMEM);
                rc = do_jffs2_getxattr(inode, xprefix, "", value, rc);
        }
        if (rc > 0) {
                acl = jffs2_acl_from_medium(value, rc);
        } else if (rc == -ENODATA || rc == -ENOSYS) {
                acl = NULL;
        } else {
                acl = ERR_PTR(rc);
        }
        if (value)
                kfree(value);
        if (!IS_ERR(acl))
                set_cached_acl(inode, type, acl);
        return acl;
}
215
static int __jffs2_set_acl(struct inode *inode, int xprefix, struct posix_acl *acl)
{
        char *value = NULL;
        size_t size = 0;
        int rc;

        if (acl) {
                value = jffs2_acl_to_medium(acl, &size);
                if (IS_ERR(value))
                        return PTR_ERR(value);
        }
        rc = do_jffs2_setxattr(inode, xprefix, "", value, size, 0);
        if (!value && rc == -ENODATA)
                rc = 0;
        kfree(value);

        return rc;
}

static int jffs2_set_acl(struct inode *inode, int type, struct posix_acl *acl)
{
        int rc, xprefix;

        if (S_ISLNK(inode->i_mode))
                return -EOPNOTSUPP;

        switch (type) {
        case ACL_TYPE_ACCESS:
                xprefix = JFFS2_XPREFIX_ACL_ACCESS;
                if (acl) {
                        umode_t mode = inode->i_mode;
                        rc = posix_acl_equiv_mode(acl, &mode);
                        if (rc < 0)
                                return rc;
                        if (inode->i_mode != mode) {
                                struct iattr attr;

                                attr.ia_valid = ATTR_MODE | ATTR_CTIME;
                                attr.ia_mode = mode;
                                attr.ia_ctime = CURRENT_TIME_SEC;
                                rc = jffs2_do_setattr(inode, &attr);
                                if (rc < 0)
                                        return rc;
                        }
                        if (rc == 0)
                                acl = NULL;
                }
                break;
        case ACL_TYPE_DEFAULT:
                xprefix = JFFS2_XPREFIX_ACL_DEFAULT;
                if (!S_ISDIR(inode->i_mode))
                        return acl ? -EACCES : 0;
                break;
        default:
                return -EINVAL;
        }
        rc = __jffs2_set_acl(inode, xprefix, acl);
        if (!rc)
                set_cached_acl(inode, type, acl);
        return rc;
}
277
int jffs2_init_acl_pre(struct inode *dir_i, struct inode *inode, umode_t *i_mode)
{
        struct posix_acl *acl;
        int rc;

        cache_no_acl(inode);

        if (S_ISLNK(*i_mode))
                return 0;       /* Symlink always has no-ACL */

        acl = jffs2_get_acl(dir_i, ACL_TYPE_DEFAULT);
        if (IS_ERR(acl))
                return PTR_ERR(acl);

        if (!acl) {
                *i_mode &= ~current_umask();
        } else {
                if (S_ISDIR(*i_mode))
                        set_cached_acl(inode, ACL_TYPE_DEFAULT, acl);

                rc = posix_acl_create(&acl, GFP_KERNEL, i_mode);
                if (rc < 0)
                        return rc;
                if (rc > 0)
                        set_cached_acl(inode, ACL_TYPE_ACCESS, acl);

                posix_acl_release(acl);
        }
        return 0;
}

int jffs2_init_acl_post(struct inode *inode)
{
        int rc;

        if (inode->i_default_acl) {
                rc = __jffs2_set_acl(inode, JFFS2_XPREFIX_ACL_DEFAULT, inode->i_default_acl);
                if (rc)
                        return rc;
        }

        if (inode->i_acl) {
                rc = __jffs2_set_acl(inode, JFFS2_XPREFIX_ACL_ACCESS, inode->i_acl);
                if (rc)
                        return rc;
        }

        return 0;
}

int jffs2_acl_chmod(struct inode *inode)
{
        struct posix_acl *acl;
        int rc;

        if (S_ISLNK(inode->i_mode))
                return -EOPNOTSUPP;
        acl = jffs2_get_acl(inode, ACL_TYPE_ACCESS);
        if (IS_ERR(acl) || !acl)
                return PTR_ERR(acl);
        rc = posix_acl_chmod(&acl, GFP_KERNEL, inode->i_mode);
        if (rc)
                return rc;
        rc = jffs2_set_acl(inode, ACL_TYPE_ACCESS, acl);
        posix_acl_release(acl);
        return rc;
}
345
static size_t jffs2_acl_access_listxattr(struct dentry *dentry, char *list,
                size_t list_size, const char *name, size_t name_len, int type)
{
        const int retlen = sizeof(POSIX_ACL_XATTR_ACCESS);

        if (list && retlen <= list_size)
                strcpy(list, POSIX_ACL_XATTR_ACCESS);
        return retlen;
}

static size_t jffs2_acl_default_listxattr(struct dentry *dentry, char *list,
                size_t list_size, const char *name, size_t name_len, int type)
{
        const int retlen = sizeof(POSIX_ACL_XATTR_DEFAULT);

        if (list && retlen <= list_size)
                strcpy(list, POSIX_ACL_XATTR_DEFAULT);
        return retlen;
}

static int jffs2_acl_getxattr(struct dentry *dentry, const char *name,
                void *buffer, size_t size, int type)
{
        struct posix_acl *acl;
        int rc;

        if (name[0] != '\0')
                return -EINVAL;

        acl = jffs2_get_acl(dentry->d_inode, type);
        if (IS_ERR(acl))
                return PTR_ERR(acl);
        if (!acl)
                return -ENODATA;
        rc = posix_acl_to_xattr(&init_user_ns, acl, buffer, size);
        posix_acl_release(acl);

        return rc;
}

static int jffs2_acl_setxattr(struct dentry *dentry, const char *name,
                const void *value, size_t size, int flags, int type)
{
        struct posix_acl *acl;
        int rc;

        if (name[0] != '\0')
                return -EINVAL;
        if (!inode_owner_or_capable(dentry->d_inode))
                return -EPERM;

        if (value) {
                acl = posix_acl_from_xattr(&init_user_ns, value, size);
                if (IS_ERR(acl))
                        return PTR_ERR(acl);
                if (acl) {
                        rc = posix_acl_valid(acl);
                        if (rc)
                                goto out;
                }
        } else {
                acl = NULL;
        }
        rc = jffs2_set_acl(dentry->d_inode, type, acl);
 out:
        posix_acl_release(acl);
        return rc;
}

const struct xattr_handler jffs2_acl_access_xattr

Wed 20 Feb 2013 06:16:27
LXR linux/sound/firewire/scs1x.c

/*
 * Stanton Control System 1 MIDI driver
 *
 * Copyright © Clemens Ladisch <clemens@ladisch.de>
 * Licensed under the terms of the GNU General Public License, version 2.
 */

#include <linux/device.h>
#include <linux/firewire.h>
#include <linux/firewire-constants.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/wait.h>
#include <sound/core.h>
#include <sound/initval.h>
#include <sound/rawmidi.h>
#include "lib.h"

#define OUI_STANTON     0x001260
#define MODEL_SCS_1M    0x001000
#define MODEL_SCS_1D    0x002000

#define HSS1394_ADDRESS         0xc007dedadadaULL
#define HSS1394_MAX_PACKET_SIZE 64

#define HSS1394_TAG_USER_DATA           0x00
#define HSS1394_TAG_CHANGE_ADDRESS      0xf1

struct scs {
        struct snd_card *card;
        struct fw_unit *unit;
        struct fw_address_handler hss_handler;
        struct fw_transaction transaction;
        bool transaction_running;
        bool output_idle;
        u8 output_status;
        u8 output_bytes;
        bool output_escaped;
        bool output_escape_high_nibble;
        u8 input_escape_count;
        struct snd_rawmidi_substream *output;
        struct snd_rawmidi_substream *input;
        struct tasklet_struct tasklet;
        wait_queue_head_t idle_wait;
        u8 *buffer;
};

static const u8 sysex_escape_prefix[] = {
        0xf0,                   /* SysEx begin */
        0x00, 0x01, 0x60,       /* Stanton DJ */
        0x48, 0x53, 0x53,       /* "HSS" */
};

static int scs_output_open(struct snd_rawmidi_substream *stream)
{
        struct scs *scs = stream->rmidi->private_data;

        scs->output_status = 0;
        scs->output_bytes = 1;
        scs->output_escaped = false;

        return 0;
}

static int scs_output_close(struct snd_rawmidi_substream *stream)
{
        return 0;
}

static void scs_output_trigger(struct snd_rawmidi_substream *stream, int up)
{
        struct scs *scs = stream->rmidi->private_data;

        ACCESS_ONCE(scs->output) = up ? stream : NULL;
        if (up) {
                scs->output_idle = false;
                tasklet_schedule(&scs->tasklet);
        }
}

static void scs_write_callback(struct fw_card *card, int rcode,
                               void *data, size_t length, void *callback_data)
{
        struct scs *scs = callback_data;

        if (rcode == RCODE_GENERATION) {
                /* TODO: retry this packet */
        }

        scs->transaction_running = false;
        tasklet_schedule(&scs->tasklet);
}

static bool is_valid_running_status(u8 status)
{
        return status >= 0x80 && status <= 0xef;
}

static bool is_one_byte_cmd(u8 status)
{
        return status == 0xf6 ||
               status >= 0xf8;
}

static bool is_two_bytes_cmd(u8 status)
{
        return (status >= 0xc0 && status <= 0xdf) ||
               status == 0xf1 ||
               status == 0xf3;
}

static bool is_three_bytes_cmd(u8 status)
{
        return (status >= 0x80 && status <= 0xbf) ||
               (status >= 0xe0 && status <= 0xef) ||
               status == 0xf2;
}

static bool is_invalid_cmd(u8 status)
{
        return status == 0xf4 ||
               status == 0xf5 ||
               status == 0xf9 ||
               status == 0xfd;
}
129
static void scs_output_tasklet(unsigned long data)
{
        struct scs *scs = (void *)data;
        struct snd_rawmidi_substream *stream;
        unsigned int i;
        u8 byte;
        struct fw_device *dev;
        int generation;

        if (scs->transaction_running)
                return;

        stream = ACCESS_ONCE(scs->output);
        if (!stream) {
                scs->output_idle = true;
                wake_up(&scs->idle_wait);
                return;
        }

        i = scs->output_bytes;
        for (;;) {
                if (snd_rawmidi_transmit(stream, &byte, 1) != 1) {
                        scs->output_bytes = i;
                        scs->output_idle = true;
                        wake_up(&scs->idle_wait);
                        return;
                }
                /*
                 * Convert from real MIDI to what I think the device expects (no
                 * running status, one command per packet, unescaped SysExs).
                 */
                if (scs->output_escaped && byte < 0x80) {
                        if (scs->output_escape_high_nibble) {
                                if (i < HSS1394_MAX_PACKET_SIZE) {
                                        scs->buffer[i] = byte << 4;
                                        scs->output_escape_high_nibble = false;
                                }
                        } else {
                                scs->buffer[i++] |= byte & 0x0f;
                                scs->output_escape_high_nibble = true;
                        }
                } else if (byte < 0x80) {
                        if (i == 1) {
                                if (!is_valid_running_status(scs->output_status))
                                        continue;
                                scs->buffer[0] = HSS1394_TAG_USER_DATA;
                                scs->buffer[i++] = scs->output_status;
                        }
                        scs->buffer[i++] = byte;
                        if ((i == 3 && is_two_bytes_cmd(scs->output_status)) ||
                            (i == 4 && is_three_bytes_cmd(scs->output_status)))
                                break;
                        if (i == 1 + ARRAY_SIZE(sysex_escape_prefix) &&
                            !memcmp(scs->buffer + 1, sysex_escape_prefix,
                                    ARRAY_SIZE(sysex_escape_prefix))) {
                                scs->output_escaped = true;
                                scs->output_escape_high_nibble = true;
                                i = 0;
                        }
                        if (i >= HSS1394_MAX_PACKET_SIZE)
                                i = 1;
                } else if (byte == 0xf7) {
                        if (scs->output_escaped) {
                                if (i >= 1 && scs->output_escape_high_nibble &&
                                    scs->buffer[0] != HSS1394_TAG_CHANGE_ADDRESS)
                                        break;
                        } else {
                                if (i > 1 && scs->output_status == 0xf0) {
                                        scs->buffer[i++] = 0xf7;
                                        break;
                                }
                        }
                        i = 1;
                        scs->output_escaped = false;
                } else if (!is_invalid_cmd(byte) &&
                           byte < 0xf8) {
                        i = 1;
                        scs->buffer[0] = HSS1394_TAG_USER_DATA;
                        scs->buffer[i++] = byte;
                        scs->output_status = byte;
                        scs->output_escaped = false;
                        if (is_one_byte_cmd(byte))
                                break;
                }
        }
        scs->output_bytes = 1;
        scs->output_escaped = false;

        scs->transaction_running = true;
        dev = fw_parent_device(scs->unit);
        generation = dev->generation;
        smp_rmb(); /* node_id vs. generation */
        fw_send_request(dev->card, &scs->transaction, TCODE_WRITE_BLOCK_REQUEST,
                        dev->node_id, generation, dev->max_speed,
                        HSS1394_ADDRESS, scs->buffer, i,
                        scs_write_callback, scs);
}
227
static void scs_output_drain(struct snd_rawmidi_substream *stream)
{
        struct scs *scs = stream->rmidi->private_data;

        wait_event(scs->idle_wait, scs->output_idle);
}

static struct snd_rawmidi_ops output_ops = {
        .open = scs_output_open,
        .close = scs_output_close,
        .trigger = scs_output_trigger,
        .drain = scs_output_drain,
};

static int scs_input_open(struct snd_rawmidi_substream *stream)
{
        struct scs *scs = stream->rmidi->private_data;

        scs->input_escape_count = 0;

        return 0;
}

static int scs_input_close(struct snd_rawmidi_substream *stream)
{
        return 0;
}

static void scs_input_trigger(struct snd_rawmidi_substream *stream, int up)
{
        struct scs *scs = stream->rmidi->private_data;

        ACCESS_ONCE(scs->input) = up ? stream : NULL;
}

static void scs_input_escaped_byte(struct snd_rawmidi_substream *stream,
                                   u8 byte)
{
        u8 nibbles[2];

        nibbles[0] = byte >> 4;
        nibbles[1] = byte & 0x0f;
        snd_rawmidi_receive(stream, nibbles, 2);
}

static void scs_input_midi_byte(struct scs *scs,
                                struct snd_rawmidi_substream *stream,
                                u8 byte)
{
        if (scs->input_escape_count > 0) {
                scs_input_escaped_byte(stream, byte);
                scs->input_escape_count--;
                if (scs->input_escape_count == 0)
                        snd_rawmidi_receive(stream, (const u8[]) { 0xf7 }, 1);
        } else if (byte == 0xf9) {
                snd_rawmidi_receive(stream, sysex_escape_prefix,
                                    ARRAY_SIZE(sysex_escape_prefix));
                scs_input_escaped_byte(stream, 0x00);
                scs_input_escaped_byte(stream, 0xf9);
                scs->input_escape_count = 3;
        } else {
                snd_rawmidi_receive(stream, &byte, 1);
        }
}

static void scs_input_packet(struct scs *scs,
                             struct snd_rawmidi_substream *stream,
                             const u8 *data, unsigned int bytes)
{
        unsigned int i;

        if (data[0] == HSS1394_TAG_USER_DATA) {
                for (i = 1; i < bytes; ++i)
                        scs_input_midi_byte(scs, stream, data[i]);
        } else {
                snd_rawmidi_receive(stream, sysex_escape_prefix,
                                    ARRAY_SIZE(sysex_escape_prefix));
                for (i = 0; i < bytes; ++i)
                        scs_input_escaped_byte(stream, data[i]);
                snd_rawmidi_receive(stream, (const u8[]) { 0xf7 }, 1);
        }
}

static struct snd_rawmidi_ops input_ops = {
        .open = scs_input_open,
        .close = scs_input_close,
        .trigger = scs_input_trigger,
};

static int scs_create_midi(struct scs *scs)
{
        struct snd_rawmidi *rmidi;
        int err;

        err = snd_rawmidi_new(scs->card, "SCS.1x", 0, 1, 1, &rmidi);
        if (err < 0)
                return err;
        snprintf(rmidi->name, sizeof(rmidi->name),
                 "%s MIDI", scs->card->shortname);
        rmidi->info_flags = SNDRV_RAWMIDI_INFO_OUTPUT |
                            SNDRV_RAWMIDI_INFO_INPUT |
                            SNDRV_RAWMIDI_INFO_DUPLEX;
        rmidi->private_data = scs;
        snd_rawmidi_set_ops(rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT, &output_ops);
        snd_rawmidi_set_ops(rmidi, SNDRV_RAWMIDI_STREAM_INPUT, &input_ops);

        return 0;
}
336
static void handle_hss(struct fw_card *card, struct fw_request *request,
                       int tcode, int destination, int source, int generation,
                       unsigned long long offset, void *data, size_t length,
                       void *callback_data)
{
        struct scs *scs = callback_data;
        struct snd_rawmidi_substream *stream;

        if (offset != scs->hss_handler.offset) {
                fw_send_response(card, request, RCODE_ADDRESS_ERROR);
                return;
        }
        if (tcode != TCODE_WRITE_QUADLET_REQUEST &&
            tcode != TCODE_WRITE_BLOCK_REQUEST) {
                fw_send_response(card, request, RCODE_TYPE_ERROR);
                return;
        }

        if (length >= 1) {
                stream = ACCESS_ONCE(scs->input);
                if (stream)
                        scs_input_packet(scs, stream, data, length);
        }

        fw_send_response(card, request, RCODE_COMPLETE);
}

static int scs_init_hss_address(struct scs *scs)
{
        __be64 data;
        int err;

        data = cpu_to_be64(((u64)HSS1394_TAG_CHANGE_ADDRESS << 56) |
                           scs->hss_handler.offset);
        err = snd_fw_transaction(scs->unit, TCODE_WRITE_BLOCK_REQUEST,
                                 HSS1394_ADDRESS, &data, 8);
        if (err < 0)
                dev_err(&scs->unit->device, "HSS1394 communication failed\n");

        return err;
}

static void scs_card_free(struct snd_card *card)
{
        struct scs *scs = card->private_data;

        fw_core_remove_address_handler(&scs->hss_handler);
        kfree(scs->buffer);
}

static int scs_probe(struct device *unit_dev)
{
        struct fw_unit *unit = fw_unit(unit_dev);
        struct fw_device *fw_dev = fw_parent_device(unit);
        struct snd_card *card;
        struct scs *scs;
        int err;

        err = snd_card_create(-16, NULL, THIS_MODULE, sizeof(*scs), &card);
        if (err < 0)
                return err;
        snd_card_set_dev(card, unit_dev);

        scs = card->private_data;
        scs->card = card;
        scs->unit = unit;
        tasklet_init(&scs->tasklet, scs_output_tasklet, (unsigned long)scs);

Wed 20 Feb 2013 08:54:27
>>43738751
To drill your back passage. Also.

