Search results for: feature noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2557

187 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it helps capture patients' emotional behaviour. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance because of their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and categorizing it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data, and the existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, a 3-dimensional projection of emotions. Monoamine neurotransmitters are chemical messengers in the brain that transmit signals when emotions are perceived, and the cube aims to explain the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
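As a rough sketch of the feature pipeline described above, the following code computes MFCCs with librosa and feeds them, image-like, to a small Keras CNN. The file name, the 40 coefficients, the layer sizes, and the 7-class output head are illustrative assumptions, not the actual Emo-CNN configuration.

```python
import librosa
import numpy as np
import tensorflow as tf

# MFCCs treated as a 2D "image" and fed to a small CNN (illustrative sketch only).
y, sr = librosa.load("utterance.wav", sr=16000)        # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)     # shape: (40, n_frames)
x = mfcc[np.newaxis, ..., np.newaxis]                  # shape: (1, 40, n_frames, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=x.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),    # e.g. the 7 Emo-DB categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.predict(x).shape)                          # (1, 7) class probabilities
```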

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 124
186 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic and lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, yielding 290 textural feature descriptors. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and the combined minimization of classification error probability and average correlation coefficient between the chosen features (POE + ACC). Each method returned the ten features occupying the top places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features arranged in a different order; in both rankings, the 50th percentile (Perc.50%) was in first place, and the next selected features came from the co-occurrence matrix. The sets of features selected in this way were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT). To assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%; the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value was obtained (ACC = 90%) together with the highest values of TNR = 88.3% and NPV = 88.3%, while the other two parameters remained close to the highest, TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that classification trees are an effective method for classifying texture features, allowing the condition of spongy tissue to be identified for healthy cases and those with osteoporosis.
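A minimal sketch of the selection-plus-tree workflow, with mutual information choosing 10 of the 290 descriptors and a CART decision tree standing in for the tree learners evaluated in the study; the data files and array shapes are hypothetical placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical placeholders for the 120 x 290 descriptor matrix and its labels.
X = np.load("texture_features.npy")   # (120, 290) texture descriptors
y = np.load("labels.npy")             # 0 = healthy, 1 = osteoporotic

# Mutual information keeps the 10 top-ranked features, then a CART tree classifies.
pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    DecisionTreeClassifier(random_state=0),
)
accuracy = cross_val_score(pipe, X, y, cv=10).mean()
print(f"10-fold cross-validated accuracy: {accuracy:.1%}")
```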

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 144
185 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to one type of externally heated engine – the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy; this acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used way of transforming acoustic energy to electric energy is a linear generator or a conventional generator with a crank mechanism. In both cases a piston is used, and the main disadvantages of using a piston are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bi-directional impulse turbine as an energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them; a radial impulse turbine has a more complicated design and is more efficient. The most appropriate type was chosen: an axial impulse turbine, which has a simpler design than a radial turbine and similar efficiency. The peculiarities of calculating such an impulse turbine are discussed, including the changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system the pressure changes continuously according to a certain law owing to the generation of acoustic waves; the peak pressure values are the amplitude, which determines the acoustic power. The gas flowing in the thermoacoustic system periodically changes its direction, so its mean velocity is equal to zero, but its peak values can be used to drive the bi-directional turbine. In contrast to a conventional turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. Then, 3D modeling and numerical study of the impulse turbine were carried out, and the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bi-directional impulse turbine is advisable: as a converter it is comparable with linear electric generators in its characteristics, but its service life will be longer and the engine itself smaller because of the rotary motion of the turbine.
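To make the link between pressure amplitude and acoustic power concrete, the sketch below evaluates the standard time-averaged acoustic-power expression for a duct cross-section; only the 1.7 kPa amplitude comes from the abstract, while the velocity amplitude, duct area, and phase are assumed values chosen purely for illustration.

```python
import math

# Time-averaged acoustic power carried through a duct cross-section:
#   W = 0.5 * |p1| * |u1| * A * cos(phi)
p1 = 1.7e3              # pressure amplitude, Pa (from the abstract)
u1 = 6.0                # velocity amplitude, m/s (assumed)
area = 0.03             # duct cross-sectional area, m^2 (assumed, ~20 cm diameter)
phi = math.radians(10)  # phase angle between pressure and velocity (assumed)

power = 0.5 * p1 * u1 * area * math.cos(phi)
print(f"Acoustic power available to the turbine: {power:.0f} W")  # ~150 W with these assumptions
```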

Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 347
184 Reagentless Detection of Urea Based on ZnO-CuO Composite Thin Film

Authors: Neha Batra Bali, Monika Tomar, Vinay Gupta

Abstract:

A reagentless biosensor for the detection of urea based on a ZnO-CuO composite thin film is presented in the following work. Biosensors have immense potential for varied applications ranging from environmental and clinical testing to health care and cell analysis. The immense growth in the field of biosensors is due to today's huge demand for techniques that are both cost-effective and accurate for the prevention of disease manifestation. The human body comprises numerous biomolecules which, at their optimum levels, are essential for its functioning; however, mismanaged levels of these biomolecules result in major health issues. Urea is one of the key biomolecules of interest, and its estimation is of paramount significance not only for the healthcare sector but also from an environmental perspective. If the level of urea in human blood/serum is abnormal, i.e., above or below the physiological range (15-40 mg/dl), it may lead to conditions such as renal failure, hepatic failure, nephritic syndrome, cachexia, urinary tract obstruction, dehydration, shock, burns, and gastrointestinal disorders. Various metal nanoparticles, conducting polymers, metal oxide thin films, etc. have been exploited as matrices to immobilize urease for the fabrication of urea biosensors. Among them, zinc oxide (ZnO), a semiconductor metal oxide with a wide band gap, is of immense interest as an efficient matrix in biosensors by virtue of its natural abundance, biocompatibility, good electron communication features, and high isoelectric point (9.5). In spite of being such an attractive candidate, ZnO does not possess a redox couple of its own, which necessitates the use of electroactive mediators for electron transfer between the enzyme and the electrode, thereby hindering the realization of integrated and implantable biosensors. In the present work, an effort has been made to fabricate a matrix based on a ZnO-CuO composite prepared by the pulsed laser deposition (PLD) technique in order to incorporate redox properties into the ZnO matrix and to utilize it for reagentless biosensing applications. The prepared bioelectrode Urs/(ZnO-CuO)/ITO/glass exhibits high sensitivity (70 µA mM⁻¹ cm⁻²) for the detection of urea (5-200 mg/dl) with high stability (shelf life ˃ 10 weeks) and good selectivity (interference ˂ 4%). The enhanced sensing response obtained for the composite matrix is attributed to the efficient electron exchange between the ZnO-CuO matrix and the immobilized enzymes, and the subsequent fast transfer of the generated electrons to the electrode via the matrix. The response is encouraging for fabricating a reagentless urea biosensor based on the ZnO-CuO matrix.
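For readers unfamiliar with how such a sensitivity figure is derived, a minimal sketch is given below: a linear calibration of response current against urea concentration, with the slope normalized by electrode area. The calibration currents and electrode area are hypothetical numbers, not data from the paper.

```python
import numpy as np

# Hypothetical calibration data (not measurements from the paper).
conc_mg_dl = np.array([5, 25, 50, 100, 150, 200])        # urea concentration, mg/dl
current_uA = np.array([16, 75, 148, 293, 440, 585])      # response current, uA (assumed)
area_cm2 = 0.25                                          # working electrode area, cm^2 (assumed)

# Urea molar mass is ~60 g/mol, so 1 mM corresponds to 6 mg/dl.
conc_mM = conc_mg_dl / 6.0

slope_uA_per_mM, _ = np.polyfit(conc_mM, current_uA, 1)  # linear calibration slope
sensitivity = slope_uA_per_mM / area_cm2                 # uA mM^-1 cm^-2
print(f"Sensitivity: {sensitivity:.0f} uA mM^-1 cm^-2")  # ~70 with these made-up numbers
```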

Keywords: biosensor, reagentless, urea, ZnO-CuO composite

Procedia PDF Downloads 267
183 Modification of Escherichia coli PtolT Expression Vector via Site-Directed Mutagenesis

Authors: Yakup Ulusu, Numan Eczacıoğlu, İsa Gökçe, Helen Waller, Jeremy H. Lakey

Abstract:

Besides having the appropriate amino acid sequence, a protein must adopt the correct conformation in order to perform its function. Attaining this conformation depends on the amino acid sequence of the primary structure, hydrophobic interactions, the chaperones and enzymes in charge of folding, etc. Misfolded proteins are not functional and tend to aggregate. Disulfide cross-links originating from cysteine residues stabilize the functional conformation: when two cysteine residues come side by side, a disulfide bond is established, forming a cystine bridge. Owing to this feature, cysteine plays an important role in the formation of the three-dimensional structure of many proteins. There are two cysteine residues (C44, C69) in the Tol-A-III protein. Unlike a protein's own native disulfide bonds, any non-specific cystine bridge causes a change in the three-dimensional structure of the protein. Proteins can be expressed in various host cells either directly or as fusion (chimeric) proteins. As a result of overproduction of recombinant proteins, insoluble protein can aggregate in the host cell, forming dense deposits called inclusion bodies. In general, fusion proteins are produced to provide affinity tags, to make proteins more soluble, and to allow the production of certain toxic proteins via fusion protein expression systems such as pTolT. Proteins can be modified using site-directed mutagenesis. In this way, the formation of non-specific disulfide cross-links in a fusion protein expression system can be prevented by replacing the cysteine present with another amino acid such as serine or glycine. To do this, we need a DNA molecule containing the gene that encodes the target protein and primers designed for the mutation according to the site-directed mutagenesis reaction. This study aimed to replace the cysteine-encoding codon TGT with the serine-encoding codon AGT. For this purpose, sense and reverse primers were designed and used in the site-directed mutagenesis reaction. Several new copies of the template plasmid DNA were formed with the above-mentioned mutagenic primers via the polymerase chain reaction (PCR). The PCR product consists of both the master template DNA (wild type) and new DNA sequences containing the mutation; the Dpn-I restriction endonuclease, which is specific for methylated DNA, cuts the former, eliminating the master template DNA. E. coli cells obtained after transformation were incubated in LB medium with antibiotic. After purification of plasmid DNA from E. coli, the presence of the mutation was confirmed by DNA sequence analysis. This newly developed plasmid is called pTolT-δ.
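A small sketch of the primer-design step, assuming a placeholder coding sequence rather than the actual Tol-A-III gene or the primers used in the study; it simply swaps a TGT codon for AGT and returns the sense and antisense oligonucleotides with flanking bases.

```python
# Illustrative mutagenic-primer design for a TGT (Cys) -> AGT (Ser) substitution.
# The sequence, codon position, and flank length are hypothetical placeholders.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def mutagenic_primers(gene: str, codon_index: int, new_codon: str, flank: int = 15):
    """Return (sense, antisense) primers with `flank` bases on each side of the codon."""
    start = codon_index * 3
    sense = gene[start - flank:start] + new_codon + gene[start + 3:start + 3 + flank]
    return sense, reverse_complement(sense)

# Placeholder coding sequence containing a TGT codon at codon index 5 (0-based).
gene = "ATGGCTAGCGAAGTTTGTGGTCACCTGAAAGATCTGCGT" + "A" * 20
sense, antisense = mutagenic_primers(gene, codon_index=5, new_codon="AGT")
print(sense)
print(antisense)
```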

Keywords: site directed mutagenesis, Escherichia coli, pTolT, protein expression

Procedia PDF Downloads 332
182 The Social Structuring of Mate Selection: Assortative Marriage Patterns in the Israeli Jewish Population

Authors: Naava Dihi, Jon Anson

Abstract:

Love, so it appears, is not socially blind. We show that partner selection is socially constrained and that the freedom to choose is limited by at least two major factors, or capitals: on the one hand, material resources and education, which locate the partners on a scale of personal achievement and economic independence; on the other, the partners' ascriptive belonging to particular ethnic, or origin, groups, differentiated by the groups' social prestige as well as by their culture, history and even physical characteristics. However, the relative importance of achievement and ascriptive factors, as well as the overlap between them, varies from society to society, depending on the society's structure and the factors shaping it. Israeli social structure has been shaped by the waves of new immigrants who arrived over the years. The timing of their arrival, their patterns of physical settlement and their occupational inclusion or exclusion have together created a mosaic of social groups whose principal common feature has been the country of origin from which they arrived. The analysis of marriage patterns helps illuminate the social meanings of these groups and their borders. To the extent that ethnic group membership has meaning for individuals and influences their life choices, the ascriptive factor will gain in importance relative to the achievement factor in their choice of marriage partner. In this research, we examine Jewish Israeli marriage patterns by looking at the marriage choices of 5,041 women aged 15 to 49 who were single at the 1983 census and married at the time of the 1995 census, 12 years later. The database for this study was a file linking respondents from the 1983 and 1995 censuses. In both cases, 5 percent of households were randomly chosen, so that our sample includes about 4 percent of women in Israel in 1983. We present three basic analyses: (1) who was still single in 1983, using personal and household data from the 1983 census (binomial model); (2) who married between 1983 and 1995, using personal and household data from the 1983 census (binomial model); (3) what were the personal characteristics of the women's partners in 1995, using data from the 1995 census (loglinear model). We show (i) that material and cultural capital both operate to delay marriage and to increase the probability of remaining single, and (ii) that while there is a clear association between ethnic group membership and education, endogamy and homogamy both operate as separate forces which constrain (but do not determine) the choice of marriage partner, and thus both serve to reproduce the current pattern of relationships, as well as identifying patterns of proximity and distance between the different groups.
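Purely as an illustration of what the "binomial model" steps amount to computationally, the sketch below fits a logistic regression for marriage between the two censuses; the file and variable names are hypothetical placeholders, not the census variables actually used.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical linked-census extract; columns are placeholders for illustration.
df = pd.read_csv("linked_1983_1995_sample.csv")

# Binomial (logit) model: did a woman single in 1983 marry by 1995,
# as a function of achievement and ascriptive variables?
model = smf.logit(
    "married_by_1995 ~ years_of_schooling + C(origin_group) + age_1983",
    data=df,
).fit()
print(model.summary())
```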

Keywords: Israel, nuptiality, ascription, achievement

Procedia PDF Downloads 89
181 Deasphalting of Crude Oil by Extraction Method

Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov

Abstract:

Asphaltenes are the heavy fraction of crude oil and are known in the oilfield for their ability to plug wells, surface equipment and the pores of geological formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). The metal content was analyzed by ICP-MS, and the spectral characteristics of the deasphalted oil were obtained by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, heteroatoms (e.g., S, N), which are highly concentrated in asphaltenes, cause further problems: environmental pollution, corrosion, and poisoning of catalysts. The main objective of this work is to study the effect of the deasphalting process on crude oil in order to improve its properties and the efficiency of downstream recycling processes. Solvent extraction experiments with organic solvents were carried out on crude oil from JSC “Pavlodar Oil Chemistry Refinery”. The experimental results show that the deasphalting process also decreases the Ni and V content of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide and mercaptans is absorption with chemical reagents directly in the oil residue during production, since asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective oil refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltene part of the crude; the oil is therefore pretreated by deasphalting, because asphaltenes tend to form coke or to consume large quantities of hydrogen. Removing the asphaltenes leads to partial demetallization, i.e., the V/Ni and organic compounds with heteroatoms are removed together with the asphaltenes. Intramolecular complexes are relatively well studied, the porphyrin complexes of vanadyl (VO²⁺) and nickel (Ni) being the classical example. As a result of the V/Ni studies by ICP-MS, the effect of the different deasphalting solvents on metal extraction was determined and the best organic solvent was selected: cyclohexane (C6H12) proved to be the best deasphalting solvent, removing 51.2% of the V and 66.4% of the Ni. This paper also presents the results of a study of the physicochemical properties and FTIR spectral characteristics of the oil with a view to establishing its hydrocarbon composition. The information about the specifics of the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics, which can be useful when considering questions of the origin and geochemical conditions of oil accumulation, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes, and the role of the deasphalted crude oil fractions in asphaltene stability is described.

Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy

Procedia PDF Downloads 218
180 Learning to Transform, Transforming to Learn: An Exploration of Teacher Professional Learning in the 4Cs (Communication, Collaboration, Creativity and Critical Reflection) in the Primary (K-6) Setting

Authors: Susan E Orlovich

Abstract:

Ongoing, effective teacher professional learning is acknowledged as a critical influence on teacher practice. However, it is unclear whether the elements of effective professional learning result in transformed teacher practice in the classroom. This research project is interested in 4C teacher professional learning. The professional learning practices that assist teachers in transforming their practice to integrate the 4C capabilities seldom feature in the academic literature. The 4Cs are a shorthand way of representing the concepts of communication, collaboration, creativity, and critical reflection and refer to the capabilities needed for deeper learning, personal growth, and effective participation in society. The New South Wales curriculum review (2020) acknowledges that identifying, teaching, and assessing the 4C capabilities are areas of challenge for teachers. However, it also recognises that it is essential for teachers to build the confidence and capacity to understand, teach and assess the capabilities necessary for learners to thrive in the 21st century. This qualitative research project explores the professional learning experiences of sixteen teachers in four different primary (K-6) settings in Sydney, Australia, who are learning to integrate, teach and assess the 4Cs. The project draws on the Theory of Practice Architectures as a framework to analyse and interpret teachers' experiences in each site. The sixteen participants are teachers from four primary settings and include early career teachers, experienced teachers, and teachers in leadership roles (including the principal). In addition, some of the participants are learning within a Community of Practice (CoP), as their school setting is engaged in a 4C professional learning Community of Practice. Qualitative and arts-informed research methods are utilised to examine the cultural-discursive, social-political, and material-economic practice arrangements of each site, to explore how these arrangements may have shaped the professional learning experiences of teachers and, in turn, influenced the teaching of the 4Cs in the setting. The research is in the data analysis stage (October 2022), with preliminary findings pending. The research objective is to investigate the elements of the professional learning experiences undertaken by teachers to teach the 4Cs in the primary setting. The lens of practice architectures theory is used to identify the influence of the practice architectures on critical praxis in each site and to examine how the practice arrangements enable or constrain the teaching of the 4C capabilities. This research aims to offer deep insight into the practice arrangements which may enable or constrain teacher professional learning in the 4Cs. Such insight may contribute to a better understanding of the practices that enable teachers to transform their practice to achieve the integration, teaching, and assessment of the 4C capabilities.

Keywords: 4Cs, communication, collaboration, creativity, critical reflection, teacher professional learning

Procedia PDF Downloads 80
179 Peculiarities of Snow Cover in Belarus

Authors: Aleh Meshyk, Anastasiya Vouchak

Abstract:

On average, snow covers Belarus for 75 days in the south-west and 125 days in the north-east. During the cold season the snowpack is often destroyed by thaws, especially at the beginning and end of winter. Over 50% of thawing days have a positive mean daily temperature, which results in complete snowmelt; for instance, in December 10% of thaws occur at a mean daily temperature of 4 °C. A stable snowpack lying for over a month forms in the north-east in the first ten days of December, but in the south-west only in the last ten days of December. The cover disappears in March: in the north-east in the last ten days of the month, in the south-west in the first ten days. This research takes into account that precipitation falling during the cold season can be not only liquid or solid but also of mixed type (about 10-15% a year). Another important feature of the snow cover is its density. In Belarus, the density of freshly fallen snow ranges from 0.08-0.12 g/cm³ in the north-east to 0.12-0.17 g/cm³ in the south-west. Over time, snow settles under its own weight and after melting and refreezing. The average density of snow at the end of January is 0.23-0.28 g/cm³, in February 0.25-0.30 g/cm³, and in March 0.29-0.36 g/cm³; it can exceed 0.50 g/cm³ if the snow melts too fast, and the density of melting snow saturated with water can reach 0.80 g/cm³. The average maximum snow depth is 15-33 cm, with the minimum in Brest and the maximum in Lyntupy; the maximum registered snow depth ranges within 40-72 cm. The water content of the snowpack, like its depth and density, reaches its maximum in the second half of February to the beginning of March. The spatial distribution of the amount of liquid in snow follows the trend described above, i.e., it increases from south-west to north-east and on the highlands. The average annual value of the maximum water content in snow ranges from 35 mm in the south-west to 80-100 mm in the north-east, and exceeds 80 mm on the central Belarusian highland; in certain years it exceeds the average annual values 2-3 times. A moderate water content in snow (80-95 mm) is characteristic of the western highlands. The maximum water content in snow varies over the country from 107 mm (Brest) to 207 mm (Novogrudok). The maximum water content in snow also varies significantly in time (between years), which is confirmed by the high coefficient of variation (Cv): the maxima (0.62-0.69) are in the south and south-west of Belarus, and the minima (0.42-0.46) in central and north-eastern Belarus, where the snow cover is more stable. Since 1987 most gauge stations in Belarus have observed a trend towards a decrease in the water content in snow, which is confirmed by this research. The deepest snow cover forms on the highlands in central and north-eastern Belarus; the Novogrudok, Minsk, Volkovysk, and Sventayny highlands form a natural orographic barrier which prevents snow-bringing air masses from penetrating into the interior of the country. The research is based on data from gauge stations in Belarus registered from 1944 to 2014.

Keywords: density, depth, snow, water content in snow

Procedia PDF Downloads 134
178 Examining Gender Bias in the Sport Concussion Assessment Tool 3 (SCAT3): A Differential Item Functioning Analysis in NCAA Sports

Authors: Rachel M. Edelstein, John D. Van Horn, Karen M. Schmidt, Sydney N. Cushing

Abstract:

As a consequence of sports-related concussions, female athletes have been documented as reporting more symptoms than their male counterparts, in addition to incurring longer periods of recovery. However, the role of sex and its potential influence on symptom reporting and recovery outcomes in concussion management has not been completely explored. The present study aims to investigate the relationship between female concussion symptom severity and the presence of assessment bias. The Sport Concussion Assessment Tool 3 (SCAT3), collected by the NCAA and DoD CARE Consortium, was quantified at five different time points post-concussion. The sample comprised N = 1,258 NCAA athletes: n = 473 female (soccer, rugby, lacrosse, ice hockey) and n = 785 male athletes (football, rugby, lacrosse, ice hockey). A polytomous Item Response Theory (IRT) Graded Response Model (GRM) was used to assess the relationship between sex and symptom reporting, and Differential Item Functioning (DIF) and Differential Group Functioning (DGF) were used to examine potential group-level bias. DIF interactions were used to explore the impact of sex on symptom reporting among NCAA male and female athletes throughout and after their concussion recovery. After Benjamini-Hochberg corrections, significant DIF was detected in only a few items; one symptom, “Pressure in Head” (-0.29, p = 0.04 vs. -0.20, p = 0.04), was statistically significant at both < 6 hours and 24-48 hours. This implies that at < 6 hours males were 29% less likely to endorse “Pressure in Head” than female athletes, and 20% less likely at 24-48 hours. Overall, the DGF indicated significant group differences, suggesting that male athletes might be at a higher risk of returning to play prematurely (logits = -0.38, p < 0.001). Analysis of the SCAT3 also revealed a clinically relevant trend: twelve of the twenty-two symptoms suggest higher difficulty in female athletes at three or more of the five time points. These symptoms are Balance Problems, Blurry Vision, Confusion, Dizziness, Don't Feel Right, Feel in Fog, Feel Slowed Down, Low Energy, Neck Pain, Sensitivity to Light, Sensitivity to Noise, and Trouble Falling Asleep. Despite the lack of statistical significance, this tendency runs contrary to current literature stating that males may be unclear about their symptoms while females may be more honest in reporting them. Further research, including possible modifying socioecological factors, is needed to determine whether females consistently experience more symptoms and require longer recovery times or whether, more parsimoniously, males tend to present their symptoms and readiness for play differently than females. Such research will help improve the validity of current assumptions concerning male as compared to female head injuries and optimize individualized treatment of sports-related head injuries.
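As an illustration of a DIF screen, the sketch below uses a simpler logistic-regression (Swaminathan-Rogers style) procedure on a dichotomized symptom rather than the graded response model applied in the study; the data file and column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level extract of SCAT3 responses at one time point.
df = pd.read_csv("scat3_item_responses.csv")
df["item_endorsed"] = (df["pressure_in_head"] > 0).astype(int)  # dichotomize one symptom

# Uniform DIF: group (sex) effect after conditioning on total symptom severity;
# non-uniform DIF: the severity-by-sex interaction term.
model = smf.logit("item_endorsed ~ total_severity * C(sex)", data=df).fit()
print(model.summary())
```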

Keywords: female athlete, sports-related concussion, item response theory, concussion assessment

Procedia PDF Downloads 41
177 Road Map to Health: Palestinian Workers in Israel's Construction Sector

Authors: Maya de Vries Kedem, Abir Jubran, Diana Baron

Abstract:

Employment in Israel offers Palestinian workers an income double what they can earn in the West Bank. The need to support their families leads many educated Palestinians to forgo finding work in their profession in the Palestinian Authority and instead look for employment in those sectors open to them in Israel, particularly the construction, agriculture, and industry sectors. The International Labor Organization estimated that about 1,200 workers in Israel die every year because of occupational diseases (diseases caused by working conditions). Construction workers in Israel are constantly exposed to dust, noise, chemical materials, awkward working postures requiring prolonged bending, repetitive motion, and other risk factors that can lead to illness and death. Occupational health is vastly neglected in Israel, and construction workers are particularly at risk. As of June 2022, the Israeli quota in the construction sector for Palestinian workers stood at 80,000. Kav LaOved released a new study on the state of occupational health among Palestinian workers employed in construction in Israel. The study, Roadmap to Health: Palestinian Workers in Israel's Construction Sector, reviews the extent to which the health of Palestinian workers is protected at work in Israel. The report includes analysis of a survey administered to 256 workers as well as interviews with 10 workers and with 5 Israeli occupational health experts. Report highlights: • Among survey respondents, 63.9% stated that safety procedures to protect their health are rarely followed in their workplace (e.g., taking breaks, using protective gear, following restrictions on lifting heavy items, and having inspectors regularly on site to monitor safety). • All 256 Palestinian workers who participated in the survey said that their health has been directly or indirectly harmed by working in Israel and reported suffering from the following problems: orthopedic problems such as joint, hand, leg or knee problems (100%); headaches (75%); back problems (36.3%); eye problems (23.8%); breathing problems (17.6%); chronic pain (14.8%); heart problems (7.8%); and skin problems (3.5%). • Workers who are injured or do not feel well often continue working for fear of losing their payment for that day. About half of the 256 survey respondents reported that they pay brokerage fees to find an employer with a work permit, often paying between 2,000 and 3,000 NIS per month. “I have an obligation—I pay about NIS 120 a day for my permit, [and] I have to pay for it whether I work or not," a worker said. • Most Palestinian construction workers suffer from stress and mental health problems. Workers pointed to several issues that greatly affect their mood and mental state: daily crossings at crowded checkpoints where workers stand for hours; lack of sleep due to leaving home daily at 3:00-3:30 am; commuting two to four hours to work in each direction; and abusive work environments. A worker told KLO that the sight of thousands of workers standing together at the checkpoint causes “high blood pressure and the feeling that you are going to be squeezed.” Another said, “I felt that my bones would break.” In the survey, workers reported suffering from insomnia (70.1%), breathing difficulties (35.8%), chest pressure (27.6%), or rapid pulse rate (12.2%).

Keywords: construction sector, Palestinian workers, occupational health, Israel, occupation

Procedia PDF Downloads 59
176 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces

Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang

Abstract:

Brain-machine interfaces (BMIs) are a field rich in exploration opportunities, in which neural activity is used to interface with a myriad of external devices. Research and intensive development in this area have spread from the medical field to the gaming and entertainment industry and to safety and security applications. The technology has also been extended to therapy for neurological disorders, such as obsessive-compulsive disorder and Parkinson's disease, by delivering current pulses to specific regions of the brain. Nonetheless, developing a brain-machine interface system that can observe, record and alter neural signals in real time will require significant effort to overcome the obstacles involved in improving such a system without delays in response. To date, the feature size of interface devices and the density of the electrode population remain limitations in achieving seamless BMI performance; current BMI devices have electrode diameters ranging from 10 to 100 microns. To accommodate precise monitoring at the single-cell level, smaller and denser nano-scaled nanowire electrode arrays are therefore vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical systems (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and a bottom anti-reflection coating (BARC) etch. Metallization of the nanowire electrode tip is a key process for optimizing the nanowire's electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, yet these metal contacts outline a size scale that is larger than nanometer-scale building blocks, further limiting the potential advantages. Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.

Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide

Procedia PDF Downloads 409
175 Modern Pilgrimage Narratives and India’s Heterogeneity

Authors: Alan Johnson

Abstract:

This paper focuses on modern pilgrimage narratives about sites affiliated with Indian religious expressions located both within and outside India. The paper uses a multidisciplinary approach to examine poetry, personal essays, and online attestations of pilgrimage to illustrate how non-religious ideas coexist with outwardly religious ones, exemplifying a characteristically Indian form of syncretism that pre-dates Western ideas of pluralism. The paper argues that the syncretism on display in these modern creative works refutes the current exclusionary vision of India as a primordially Hindu-nationalist realm. A crucial premise of this argument is that the narrative's intrinsic heteroglossia, so evident in India's historically rich variety of stories and symbols, belies this reactionary version of Hindu nationalism. Equally important to this argument, therefore, is the vibrancy of Hindu sites outside India, such as the Batu Caves temple complex in Kuala Lumpur, Malaysia. The literary texts examined in this paper include, first, Arun Kolatkar's famous 1976 collection of poems, titled Jejuri, about a visit to the pilgrimage site of the same name in Maharashtra. Here, the modern, secularized visitor from Bombay (Mumbai) contemplates the effect of the temple complex on himself and on the other, more worshipful visitors. Kolatkar's modernist poems reflect the narrator's typically modern-Indian ambivalence toward holy ruins, for although they do not evoke a conventionally religious feeling in him, they nevertheless possess an aura of timelessness that questions the narrator's time-conscious sensibility. The paper bookends Kolatkar's Jejuri with considerations of an early-twentieth-century text, online accounts by visitors to the Batu Caves, and a recent, more conventional Hindu account of pilgrimage. For example, the pioneering graphic artist Mukul Chandra Dey published My Pilgrimages to Ajanta and Bagh in 1917, devoting an entire chapter to the life of the Buddha as a means of illustrating the layering of stories that is a characteristic feature of sacred sites in India. In a different but still syncretic register, Jawaharlal Nehru, India's first prime minister and a committed secularist, proffers India's ancient pilgrimage network as a template for national unity in his classic 1946 autobiography The Discovery of India. Narrative is the perfect vehicle for highlighting this layering of sensibilities, for a single text can juxtapose the pilgrim-narrator's description with that of a far older pilgrimage, a juxtaposition that establishes an imaginative connection between otherwise distanced actors, and between them and the reader.

Keywords: India, literature, narrative, syncretism

Procedia PDF Downloads 130
174 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs Quantum Dots (QDs) in a GaAs matrix is a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures light emitted by the QDs is waveguided, which can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 microns) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, low temperature photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after traveling over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4 x10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirms unique properties of this scintillation detector which can be potentially much faster than any currently used inorganic scintillator.
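A quick back-of-the-envelope check of the quoted ~7% collection efficiency, assuming roughly one collected photodiode electron per detected photon (an assumption made here for illustration, not stated in the abstract):

```python
# Consistency check of the quoted ~7% collection efficiency.
light_yield = 240_000          # photons/MeV (from the abstract)
alpha_energy = 5.5             # MeV (alpha-particle test source)
collected_electrons = 8.4e4    # average collected charge (from the abstract)

photons_generated = light_yield * alpha_energy           # ~1.32e6 photons
efficiency = collected_electrons / photons_generated     # assumes ~1 electron per photon
print(f"Collection efficiency ~ {efficiency:.1%}")       # ~6.4%, consistent with ~7%
```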

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 232
173 Lithological Mapping and Iron Deposits Identification in El-Bahariya Depression, Western Desert, Egypt, Using Remote Sensing Data Analysis

Authors: Safaa M. Hassan, Safwat S. Gabr, Mohamed F. Sadek

Abstract:

This study addresses lithological mapping and iron oxide detection in the old mine areas of the El-Bahariya Depression, Western Desert, using ASTER and Landsat-8 remote sensing data. Four old iron ore occurrences, namely the El-Gedida, El-Haraa, Ghurabi, and Nasir mine areas, are found in the El-Bahariya area. The study aims to find new high-potential areas for iron mineralization around the El-Bahariya Depression. Image processing methods such as principal component analysis (PCA) and band-ratio images (b4/b5, b5/b6, b6/b7, and 4/2, 6/7, band 6) were used for lithological identification and mapping, including the iron content of the investigated area. ASTER and Landsat-8 visible and short-wave infrared data were found to help in mapping the ferruginous sandstones, iron oxides, and clay minerals in and around the old mine areas of the El-Bahariya Depression. The Landsat-8 band ratios and principal components of this study showed the distribution of the lithological units well, especially the ferruginous sandstones and iron zones (hematite and limonite), along with the detection of probable high-potential areas for iron mineralization which can be exploited in the future, proving the ability of Landsat-8 and ASTER data to map these features. Minimum Noise Fraction (MNF), Mixture Tuned Matched Filtering (MTMF), and pixel purity index methods, as well as the Spectral Angle Mapper classifier algorithm, successfully discriminated the hematite and limonite content within the iron zones of the study area. Various ASTER image spectra and ASD field spectra of hematite, limonite, and the surrounding rocks were compared and found to be consistent in terms of the presence of absorption features in the range from 1.95 to 2.3 μm for hematite and limonite. The pixel purity index algorithm and two sub-pixel spectral methods, namely Mixture Tuned Matched Filtering (MTMF) and Matched Filtering (MF), were applied to the ASTER bands to delineate zones rich in iron oxides (hematite and limonite) within the rock units. The results were validated in the field by comparing image spectra of the spectrally anomalous zones with the USGS resampled laboratory spectra of hematite and limonite samples using ASD measurements. A number of iron-oxide-rich zones, in addition to the main surface exposures of the El-Gedida mine, were confirmed in the field. The proposed method is a successful application of spectral mapping of iron oxide deposits in the exposed rock units (i.e., ferruginous sandstone), and the present approach to processing both ASTER and ASD hyperspectral data can be used to delineate iron-rich zones occurring within similar geological provinces in other parts of the world.
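A minimal sketch of the band-ratio step with rasterio and NumPy; the file names and band assignments are placeholders rather than the actual ASTER/Landsat-8 scenes and ratio combinations tuned in the study.

```python
import numpy as np
import rasterio

def read_band(path):
    """Read the first band of a single-band GeoTIFF as float32."""
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

# Hypothetical per-band files for the scene of interest.
b4, b5, b6, b7 = (read_band(f"aster_b{i}.tif") for i in (4, 5, 6, 7))
eps = 1e-6  # guard against division by zero in background pixels

ratio_45 = b4 / (b5 + eps)
ratio_56 = b5 / (b6 + eps)
ratio_67 = b6 / (b7 + eps)

# Stack the ratios into a false-colour composite for visual lithological discrimination.
composite = np.dstack([ratio_45, ratio_56, ratio_67])
print(composite.shape)
```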

Keywords: Landsat-8, ASTER, lithological mapping, iron exploration, western desert

Procedia PDF Downloads 116
172 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detect student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature to the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability, human observation or reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
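A short sketch of the evaluation protocol described above, random forest with leave-one-out cross-validation in scikit-learn; the feature and label files are placeholders standing in for the nine extracted multimodal features and the CPT-derived engagement labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical placeholders for the extracted features and objective labels.
X = np.load("session_features.npy")    # (n_samples, 9) multimodal feature matrix
y = np.load("engagement_labels.npy")   # 0 = disengaged, 1 = engaged

clf = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())  # leave-one-out CV predictions
print(classification_report(y, y_pred, target_names=["disengaged", "engaged"]))
```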

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 67
171 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the prevalence of fund families' affiliation with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can potentially be an important source of superior performance, or of cost, to the affiliated mutual fund investors. On the one hand, financial conglomerate affiliation offers the mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, a conflict of interest is bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information advantage possessed by the investment banks, or whether it costs affiliated mutual fund shareholders because of the conflict of interest. Robust to alternative risk adjustments and cross-sectional regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to the holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that the investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent. Overall, the findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, thus strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as the sharing of private material information, that benefit mutual fund investors because of the affiliation with a financial conglomerate. The research also has a normative dimension: such incestuous behavior, insider trading, and the exploitation of superior information not only negatively affect unaffiliated fund investors but also lead to an unfair and unlevel playing field in the financial market.
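As one common form of the risk adjustment mentioned above, a Carhart four-factor alpha regression for an affiliated-fund portfolio might look like the sketch below; the data file and column names are hypothetical, and this is not necessarily the exact specification used in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly panel of portfolio returns and factor returns.
df = pd.read_csv("fund_and_factor_returns.csv")
df["excess_ret"] = df["fund_ret"] - df["rf"]   # portfolio return in excess of the risk-free rate

# Four-factor (market, size, value, momentum) regression; the intercept is the monthly alpha.
model = smf.ols("excess_ret ~ mkt_rf + smb + hml + umd", data=df).fit()
print(f"Monthly four-factor alpha: {model.params['Intercept']:.4%}")
```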

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 157
170 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

Applying Software Engineering (SE) processes is of vital importance and a key requirement in critical, complex, large-scale systems, for example safety systems, security service systems, and network systems. Inevitably, such systems carry risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage and power, places such networks under stringent limitations, for instance on lifetime (i.e., the period of operation) and security. The importance of WSN applications, which are found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, security solutions have been developed by researchers in the form of software-based network Intrusion Detection Systems (IDSs). However, it has been observed that these IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. The problem is thus the lack of coverage of all malicious behaviours in the proposed IDSs, leading to unpleasant results such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that an IDS imposes on a WSN. In other words, not all requirements are implemented and then traced; moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks of current IDSs are due to researchers and developers not following structured software development processes when developing them, which results in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper studies the importance of Requirement Engineering when developing IDSs. It also studies a set of existing IDSs and illustrates the absence of Requirement Engineering and its effects. Conclusions are then drawn regarding the application of requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, at an acceptable level of performance, accuracy and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 294
169 Impaired Transient Receptor Potential Vanilloid 4-Mediated Dilation of Mesenteric Arteries in Spontaneously Hypertensive Rats

Authors: Ammar Boudaka, Maryam Al-Suleimani, Hajar BaOmar, Intisar Al-Lawati, Fahad Zadjali

Abstract:

Background: Hypertension is increasingly becoming a matter of medical and public health importance. The maintenance of normal blood pressure requires a balance between cardiac output and total peripheral resistance. The endothelium, through the release of vasodilating factors, plays an important role in the control of total peripheral resistance and hence blood pressure homeostasis. Transient Receptor Potential Vanilloid type 4 (TRPV4) is a mechanosensitive non-selective cation channel that is expressed on the endothelium and contributes to endothelium-mediated vasodilation. So far, no data are available about the morphological and functional status of this channel in hypertensive cases. Objectives: This study aimed to investigate whether there is any difference in the morphological and functional features of TRPV4 in the mesenteric artery of normotensive and hypertensive rats. Methods: The functional features of TRPV4 in four experimental animal groups (young and adult Wistar-Kyoto rats, WKY-Y and WKY-A, and young and adult spontaneously hypertensive rats, SHR-Y and SHR-A) were studied by adding 5 µM 4αPDD (a TRPV4 agonist) to mesenteric arteries mounted in a four-chamber wire myograph and pre-contracted with 4 µM phenylephrine. The 4αPDD-induced response was investigated in the presence and absence of 1 µM HC067047 (a TRPV4 antagonist), 100 µM L-NAME (a nitric oxide synthase inhibitor), and the endothelium. The morphological distribution of TRPV4 in the wall of rat mesenteric arteries was investigated by immunostaining. Real-time PCR was used to investigate the mRNA expression level of TRPV4 in the mesenteric arteries of the four groups. The collected data were expressed as mean ± S.E.M., with n equal to the number of animals used (one vessel was taken from each rat). To determine the level of significance, statistical comparisons were performed using Student's t-test, with differences considered significant at p<0.05. Results: 4αPDD induced a relaxation response in the mesenteric arterial preparations (WKY-Y: 85.98% ± 4.18; n = 5) that was markedly inhibited by HC067047 (18.30% ± 2.86; n = 5; p<0.05), endothelium removal (19.93% ± 1.50; n = 5; p<0.05) and L-NAME (28.18% ± 3.09; n = 5; p<0.05). The 4αPDD-induced relaxation was significantly lower in SHR-Y compared to WKY-Y (SHR-Y: 70.96% ± 3.65; n = 6, WKY-Y: 85.98% ± 4.18; n = 5-6, p<0.05). Moreover, the 4αPDD-induced response was significantly lower in WKY-A than in WKY-Y (WKY-A: 75.58 ± 1.30; n = 5, WKY-Y: 85.98% ± 4.18; n = 5, p<0.05). The immunostaining study showed an immunofluorescent signal confined to the endothelial layer of the mesenteric arteries. The expression of TRPV4 mRNA in SHR-Y was significantly lower than in WKY-Y (SHR-Y: 0.67 RU ± 0.34; n = 4, WKY-Y: 2.34 RU ± 0.15; n = 4, p<0.05). Furthermore, TRPV4 mRNA expression in WKY-A was lower than in WKY-Y (WKY-A: 0.62 RU ± 0.37; n = 4, WKY-Y: 2.34 RU ± 0.15; n = 4, p<0.05). Conclusion: Stimulation of TRPV4, which is expressed on the endothelium of the rat mesenteric artery, triggers an endothelium-mediated relaxation response that markedly decreases with hypertension and with ageing, owing to downregulation of TRPV4 expression.

Keywords: hypertension, endothelium, mesenteric artery, TRPV4

Procedia PDF Downloads 284
168 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the space or the transform domain. Unlike images and video, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented in different ways as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proved useful for hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations according to modifications of the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced concerning the mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities. It was also robust against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, which showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
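
The vertex-norm modification at the heart of such methods can be illustrated with a short, hypothetical sketch (it is not the author's implementation, and the function names, binning strategy, and distortion strength are assumptions): the mesh is centred, the vertex norms are split into bins, and the spread of the norms within each bin is expanded or contracted to encode one watermark bit, after which the vertices are moved back along their original radial directions.

```python
import numpy as np

def embed_watermark(vertices, bits, n_bins=None, strength=0.02):
    """Illustrative vertex-norm watermark embedding (a sketch, not the paper's exact method).

    vertices : (N, 3) array of mesh vertex coordinates
    bits     : iterable of 0/1 watermark bits, one bit per norm bin
    strength : relative amount by which the norm variance is expanded or contracted
    """
    bits = np.asarray(list(bits))
    n_bins = n_bins or len(bits)

    center = vertices.mean(axis=0)            # object centre
    offsets = vertices - center
    norms = np.linalg.norm(offsets, axis=1)   # distance of each vertex from the centre
    units = offsets / norms[:, None]          # radial unit directions (kept fixed)

    # Partition the norms into equal-population bins
    edges = np.quantile(norms, np.linspace(0.0, 1.0, n_bins + 1))
    new_norms = norms.copy()
    for b in range(n_bins):
        mask = (norms >= edges[b]) & (norms <= edges[b + 1])
        if mask.sum() < 2:
            continue
        mu = norms[mask].mean()
        # Bit 1: expand the spread of norms in this bin; bit 0: contract it.
        scale = 1.0 + strength if bits[b % len(bits)] == 1 else 1.0 - strength
        new_norms[mask] = mu + (norms[mask] - mu) * scale

    # Move vertices along their original radial directions
    return center + units * new_norms[:, None]

# Example: embed 8 bits into a random point cloud standing in for a mesh
rng = np.random.default_rng(0)
verts = rng.normal(size=(1000, 3))
marked = embed_watermark(verts, bits=[1, 0, 1, 1, 0, 0, 1, 0])
print(np.abs(marked - verts).max())           # the geometric distortion stays small
```

A blind detector would repeat the centring and binning on a candidate mesh and decide each bit from whether the per-bin norm variance lies above or below its expected value.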

Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing

Procedia PDF Downloads 137
167 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence

Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai

Abstract:

Traditional finance theory neglects the role of the sentiment factor in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment has a stronger effect on stocks that are vulnerable to speculation, hard to value, and risky to arbitrage. These include small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks and non-dividend-paying stocks. Since the introduction of the Chicago Board Options Exchange (CBOE) volatility index (VIX) in 1993, it has been used as a measure of future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the ‘investors’ fear gauge’ by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety and pessimistic expectations of investors about the stock market. On the contrary, low levels of the volatility index reflect a confident and optimistic attitude of investors. Based on the above discussion, we investigate whether market-wide fear, as measured by the volatility index, is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fear-based market sentiment, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017. To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, the change in India VIX is included as an explanatory variable in the Fama-French three-factor model as well as the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variables in the asset pricing regressions. The first portfolio set comprises 4x4 sorts on size and the B/M ratio. The second portfolio set comprises 4x4 sorts on size and the sensitivity beta to changes in IVIX. The third portfolio set comprises 2x3x2 independent triple sorts on size, B/M, and the sensitivity beta to changes in IVIX. We find evidence that the size, value and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation of the current findings in the study.
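
As an illustration of the kind of time-series asset pricing test described above, the following sketch augments a Carhart four-factor regression with the change in the volatility index as an additional explanatory variable. The data here are synthetic placeholders and the column names are assumptions; the study itself uses monthly CNX 500 constituent portfolios and changes in India VIX.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly data: portfolio excess return, four Carhart factors, and the change in IVIX
rng = np.random.default_rng(42)
n = 111  # roughly Jan 2008 - Mar 2017, monthly
df = pd.DataFrame({
    "exc_ret": rng.normal(0.01, 0.05, n),   # portfolio return minus the risk-free rate
    "mkt_rf":  rng.normal(0.008, 0.04, n),  # market excess return
    "smb":     rng.normal(0.002, 0.02, n),  # size factor
    "hml":     rng.normal(0.003, 0.02, n),  # value factor
    "wml":     rng.normal(0.004, 0.03, n),  # momentum factor (Carhart)
    "d_ivix":  rng.normal(0.0, 2.0, n),     # monthly change in the volatility index
})

# Carhart four-factor model augmented with the change in IVIX as a candidate priced factor
X = sm.add_constant(df[["mkt_rf", "smb", "hml", "wml", "d_ivix"]])
model = sm.OLS(df["exc_ret"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 3})
print(model.summary())
# A consistently significant loading on d_ivix across the test portfolios, together with a
# non-zero premium in a cross-sectional stage, would indicate that fear sentiment is priced.
```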

Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing

Procedia PDF Downloads 226
166 Augusto De Campos Translator: The Role of Translation in Brazilian Concrete Poetry Project

Authors: Juliana C. Salvadori, Jose Carlos Felix

Abstract:

This paper aims at discussing the role literary translation has played in the Brazilian Concrete Poetry Movement – an aesthetic, critical and pedagogical project which conceived translation as poiesis, i.e., as both creative and critical work in which the potency (dynamic) of the literary work is unfolded in the interpretive and critical act (energeia) the translating practice demands. We argue that translation, for concrete poets, is conceived within the framework provided by the reinterpretation –or deglutition– of Oswald de Andrade’s anthropophagy – a carefully selected feast from which the poets pick and model their Paideuma. As a case study, we propose to approach and analyze two of Augusto de Campos’s long-term translation projects: the translation of Emily Dickinson’s and E. E. Cummings’s works for Brazilian readers. Augusto de Campos is a renowned poet, translator, critic and one of the founding members of the Brazilian Concrete Poetry movement. Since the 1950s he has produced a consistent body of translated poetry from English-speaking poets in which the translator has explored creative translation processes – transcreation, as the concrete poets have named it. Campos’s translation project regarding E. E. Cummings’s poetry spans forty years: it begins in 1956 with 10 poems and unfolds in 4 works – 20 poem(a)s, 40 poem(a)s, Poem(a)s, re-edited in 2011. His translations of Dickinson’s poetry are published in two works: O Anticrítico (1986), in which he translated 10 poems, and Emily Dickinson Não sou Ninguém (2008), in which the poet-translator added 35 more translated poems. Both projects feature bilingual editions: contrary to common sense, Campos’s translations aim at being read as such: the target readers, to fully enjoy the experience, must be proficient readers of English and also acquainted with the poets in translation – Campos expects us to perform translation criticism, as Antoine Berman has proposed, by assessing the choices he, as both translator and poet, has presented in order to privilege aesthetic information (verse lines, word games, etc.). To readers not proficient in English, his translations play a pedagogical role of educating and preparing them to read both the target poets’ works and concrete poetry works – the detailed essays and prefaces in which the translator emphasizes the selection of works translated and the strategies adopted illuminate his project as a translator: for Cummings, it has led to the obliteration of the more traditional and lyrical/romantic examples of his poetry while highlighting the more experimental aspects and poems; for Dickinson, his project has highlighted the more hermetic traits of her poems. In this work, we analyze Campos’s contribution to the domestic canons of both poets in the Brazilian literary system.

Keywords: translation criticism, Augusto de Campos, E. E. Cummings, Emily Dickinson

Procedia PDF Downloads 260
165 Making Meaning, Authenticity, and Redefining a Future in Former Refugees and Asylum Seekers Detained in Australia

Authors: Lynne McCormack, Andrew Digges

Abstract:

Since 2013, the Australian government has enforced mandatory detention of anyone arriving in Australia without a valid visa, including those subsequently identified as a refugee or seeking asylum. While consistent with the increased use of immigration detention internationally, Australia’s use of offshore processing facilities both during and subsequent to refugee status determination has until recently remained a unique feature of Australia’s program of deterrence. The commonplace detention of refugees and asylum seekers following displacement is a significant and independent source of trauma and a contributory factor in adverse psychological outcomes. Officially, these individuals have no prospect of resettlement in Australia, are barred from applying for substantive visas, and are frequently and indefinitely detained in closed facilities such as immigration detention centres, or alternative places of detention, including hotels. It is also important to note that the limited access to Australia’s immigration detention population made available to researchers often means that data available for secondary analysis may be incomplete or delayed in its release. Further, studies into the lived experience of refugees and asylum seekers are typically cross-sectional and convenience sampled, employing a variety of designs and research methodologies that limit comparability and focus on the immediacy of the individual’s experience. Consequently, how former detainees make sense of their experience, redefine their future trajectory upon release, and recover a sense of authenticity and purpose is unknown. As such, the present study sought the positive and negative subjective interpretations of 6 participants in Australia regarding their lived experiences as refugees and asylum seekers within Australia’s immigration detention system and its impact on their future sense of self. It made use of interpretative phenomenological analysis (IPA), a qualitative research methodology that is interested in how individuals make sense of, and ascribe meaning to, their unique lived experiences of phenomena. Underpinned by phenomenology, hermeneutics, and critical realism, this idiographic study aimed to explore both positive and negative subjective interpretations of former refugees and asylum seekers held in detention in Australia. It sought to understand how they make sense of their experiences, how detention has impacted their overall journey as displaced persons, and how they have moved forward in the aftermath of protracted detention in Australia. Examining the unique lived experiences of previously detained refugees and asylum seekers may inform the future development of theoretical models of posttraumatic growth among this vulnerable population, thereby informing the delivery of future mental health and resettlement services.

Keywords: mandatory detention, refugee, asylum seeker, authenticity, interpretative phenomenological analysis

Procedia PDF Downloads 67
164 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂

Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang

Abstract:

CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial. For a large cost reduction, every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, which is a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, creates fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle and hence in spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water. However, at temperatures and pressures relevant for CO₂ condensation, its surface tension is comparable to that of organic compounds such as pentane, and dropwise condensation of CO₂ is a completely new field of research. Therefore, knowledge of several important parameters, such as the contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs and a high-speed camera. The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces, e.g. copper, aluminium and stainless steel, can be analysed. The experimental setup is built for accurate measurements of the temperature difference between the surface and the condensing vapour and accurate pressure measurements in the vapour. The temperature will be measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.

Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces

Procedia PDF Downloads 239
163 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon, high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet a constant temperature is commonly used in the simulation because it is assumed that the temperature homogenizes after some holding time. Therefore, to be close to the experiment, the real temperature distribution through the specimen is needed before mechanical loading. We present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, thus representing a real temperature distribution within the specimen before deformation. Indeed, most numerical simulations consider a uniform temperature field, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena can be observed, like recrystallization, which requires in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to improve the mechanical properties of the final product. Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so the development of a technique for predicting the initiation of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm providing the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. The Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
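
To make the idea of a non-uniform pre-loading temperature field concrete, the sketch below solves a simple 1D transient heat conduction problem across the half-thickness of a block and shows that the core can still differ from the surface after a finite holding time. This is a generic finite-difference illustration under assumed material properties, not the authors' UAMP algorithm.

```python
import numpy as np

# Assumed properties roughly representative of a medium-carbon steel block
k, rho, cp = 35.0, 7800.0, 600.0           # W/m.K, kg/m^3, J/kg.K
alpha = k / (rho * cp)                     # thermal diffusivity, m^2/s

L = 0.5                                    # assumed half-thickness of the block, m
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                   # explicit stability limit (factor < 0.5)

T = np.full(nx, 900.0)                     # interior initially cooler than the furnace (degC)
T_surf = 1100.0                            # imposed surface temperature (degC)
hold_time = 2 * 3600.0                     # assumed 2 h holding time

t = 0.0
while t < hold_time:
    T[0] = T_surf                          # prescribed surface temperature
    # Explicit FTCS update of the interior nodes: dT/dt = alpha * d2T/dx2
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                          # symmetry (zero flux) at the core
    t += dt

print(f"surface = {T[0]:.1f} C, core = {T[-1]:.1f} C, gradient = {T[0] - T[-1]:.1f} C")
```

With these assumed values the core still lags the surface after the holding time, which is exactly the non-uniform initial field that an isothermal simulation ignores.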

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 56
162 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results when compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning the long-term dependencies, owing to their memory function, that traditional neural networks fail to capture. In this study, a simple LSTM, a stacked LSTM and a masked LSTM based model are discussed with respect to varying input sequence lengths (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which has resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time series models (ARIMA), shallow neural networks and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored further within the asset management industry.
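
A minimal sketch of the stacked-LSTM variant described above is given below, assuming TensorFlow/Keras. The seven-day window length, layer sizes, and synthetic price series are illustrative assumptions, and the EMD preprocessing is only indicated in a comment rather than implemented.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, lookback=7):
    """Turn a 1D price series into (samples, lookback, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

# Synthetic stand-in for a (possibly EMD-denoised) daily bond price series
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 0.1, 2000)) + 100.0
prices = (prices - prices.mean()) / prices.std()   # simple normalisation

X, y = make_windows(prices, lookback=7)            # seven-day input sequences

# Stacked LSTM: two recurrent layers followed by a linear output for the next price
model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(7, 1)),
    LSTM(16),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1, verbose=0)

print("in-sample MSE:", model.evaluate(X, y, verbose=0))
```

In the approach described above, each EMD intrinsic mode function (and the technical indicators) would enter as additional input channels, and a masking layer would handle sequences of unequal length.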

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 106
161 Stability Study of Hydrogel Based on Sodium Alginate/Poly (Vinyl Alcohol) with Aloe Vera Extract for Wound Dressing Application

Authors: Klaudia Pluta, Katarzyna Bialik-Wąs, Dagmara Malina, Mateusz Barczewski

Abstract:

Hydrogel networks, due to their unique properties, are highly attractive materials for wound dressing. The three-dimensional structure of hydrogels provides tissues with optimal moisture, which supports the wound healing process. Moreover, a characteristic feature of hydrogels is their absorption properties, which allow for the uptake of wound exudates. For the fabrication of biomedical hydrogels, a combination of natural polymers ensuring biocompatibility and synthetic ones providing adequate mechanical strength is often used. Sodium alginate (SA) is one of the polymers widely used in wound dressing materials because it exhibits excellent biocompatibility and biodegradability. However, due to their poor strength, alginate-based hydrogel materials are often enhanced by the addition of another polymer such as poly(vinyl alcohol) (PVA). This paper concentrates on the preparation of a sodium alginate/poly(vinyl alcohol) hydrogel system incorporating Aloe vera extract and glycerin as a wound healing material, with particular focus on the role of composition in structure, thermal properties, and stability. Briefly, the hydrogel preparation is based on a chemical cross-linking method using poly(ethylene glycol) diacrylate (PEGDA, Mn = 700 g/mol) as a crosslinking agent and ammonium persulfate as an initiator. In vitro degradation tests of the SA/PVA/AV hydrogels were carried out in Phosphate-Buffered Saline (pH 7.4) as well as in distilled water. Hydrogel samples were first cut into half-gram pieces (in triplicate) and immersed in the immersion fluid. All specimens were then incubated at 37°C, and the pH and conductivity values were measured at time intervals. The post-incubation fluids were analyzed using SEC/GPC to check the content of oligomers. The separation was carried out at 35°C on a poly(hydroxy methacrylate) column (dimensions 300 x 8 mm). A 0.1 M NaCl solution at a flow rate of 0.65 ml/min was used as the mobile phase. Three injections with a volume of 50 µl were made for each sample. The thermogravimetric data of the prepared hydrogels were collected using a Netzsch TG 209 F1 Libra apparatus. Samples with masses of about 10 mg were weighed separately in Al₂O₃ crucibles and then heated from 30°C to 900°C at a scanning rate of 10 °C·min⁻¹ under a nitrogen atmosphere. Based on the conducted research, a fast and simple method was developed to produce a potential wound dressing material containing sodium alginate, poly(vinyl alcohol) and Aloe vera extract. As a result, transparent and flexible SA/PVA/AV hydrogels were obtained. The degradation experiments indicated that most of the samples immersed in PBS as well as in distilled water did not degrade throughout the whole incubation time.

Keywords: hydrogels, wound dressings, sodium alginate, poly(vinyl alcohol)

Procedia PDF Downloads 138
160 Evolution of Microstructure through Phase Separation via Spinodal Decomposition in Spinel Ferrite Thin Films

Authors: Nipa Debnath, Harinarayan Das, Takahiko Kawaguchi, Naonori Sakamoto, Kazuo Shinozaki, Hisao Suzuki, Naoki Wakiya

Abstract:

Nowadays, spinel ferrite magnetic thin films have drawn considerable attention due to their interesting magnetic and electrical properties combined with enhanced chemical and thermal stability. Spinel ferrite magnetic films can be implemented in magnetic data storage, sensors, and spin filters or microwave devices. It is well established that the structural, magnetic and transport properties of magnetic thin films depend on the microstructure. Spinodal decomposition (SD) is a phase separation process whereby a material system spontaneously separates into two phases with distinct compositions. A periodic microstructure is the characteristic feature of SD. Thus, SD can be exploited to control the microstructure at the nanoscale level. In bulk spinel ferrites with the general formula MₓFe₃₋ₓO₄ (M = Co, Mn, Ni, Zn), phase separation via SD has been reported only for cobalt ferrite (CFO); however, long post-annealing times are required for the spinodal decomposition to occur. We have found that SD occurs in CFO thin films without any post-deposition annealing process if a magnetic field is applied during thin film growth. Dynamic Aurora pulsed laser deposition (PLD) is a specially designed PLD system through which an in-situ magnetic field (up to 2000 G) can be applied during thin film growth. The in-situ magnetic field suppresses the recombination of ions in the plume. In addition, the peak intensities of the ions in the spectra of the plume also increase when a magnetic field is applied to the plume. As a result, ions with high kinetic energy strike the substrate; thus, ion impingement occurs under the magnetic field during thin film growth. The driving force of SD is the ion impingement towards the substrate that is induced by the in-situ magnetic field. In this study, we report on the occurrence of phase separation through SD and the evolution of the microstructure after phase separation in spinel ferrite thin films. The surface morphology of the phase-separated films shows a checkerboard-like domain structure. The cross-sectional microstructure of the phase-separated films reveals columnar-type phase separation. Herein, the decomposition wave propagates in the lateral direction, as confirmed by the lateral composition modulations in the spinodally decomposed films. Large magnetic anisotropy has been found in spinodally decomposed nickel ferrite (NFO) thin films. This approach demonstrates that a magnetic field is also an important thermodynamic parameter for inducing phase separation by enhancing up-hill diffusion in thin films. This thin film deposition technique could be a more efficient alternative for the fabrication of self-organized, phase-separated thin films and could be employed to control the microstructure at the nanoscale level.
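
As a generic illustration of how up-hill diffusion produces the periodic, checkerboard-like microstructure characteristic of spinodal decomposition, the sketch below integrates a minimal 2D Cahn-Hilliard model. It is a textbook toy model with assumed, dimensionless parameters, not a simulation of the magnetic-field-assisted PLD growth reported here.

```python
import numpy as np

# Minimal 2D Cahn-Hilliard model: composition c evolves by up-hill diffusion
# into a periodic two-phase microstructure, the hallmark of spinodal decomposition.
N, dx, dt = 128, 1.0, 0.01
M, kappa = 1.0, 1.0                          # mobility and gradient-energy coefficient (assumed)

rng = np.random.default_rng(0)
c = 0.5 + 0.02 * rng.standard_normal((N, N))  # near-critical composition plus small noise

def lap(f):
    """Periodic five-point Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(5000):
    # Chemical potential mu = df/dc - kappa * lap(c), with double-well f(c) = c^2 (1 - c)^2
    mu = 2.0 * c * (1 - c) * (1 - 2 * c) - kappa * lap(c)
    c += dt * M * lap(mu)                     # dc/dt = M * lap(mu)  (up-hill diffusion)

# After enough steps, c separates into interleaved c ~ 0 and c ~ 1 domains
print("composition range:", c.min(), c.max())
```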

Keywords: Dynamic Aurora PLD, magnetic anisotropy, spinodal decomposition, spinel ferrite thin film

Procedia PDF Downloads 340
159 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging in simple media to present-day capability to handle full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the “bend-ray” method. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. This flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging that produces an accurate 3D acoustic image based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with individual partial images.
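
The phase-shift building block underlying the PSPI extrapolation can be sketched in a few lines: each frequency slice of the wavefield is taken to the horizontal-wavenumber domain and multiplied by exp(i·kz·Δz), with kz = sqrt(ω²/v² − kx²), for a locally constant velocity. The snippet below is a generic, single-reference-velocity illustration (not the full PSPI scheme with interpolation between reference velocities), and the grid parameters and toy wavefield are assumptions.

```python
import numpy as np

def phase_shift_step(P_w_kx, omega, kx, v, dz):
    """Extrapolate one frequency slice P(omega, kx) by one depth step dz in a constant-velocity layer."""
    kz2 = (omega / v) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))           # keep propagating waves, drop the evanescent part
    return P_w_kx * np.exp(1j * kz * dz) * (kz2 > 0)

# Assumed acquisition grid: 256 receivers at 1 mm spacing; ultrasound around 1 MHz
nx, dx, dz = 256, 1.0e-3, 1.0e-3
v = 1500.0                                       # approximate speed of sound in soft tissue, m/s
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)        # horizontal wavenumbers, rad/m
omega = 2 * np.pi * 1.0e6                        # a single angular frequency for illustration

# A toy recorded wavefield slice at the current depth
# (in practice: FFT of the recorded data over time and over the receiver axis)
rng = np.random.default_rng(0)
P = np.fft.fft(rng.standard_normal(nx))          # P(omega, kx) at the current depth

P_next = phase_shift_step(P, omega, kx, v, dz)   # wavefield one depth step deeper
print(np.abs(P_next).max())
# Repeating this step layer by layer (with several reference velocities and interpolation, as in
# PSPI) for both the back-propagated data and the forward-propagated wavelet, and applying the
# imaging condition at each depth, yields the partial RTM image for one shot.
```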

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 49
158 Obtaining Composite Cotton Fabric by Cyclodextrin Grafting

Authors: U. K. Sahin, N. Erdumlu, C. Saricam, I. Gocek, M. H. Arslan, H. Acikgoz-Tufan, B. Kalav

Abstract:

Finishing is an important part of fabric processing by which a wide range of features are imparted to greige or colored fabrics for various end-uses. In particular, by adding nano-scaled particles to the fabric structure, composite fabrics, a kind of composite material, can be obtained. Composite materials, generally shortened to composites (in other words, composition materials), are engineered or naturally occurring materials made from two or more component materials with significantly different physical, mechanical or chemical characteristics that remain separate and distinct at the macroscopic or microscopic scale within the end-product structure. Therefore, finishing, one of the fundamental methods applied to fabrics to obtain composite fabrics with many functionalities, was used in the current study for this purpose. However, regardless of the finishing materials applied, the effective life of the finished product in offering the desired feature is short, since the durability of finishes on the material is limited. Any increase in the durability of these finishes would extend the useful life of the textiles and result in more satisfied users. Therefore, in this study, since higher durability was desired for the finishing materials fixed on the fabrics, nano-scaled, hollow-structured cyclodextrins were chemically grafted onto conventional cotton fabrics with the help of the finishing technique in order to be fixed permanently. In this way, a processed and functionalized base fabric was obtained that has the potential to be treated in subsequent processes with many different finishing agents and nanomaterials. Henceforth, this fabric can be used as a multi-functional fabric due to the ability of cyclodextrins to capture molecules/particles via physical/chemical means. In this study, scoured, rinsed and bleached woven plain-weave 100% cotton fabrics were utilized, because cotton textiles are among the most demanded textile products for daily use. Cotton fabric samples were immersed in treatment baths containing β-cyclodextrin and 1,2,3,4-butanetetracarboxylic acid; to reduce the curing temperature, the catalyst sodium hypophosphite monohydrate was used. All impregnated fabric samples were pre-dried. The grafting reaction was performed in the dry state. The treated and cured fabric samples were rinsed with warm distilled water and dried. The samples were dried for 4 h and weighed before and after finishing and rinsing. The stability and durability of β-cyclodextrins on the fabric surface against external factors such as washing, as well as the strength of the functionalized fabric in terms of tensile and tear strength, were tested. The presence and homogeneity of distribution of β-cyclodextrins on the fabric surface were also characterized.

Keywords: cotton fabric, cyclodextrin, improved durability, multifunctional composite textile

Procedia PDF Downloads 270