Search results for: Photoluminescence Spectroscopy (PL) and Field Emission Scanning Electron Microscopy (FESEM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24532

1522 Liraglutide Augments Extra Body Weight Loss after Sleeve Gastrectomy without Change in Intrahepatic and Intra-Pancreatic Fat in Obese Individuals: Randomized, Controlled Study

Authors: Ashu Rastogi, Uttam Thakur, Jimmy Pathak, Rajesh Gupta, Anil Bhansali

Abstract:

Introduction: Liraglutide is known to induce weight loss and metabolic benefits in obese individuals. However, its effects after sleeve gastrectomy are not known. Methods: People with obesity (BMI > 27.5 kg/m2) underwent laparoscopic sleeve gastrectomy (LSG). Subsequently, participants were randomized to receive either 0.6 mg liraglutide subcutaneously daily from week 6 post-surgery until week 24 (L-L group) or placebo (L-P group). Patients were assessed before surgery (baseline) and at 6, 12, 18 and 24 weeks after surgery for height, weight, waist and hip circumference, BMI, body fat percentage, HbA1c, fasting C-peptide, fasting insulin, HOMA-IR, HOMA-β and GLP-1 levels (after a standard OGTT). Abdominal MRI was performed prior to surgery and at 24 weeks postoperatively to estimate intra-pancreatic and intra-hepatic fat content. Outcome measures: Primary outcomes were changes in the metabolic variables of fasting and stimulated GLP-1 levels, insulin, C-peptide and plasma glucose. Secondary variables were indices of insulin resistance (HOMA-IR, Matsuda index) and pancreatic and hepatic steatosis. Results: Thirty-eight patients undergoing LSG were screened and 29 participants were enrolled. Two patients withdrew consent and one patient died of an acute coronary event. Twenty-six patients were randomized and their data analysed. Median BMI was 40.73±3.66 and 46.25±6.51, and EBW was 49.225±11.14 and 651.48±4.85 in the L-P and L-L groups, respectively. Baseline FPG was 132±51.48 and 125±39.68; fasting insulin 21.5±13.99 and 13.15±9.20; fasting GLP-1 2.4±0.37 and 2.4±0.32; AUC GLP-1 340.78±44 and 332.32±44.1; and HOMA-IR 7.0±4.2 and 4.42±4.5 in the L-P and L-L groups, respectively. EBW loss was 47±13.20 and 65.59±24.20 (p<0.05) in the placebo and liraglutide groups, respectively. However, we did not observe inter-group differences in metabolic parameters despite significant intra-group changes after 6 months of LSG. Intra-pancreatic fat prior to surgery was 3.21±1.7 and 2.2±0.9 (p=0.38), decreasing to 2.14±1.8 and 1.06±0.8 (p=0.25) at 6 months in the L-P and L-L groups, respectively. Similarly, intra-hepatic fat was 1.97±0.27 and 1.88±0.36 (p=0.361) at baseline, decreasing to 1.14±0.44 and 1.36±0.47 (p=0.465) at 6 months in the L-P and L-L groups, respectively. Conclusion: Liraglutide augments extra body weight loss after sleeve gastrectomy. A decrease in intra-pancreatic and intra-hepatic fat is noticed after bariatric surgery without additive benefit of liraglutide administration.
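The insulin-resistance indices cited above follow standard formulas: HOMA-IR is fasting glucose (mg/dL) times fasting insulin (µU/mL) divided by 405, and HOMA-β is 20 times fasting insulin divided by (fasting glucose in mmol/L minus 3.5). A minimal sketch with illustrative inputs (not trial data) is shown below; plugging in the baseline L-P values reproduces the reported HOMA-IR of about 7.0.

```python
# Standard surrogate indices of insulin resistance and beta-cell function.
# Input values are illustrative, not patient-level data from the study.
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    # HOMA-IR = glucose (mg/dL) x insulin (uU/mL) / 405
    return (glucose_mg_dl * insulin_uU_ml) / 405.0

def homa_beta(glucose_mg_dl, insulin_uU_ml):
    # HOMA-beta = 20 x insulin / (glucose in mmol/L - 3.5)
    glucose_mmol = glucose_mg_dl / 18.0
    return (20.0 * insulin_uU_ml) / (glucose_mmol - 3.5)

print(homa_ir(132.0, 21.5))    # ~7.0, in line with the baseline L-P group value
print(homa_beta(132.0, 21.5))
```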

Keywords: sleeve gastrectomy, liraglutide, intra-pancreatic fat, insulin

Procedia PDF Downloads 193
1521 Representation and Reality: Media Influences on Japanese Attitudes towards China

Authors: Shuk Ting Kinnia Yau

Abstract:

As China has become more and more influential in the global and geo-political arena, mutual understanding between Japan and China has also become a topic of paramount importance. There have always been tensions between the two countries, but unfortunately, each country tends to blame the other for fanning emotions. This research will investigate portrayals of China and the Chinese people in Japanese media such as newspapers, TV news, TV drama, and cinema over the period under study, focusing on media sources that have particularly wide viewership or readership. By doing so, it attempts to detect any general trends in the positive or negative character of such portrayals and to see if they correlate with the results of surveys of attitudes among the general population. To the degree that correlations may be found, the question arises as to whether the media portrayals are a reflection of societal attitudes towards the Chinese, on one hand, or may be playing a role in promoting such attitudes, on the other. The relationship here is, without doubt, more complex than a simple one-way relationship of cause and effect, but indications of some direction of causality may be suggested by trends in one occurring before or after the other. Evidence will also be sought of possible longer-term trends in media portrayals of China and the Chinese people in Japan during the post-2012 period, i.e., Abe Shinzo’s second term as prime minister, in comparison to earlier periods. Perceptions of Japan’s view of China and the Chinese, both inside and outside the scholarly world, tend to be oversimplified and are often far from comprehensive. This research calls attention to the role played by the media in promoting or undermining Sino-Japanese relations. By analyzing the nature and background of images of China and the Chinese people presented in the Japanese media, especially under the new Abe regime, this research seeks to promote a more balanced and comprehensive understanding of attitudes in Japanese society towards its gigantic neighbor. Scholars have seen the increasingly fragile Sino-Japanese relationship as inseparable from the real-world political conflicts that have become more frequent in recent years and have sought to draw a correlation between the two. The influence of the media, however, remains a mostly under-explored domain in the academic world. Against this background, this research aims to provide an enriched scholarly understanding of Japan’s perception of China by investigating to what extent such perception can be seen to be affected by subjective or selective forms of presentation of China found in the Japanese media, or vice versa.

Keywords: Abe Shinzo, China, Japan, media

Procedia PDF Downloads 309
1520 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often constrained to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to fully polarimetric data would therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
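As a rough illustration of the kind of augmentation network described above (not the authors' architecture), the sketch below maps dual/hybrid-polarimetric channels to pseudo full-polarimetric channels and combines two loss terms; the channel counts, layer sizes, and loss weight are assumptions chosen only to make the example concrete.

```python
# Minimal sketch: a CNN mapping hybrid/dual-pol channels to pseudo full-pol channels,
# trained with a composite loss (pixel fidelity plus a total-power term).
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    def __init__(self, in_ch=2, out_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=0.1):
    # Pixel-wise fidelity plus a constraint on total backscattered power (span),
    # mimicking the idea of controlling several scattering properties at once.
    mse = nn.functional.mse_loss(pred, target)
    span_loss = nn.functional.l1_loss(pred.sum(dim=1), target.sum(dim=1))
    return mse + alpha * span_loss

model = PolAugmentCNN()
x = torch.randn(8, 2, 64, 64)        # batch of dual/hybrid-pol patches (illustrative)
y = torch.randn(8, 4, 64, 64)        # corresponding full-pol reference patches
loss = composite_loss(model(x), y)
loss.backward()
```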

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 90
1519 Modulation of the Innate Immune Response in Bovine Udder Tissue by Epigenetic Modifiers

Authors: Holm Zerbe, Laura Macias, Hans-Joachim Schuberth, Wolfram Petzl

Abstract:

Mastitis is among the most important production diseases in cows. It accounts for a large part of antimicrobial drug use in the dairy industry worldwide. Owing to imminent regulations to reduce the use of antimicrobial drugs in livestock, new ways for therapy and prophylaxis of mastitis are needed. Recently, epigenetic regulation of inflammation by chromatin modifications has increasingly drawn attention. Some epigenetic modifiers have already been approved for use in humans; however, little is known about their actions in the bovine system. The aim of our study was to investigate whether three selected epigenetic modifiers (Vitamin D3, SAHA and S2101) influence the initial immune response towards mastitis pathogens in bovine udder tissue in vitro. Tissue explants of the teat cistern and udder parenchyma were collected from 21 cows and were incubated for 36 hours in the absence and presence of epigenetic modifiers. Additionally, the tissue was stimulated with heat-inactivated particles of Escherichia coli and Staphylococcus aureus, which are regarded as two of the most important mastitis pathogens. After incubation, the explants were tested by RT-qPCR for transcript abundances of immune-related candidate genes. Gene expression was validated in culture supernatants by an AlphaLISA assay. Furthermore, the culture supernatants were analyzed for their chemotactic capacity through a chemotaxis assay. Statistical analysis of data was performed with the program ‘R’ version 3.2.3. Vitamin D3 had no effect on the immune response of udder tissue in vitro after stimulation with mastitis pathogens. The epigenetic modifiers SAHA and S2101, however, significantly blocked the pathogen-induced upregulation of CXCL8, TNFα, S100A9 and LAP (P < 0.05). The regulation of IL10 was not affected by treatment with SAHA and S2101. Transcript abundances for CXCL8 were reflected by IL8 contents and chemotactic activity in culture supernatants. In conclusion, these data show the potential of epigenetic modifiers (SAHA and S2101) to block overshooting inflammation in the udder. Thus, epigenetic modifiers may serve in the future as immune modulators for the treatment and/or prophylaxis of clinical mastitis. (Funded by Deutsche Forschungsgemeinschaft PE 1495/2-1.)

Keywords: mastitis, cattle, epigenetics, immunomodulation

Procedia PDF Downloads 235
1518 Role of Dispositional Affect in Relationship between Life Events and Life Satisfaction among Adolescents

Authors: Milica Lazic, Jovana Jestrovic

Abstract:

The aim of this research is to examine the moderating role of positive and negative affect, defined as traits, in the relationship between the number of stressful life events to which an individual is exposed and life satisfaction. The tendencies to experience positive and negative emotions are considered relatively independent, and life satisfaction depends on the presence and intensity of emotions of different valence. However, the role of positive and negative affect can be much more complex: it can change the direction and/or intensity of the correlation between the number of stressful life events and life satisfaction. Thus, this question is important for two reasons: (i) better comprehension of the inconsistent results on the strength of the correlation between stressful events and life satisfaction, and (ii) verification of the conditions under which positive and negative affect play a protective role, and of those under which positive and/or negative affect is a vulnerability factor. Longitudinal data were collected in two waves from 660 adolescents. First, participants completed the Positive and Negative Affect Schedule. A year later, the Life Events Questionnaire, which measures the number of stressful events in the past six months, and the Satisfaction with Life Scale were administered. The data were analyzed using hierarchical regression analyses with a three-way interaction term. The results show that the number of life events, positive affect and negative affect all contribute to the level of life satisfaction. The moderation analysis shows a significant three-way interaction of the number of life events with both positive and negative affect. Individuals who report a high level of positive affect report being moderately to highly satisfied with their lives, regardless of the number of stressors to which they are exposed and of how often they experience negative emotions. Individuals who often experience negative emotions and rarely positive ones report the lowest level of life satisfaction, and this does not change with the number of stressors they were exposed to. Individuals who report rarely experiencing both positive and negative emotions report different levels of life satisfaction depending on the number of stressors they were exposed to: under the influence of numerous stressors, their level of life satisfaction is low and equal to that of individuals who often experience negative and rarely positive emotions. The results of this research show that the tendency to frequently experience positive emotions is a protective factor when individuals are exposed to a high number of stressors, whereas the tendency to rarely experience positive emotions presents a vulnerability factor. Conclusions and practical implications are further discussed.
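A minimal sketch of the analysis described above, a hierarchical regression in which the three-way interaction of life events, positive affect, and negative affect predicts life satisfaction; the simulated data frame is illustrative and does not reproduce the study's sample.

```python
# Hierarchical regression with a three-way interaction (events x positive x negative affect).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 660
df = pd.DataFrame({
    "events": rng.poisson(3, n),              # number of stressful life events
    "pos_affect": rng.normal(3.5, 0.7, n),    # trait positive affect
    "neg_affect": rng.normal(2.2, 0.7, n),    # trait negative affect
})
df["life_sat"] = (20 - 0.8 * df.events + 2.0 * df.pos_affect - 1.5 * df.neg_affect
                  + 0.15 * df.events * df.pos_affect + rng.normal(0, 2, n))

# Step 1: main effects only; Step 2: add all two- and three-way interaction terms.
step1 = smf.ols("life_sat ~ events + pos_affect + neg_affect", df).fit()
step2 = smf.ols("life_sat ~ events * pos_affect * neg_affect", df).fit()
print(f"R2 step 1: {step1.rsquared:.3f}, R2 step 2: {step2.rsquared:.3f}")
print(step2.params["events:pos_affect:neg_affect"])   # the three-way interaction coefficient
```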

Keywords: life events, life satisfaction, subjective well-being, positive and negative affect

Procedia PDF Downloads 296
1517 Basil Plants Attract and Benefit Generalist Lacewing Predator Ceraeochrysa cubana Hagen (Neuroptera: Chrysopidae) by Providing Nutritional Resources

Authors: Michela C. Batista Matos, Madelaine Venzon, Elem F. Martins, Erickson C. Freitas, Adenir V. Teodoro, Maira C. M. Fonseca, Angelo Pallini

Abstract:

Aromatic plant species are capable of producing and releasing volatile organic compounds spontaneously, which can repel or attract beneficial insects such as generalist predators of herbivores. Attractive plants could be used as crop companion plants to promote biological control of pests. In order to select such plants for future use in horticulture fields, we assessed the attractiveness of the aromatic plants Ocimum basilicum L. (basil), Mentha piperita L. (peppermint), Melissa officinalis L. (lemon balm) and Cordia verbenacea DC (black sage) to adults of the generalist lacewing predator Ceraeochrysa cubana Hagen (Neuroptera: Chrysopidae). This predator is commonly found in agroecosystems in Brazil, where it feeds on aphids, mites, small caterpillars, insect eggs and scales. We further tested the effect of these plant species on the survival, development and oviposition of C. cubana. Finally, we evaluated the survival of larvae and adults of C. cubana when only flowers of basil were offered. Females of C. cubana were attracted to basil but not to the remaining aromatic plants. Larval survival was higher when individuals had access only to basil leaves than when they had access to peppermint, lemon balm, black sage or water. Adult survival on the leaf treatments and on water was no longer than three days. Flowers of basil enhanced predator larval survival, yet the larvae did not reach adulthood. Adults fed on basil flowers lived longer than those given only water, but they did not reproduce. Basil is a promising aromatic plant species to be considered for conservation biological control programs. Besides being attractive to adults of the generalist predator, it benefits larvae and adults by providing nutritional resources when prey or other resources are absent. Financial support: CNPq, FAPEMIG and CAPES (Brazil).

Keywords: basil, chrysopidae, conservation biological control, companion plants

Procedia PDF Downloads 257
1516 The Impact of Passive Design Factors on House Energy Efficiency for New Cities in Egypt

Authors: Mahmoud Mourad, Ahmad Hamza H. Ali, S. Ookawara, Ali Kamel Abdel-Rahman, Nady M. Abdelkariem

Abstract:

The energy consumption of a house can be affected simultaneously by many building design factors related to its main architectural features, building elements and materials. This study focuses on the impact of passive design factors on the annual energy consumption of a suggested prototype house for single-family detached houses of 240 m2 on two floors, each floor of 120 m2, in the new Egyptian cities of Alexandria, Cairo, Siwa, Assiut and Aswan, which represent five different climatic zones (Northern coast, Northern Upper Egypt, desert region, Southern Upper Egypt and South Egypt), respectively. The study presents the effect of the passive design factors affecting building energy consumption: building orientation, building materials (walls, roof and slabs), building type (residential, educational, commercial), building occupancy (type of occupants, number of occupants, age), building landscape and site selection, building envelope and fenestration (glazing material, shading), and building plan form. This information can be used to estimate the approximate saving in energy consumption that would result from a change in the design datum for future housing development, and to identify the major design problems for energy efficiency. To achieve the above objective, this paper presents a study of the factors affecting building energy consumption in hot arid areas in new Egyptian cities across five different climatic zones, followed by a definition of the energy needs for different uses in the suggested prototype house. Consequently, a detailed analysis of the available renewable energy technologies applicable to the suggested home is presented, together with a calculation of the yearly distribution of the energy required for this home. The results obtained from the building annual energy analyses show that architectural passive design factors save about 35% of the annual energy consumption. They also show that passive cooling techniques save about 45%, and renewable energy systems about 40%, of the annual energy needs of the proposed home, depending on the city's location within the climatic zones.

Keywords: architecture passive design factors, energy efficient homes, Egypt new cites, renewable energy technologies

Procedia PDF Downloads 401
1515 Creative Mathematically Modelling Videos Developed by Engineering Students

Authors: Esther Cabezas-Rivas

Abstract:

Ordinary differential equations (ODE) are a fundamental part of the curriculum for most engineering degrees, and students typically have difficulties with the subsequent abstract mathematical calculations. To enhance their motivation and take advantage of the fact that they are digital natives, we propose a teamwork project that includes the creation of a video. It should explain how to model a real-world problem mathematically, transforming it into an ODE, which should then be solved using the tools learned in the lectures. This idea was implemented with first-year students of a BSc in Engineering and Management during the period of online learning caused by the outbreak of COVID-19 in Spain. Each group of 4 students was assigned a different topic: model a hot water heater, search for the shortest path, design the quickest route for delivery, cooling a computer chip, the shape of the hanging cables of the Golden Gate, detecting land mines, rocket trajectories, etc. These topics were worked out through two complementary channels: a written report describing the problem and a 10-15 min video on the subject. The report includes the following items: description of the problem to be modeled, detailed derivation of the ODE that models the problem, its complete solution, and interpretation in the context of the original problem. We report the outcomes of this contextual teaching and active learning experience, including the feedback received from the students. They highlighted the encouragement of creativity and originality, which are skills that they do not typically relate to mathematics. Additionally, the video format (unlike a common presentation) has the advantage of allowing them to critically review and self-assess the recording, repeating some parts until the result is satisfactory. As a side effect, they felt more confident about their oral abilities. In short, students agreed that they had fun preparing the video. They recognized that it was tricky to combine deep mathematical content with entertainment since, without the latter, it is impossible to engage people to view the video till the end. Despite this difficulty, after the activity, they claimed to understand the material better, and they enjoyed showing the videos to family and friends during and after the project.
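As an illustrative instance of one of the listed topics (cooling a computer chip), the sketch below sets up Newton's law of cooling, dT/dt = -k(T - T_env), and solves it numerically; the parameter values are assumptions chosen only to make the example concrete.

```python
# Model a cooling chip as an ODE and solve it numerically.
import numpy as np
from scipy.integrate import solve_ivp

k, T_env = 0.35, 25.0          # cooling constant (1/min) and ambient temperature (degC), assumed

def cooling(t, T):
    return -k * (T - T_env)    # the ODE obtained from the physical model

sol = solve_ivp(cooling, t_span=(0.0, 30.0), y0=[80.0], t_eval=np.linspace(0, 30, 7))
for t, T in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f} min   T = {T:5.1f} degC")
```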

Keywords: active learning, contextual teaching, models in differential equations, student-produced videos

Procedia PDF Downloads 145
1514 Synergistic and Antagonistic Interactions between Garlic Extracts and Metformin in Diabetes Treatment

Authors: Ikram Elsiddig, Yacouba Djamila, Amna Hamad

Abstract:

The worldwide increase in the use of herbs as medicine, with or without prescription medications, raises the potential for interactions between herbal products and conventional medicines; therefore, more research on herb-drug interactions is needed. Hyperglycemia has long been treated with several medicinal plants. Allium sativum, belonging to the Liliaceae family, is well known for its medicinal uses in African traditional medicine; it is used for treating many human diseases, mainly diabetes, high cholesterol and high blood pressure. The purpose of this study is to determine the interaction effect between A. sativum bulb extracts and the drug metformin used in diabetes treatment. The in vitro and in vivo evaluations were conducted by measuring glucose uptake in isolated rat hemidiaphragm tissue and by estimating glucose tolerance in glucose-loaded Wistar albino rats. The results showed that the petroleum ether, chloroform and ethyl acetate extracts had glucose-uptake activities in isolated rat hemidiaphragms of 24.11 mg/g, 19.07 mg/g and 15.66 mg/g, compared to 17 mg/g for metformin. These activities were reduced to 17.8 mg/g, 13.59 mg/g and 14.46 mg/g after combination with metformin, while the activity of metformin itself was reduced to 13.59 mg/g, 14.46 mg/g and 12.71 mg/g in combination with the chloroform and ethyl acetate extracts. This decrease in activity could be due to a herb-drug interaction between the extracts of A. sativum bulb and metformin. The interaction between the A. sativum extract and metformin was also shown by the in vivo study on induced hyperglycemic rats. After administration of 200 mg/kg, the blood glucose level increased by 47.2% and 17.7% at the first and second hour, compared with increases of 82.6% and 76.7% in the control group. At the fourth hour, the glucose level fell 3.4% below normal, while the control group continued to increase by 68.2%. At the 400 mg/kg dose, blood glucose increased by 31.5% at the first hour, and at the second and fourth hours the glucose level fell below normal by 3.2% and 30.4%. After combination, the activity was found to be lower than that of the extract alone at both the high and low dose: at the first and second hour, the glucose level increased by 50.4% and 21.2%, and at the fourth hour it fell 14% below normal. Therefore, A. sativum could be a potential anti-diabetic agent when used alone, and it may be preferable to use the garlic extract alone rather than in combination with metformin in diabetes treatment.

Keywords: antagonistic, garlic, metformin, synergistic

Procedia PDF Downloads 181
1513 Efficacy and Safety of Eucalyptus for Relief Cough Symptom: A Systematic Review and Meta-Analysis

Authors: Ladda Her, Juntip Kanjanasilp, Ratree Sawangjit, Nathorn Chaiyakunapruk

Abstract:

Cough is a common symptom of respiratory tract infections and non-infectious respiratory conditions; the duration of cough indicates the classification and severity of disease. Herbal medicines can be used as an alternative to drugs for relief of cough symptoms in acute and chronic disease. Eucalyptus has been used for reducing cough, with evidence suggesting it has an active role in reducing airway inflammation. The present study aims to evaluate the efficacy and safety of eucalyptus for relief of cough symptoms in respiratory disease. Method: The Cochrane Library, MEDLINE (PubMed), Scopus, CINAHL, Springer, ScienceDirect, ProQuest, and THAILIS databases were searched from their inception until 01/02/2019 for randomized controlled trials assessing the efficacy and safety of eucalyptus for reducing cough. Methodological quality was evaluated using the Cochrane risk of bias tool; two reviewers in our team screened eligibility and extracted data. Result: Six studies were included in the review and five studies were included in the meta-analysis, comprising 1,911 persons, with one study in children and five in adults; participants were between 1 and 80 years old. Eucalyptus was used as a single herb (n=2) and in combination with other herbs (n=4). All of the studies compared eucalyptus with placebo or standard treatment for efficacy and safety; the eucalyptus dosage forms in the studies included capsules, spray, and syrup. Heterogeneity was 32.44; a random-effects model was used (I² = 1.2%, χ² = 1.01; p = 0.314). Eucalyptus showed a statistically significant reduction in cough symptoms compared with placebo (n = 402, RR 1.40, 95% CI [1.19, 1.65], p < 0.0001). Adverse events (AEs) were of mild to moderate intensity, mostly gastrointestinal. The methodological quality of the included trials was overall poor. Conclusion: Eucalyptus appears to be beneficial and safe for relieving cough frequency in respiratory diseases, and it was safe as monotherapy or in combination with other herbs. However, the evidence is inconclusive due to the limited quality of the trials, and well-designed trials evaluating the effectiveness for reducing cough symptoms in humans are needed.
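A minimal sketch of the random-effects pooling used in such a meta-analysis (DerSimonian-Laird inverse-variance weighting of log risk ratios); the per-study counts below are made up for illustration and are not the included trials' data.

```python
# DerSimonian-Laird random-effects pooling of risk ratios (illustrative counts).
import numpy as np
from scipy.stats import norm

# events / total in treatment and control arms for three hypothetical studies
a, n1 = np.array([40, 55, 30]), np.array([60, 80, 50])
c, n2 = np.array([28, 40, 22]), np.array([60, 80, 50])

log_rr = np.log((a / n1) / (c / n2))
var = 1/a - 1/n1 + 1/c - 1/n2                      # variance of each log RR

w_fixed = 1 / var
q = np.sum(w_fixed * (log_rr - np.sum(w_fixed * log_rr) / w_fixed.sum()) ** 2)
df_q = len(log_rr) - 1
tau2 = max(0.0, (q - df_q) / (w_fixed.sum() - np.sum(w_fixed**2) / w_fixed.sum()))

w = 1 / (var + tau2)                               # random-effects weights
pooled = np.sum(w * log_rr) / w.sum()
se = np.sqrt(1 / w.sum())
ci = np.exp(pooled + np.array([-1, 1]) * norm.ppf(0.975) * se)
i2 = max(0.0, (q - df_q) / q) * 100 if q > 0 else 0.0
print(f"RR = {np.exp(pooled):.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], I^2 = {i2:.0f}%")
```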

Keywords: cough, eucalyptus, cineole, herbal medicine, systematic review, meta-analysis

Procedia PDF Downloads 152
1512 The Istrian Istrovenetian-Croatian Bilingual Corpus

Authors: Nada Poropat Jeletic, Gordana Hrzica

Abstract:

Bilingual conversational corpora represent a meaningful and the most comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingual research since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, the amount of each language used, the number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the design of the conversational Istrian Istrovenetian-Croatian Bilingual Corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, its public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and their language exposure and usage within their social networks. Recordings are being transcribed, phonologically adapted within a standardized orthographic form, coded and segmented (speech streams are being segmented into communication units based on syntactic criteria), and are being marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus currently consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020, by sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. Research on communities with societal bilingualism will contribute to the growing body of work on bilingualism and multilingualism, especially regarding topics such as language dominance, language attrition and loss, interference and code-switching.
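As a simplified illustration of how code-switching counts might be retrieved from CHAT-style transcripts, the sketch below assumes a reduced convention in which switched items carry an '@s' marker; real CHAT/CLAN coding and the CLAN tools themselves are considerably richer than this toy example.

```python
# Tally candidate code-switches per speaker in CHAT-style transcript lines.
import re
from collections import Counter

def count_switches(chat_lines):
    per_speaker = Counter()
    for line in chat_lines:
        if not line.startswith("*"):              # only main speaker tiers
            continue
        speaker, _, utterance = line.partition(":")
        switches = re.findall(r"\S+@s\b", utterance)   # words flagged as switched
        per_speaker[speaker.strip("*")] += len(switches)
    return per_speaker

sample = [
    "*MAR: xe rivà el bus@s ieri sera .",
    "*IVA: sì , ma il vozač@s no gaveva fretta .",
]
print(count_switches(sample))   # Counter({'MAR': 1, 'IVA': 1})
```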

Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology

Procedia PDF Downloads 145
1511 A Closer Look at Inclusion-For-All Approaches to Diversity Initiative Implementation

Authors: Payton Small

Abstract:

In response to increasing demographic diversity, many U.S. organizations have implemented diversity initiatives to increase the representation of women and ethnic minorities. While these initiatives aim to promote fairer and more positive outcomes for underrepresented minorities (URMs), widespread backlash against these policies can negatively impact the groups they are supposed to support. A recent theory-based analysis of best practices for instituting diversity policies proposes an "inclusion for all" approach that negotiates the oft-divergent goals and motivations of both marginalized and dominant group members in these contexts. Empirical work finds that "inclusion for all" strategies decrease Whites' tendency to implicitly associate diversity with exclusion and increase their personal endorsement of diversity initiatives. Similarly, Whites report higher belongingness when considering an inclusion-for-all approach to diversity versus a colorblind approach. While inclusion-for-all approaches may effectively increase Whites' responsiveness to diversity efforts, the downstream consequences of implementing these policies on URMs have yet to be explored. The current research investigated how inclusion-for-all diversity framing influences Whites' sensitivity to detecting discrimination against URMs as well as perceptions of reverse discrimination against Whites. Lastly, the current research looked at how URMs respond to inclusion-for-all diversity approaches. Three studies investigated the impact of inclusion-for-all diversity framing on perceptions of discrimination against Whites and URMs in a company setting. Two separate mechanisms by which exposure to an inclusion-for-all diversity statement might differentially influence perceptions of discrimination for URMs and Whites were also tested. In Studies 1 and 2, exposure to an inclusion-for-all diversity approach reduced Whites' concerns about reverse discrimination and heightened sensitivity to detecting discrimination against URMs. These effects were mediated by decreased concerns about zero-sum outcomes at the company. Study 3 found that racial minorities are concerned about increased discrimination at a company with an inclusion-for-all diversity statement and that this effect is mediated by decreased feelings of belonging at the company. In sum, companies that adopt an inclusion-for-all approach to diversity implementation reduce Whites' backlash and the negative downstream consequences associated with such backlash; however, racial minorities feel excluded and expect heightened experiences of discrimination at these same companies.
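A minimal sketch of the kind of mediation test reported above, estimating a bootstrapped indirect effect of framing on perceived discrimination through zero-sum concerns; the simulated data and effect sizes are illustrative assumptions, not the studies' data.

```python
# Bootstrapped indirect (mediation) effect: framing -> zero-sum concerns -> perceived discrimination.
import numpy as np

rng = np.random.default_rng(0)
n = 300
framing = rng.integers(0, 2, n)                   # 0 = control, 1 = inclusion-for-all
zero_sum = -0.5 * framing + rng.normal(size=n)    # hypothesised mediator
perceived = 0.6 * zero_sum + rng.normal(size=n)   # hypothesised outcome

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # x -> m path
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # m -> y path, controlling for x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                   # resample cases with replacement
    boot.append(indirect_effect(framing[idx], zero_sum[idx], perceived[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```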

Keywords: diversity, intergroup relations, organizational social psychology, zero-sum

Procedia PDF Downloads 133
1510 Responding to and Preventing Sexual and Gender Based Violence Related to Ragging, in University of Kelaniya: A Case Study

Authors: Anuruddhi Edirisinghe, Anusha Edirisinghe, Maithree Wicramasinghe, Sagarika Kannangara, Annista Wijayanayake

Abstract:

SGBV, which refers to acts inflicting physical, mental or sexual harm or suffering, or depriving a person of liberty, on the basis of gender or sexuality, is known to occur in various forms. Ragging in educational institutions can often be one such form of SGBV. Ragging-related SGBV is a growing problem despite the various legal, policy and programme initiatives introduced over the years. While the punishment of perpetrators through the criminal justice system is expected to have a deterrent effect, other strategies such as awareness-raising, attitudinal change, and the empowerment of students to say no to ragging and SGBV will lead to more enlightened attitudes about the practice in universities. Thus, effective regular prevention programmes are the need of the hour. The objective of the paper is to engage with the case of a female fresher subjected to verbal abuse, physical assault and sexual harassment due to events which started as a result of her wearing trousers to the university during the ragging season. The case came to the limelight when a complaint was made to the police and 10 students were arrested under the anti-ragging act. This led to divided opinions among the student population and a backlash from the student union. Simultaneously, it resulted in society demanding stricter implementation of laws and the punishment of perpetrators. The university authority appointed a task force comprising academics, non-academics, parents, community leaders, stakeholders and students to draw up an action plan to respond to the immediate situation as well as future prevention. The paper will also discuss the implementation of the task force's plan. The paper is based on interviews with those involved with the issue and on the experiences of the task force members, and is expected to provide an in-depth understanding of the intricacies and complications associated with dealing with a contentious problem such as ragging. Given the political and ethical issues involved with insider research as well as the sensationalism of the topic, maximum care will be taken to safeguard the interests of those concerned.

Keywords: fresher, sexual and gender based violence (SGBV), sexual harassment, ragging

Procedia PDF Downloads 236
1509 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet

Authors: Wanjiku Karanja

Abstract:

Facebook has monopoly power in the social networking market. It has grown and entrenched its monopoly power through the capture of its users’ data value chains. However, antitrust law’s consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook’s market dominance. These regulatory blind spots are augmented in Facebook’s proposed Diem cryptocurrency project and its Novi digital wallet. Novi, which is Diem’s digital identity component, shall enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook’s large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook shall collect through Novi shall further entrench Facebook's market power. As such, the attendant lock-in effects of this project shall be very difficult to reverse. Urgent regulatory action is therefore required to prevent this expansion of Facebook’s data resources and monopoly power. This research thus highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook’s data capture and control of its users’ data value chains on its market power. This inquiry is contextualized against Novi’s expansive effect on Facebook’s data value chains. It thus addresses the novel antitrust issues arising at the nexus of Facebook’s monopoly power and the privacy of its users’ data. It also explores the impact of platform design principles, specifically data interoperability and data portability, in mitigating Facebook’s anti-competitive practices. As such, this study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, annexure of the consumer data value chain, and control of its users’ social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea for restoring competition in the social networking market. Their success depends on the establishment of robust technical standards and regulatory frameworks.

Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook

Procedia PDF Downloads 123
1508 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field producing many implementations that find use in different fields and are used for research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and corresponding electrical brain activity, a series of experiments was carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from the electrode strips that were implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track the finger movement trajectory characterized by the accelerometer signal. This process was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data was split into training and testing sets, containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach was able to achieve only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study have demonstrated that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for control of devices with a large number of degrees of freedom as well as exploratory studies of the complex neural processes underlying movement execution.
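A minimal sketch of this decoding setup (not the authors' network): a 1D convolutional network regresses acceleration from 1-second multichannel ECoG windows and is scored with the Pearson correlation r; the channel count, sampling rate, and layer sizes are assumptions.

```python
# 1D CNN regressing finger acceleration from 1-second ECoG windows, scored with Pearson r.
import torch
import torch.nn as nn

N_CH, FS = 32, 1000                     # ECoG channels, samples per 1-s window (assumed)

class ECoGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CH, 64, kernel_size=31, stride=4), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=15, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)    # predicted acceleration for the window

    def forward(self, x):               # x: (batch, N_CH, FS)
        return self.head(self.features(x).squeeze(-1)).squeeze(-1)

def pearson_r(pred, target):
    pred, target = pred - pred.mean(), target - target.mean()
    return (pred * target).sum() / (pred.norm() * target.norm() + 1e-8)

model = ECoGDecoder()
x = torch.randn(16, N_CH, FS)           # batch of 1-s ECoG segments
y = torch.randn(16)                     # accelerometer-derived targets
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
print(float(pearson_r(model(x).detach(), y)))
```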

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 177
1507 The Effects on Hand Function with Robot-Assisted Rehabilitation for Children with Cerebral Palsy: A Pilot Study

Authors: Fen-Ling Kuo, Hsin-Chieh Lee, Han-Yun Hsiao, Jui-Chi Lin

Abstract:

Background: Children with cerebral palsy (CP) usually suffer from mild to severe upper limb dysfunction, such as difficulty in reaching and picking up objects, which profoundly affects their participation in activities of daily living (ADLs). Robot-assisted rehabilitation provides intensive physical training for improving sensorimotor function of the hand. Many researchers have extensively studied the effects of robot-assisted therapy (RT) for the paretic upper limb in patients with stroke in recent years. However, few studies have examined the effect of RT on hand function in children with CP. The purpose of this study is to investigate the effectiveness of Gloreha Sinfonia, a robotic device with a dynamic arm support system that mainly focuses on distal upper-limb training, on improvements in hand function and ADLs in children with CP. Methods: Seven children with moderate CP were recruited in this case series study. RT using Gloreha Sinfonia was performed 2 sessions per week, 60 min per session, for 6 consecutive weeks, 12 sessions in total. Outcome measures included the Fugl-Meyer Assessment-upper extremity (FMA-UE), the Box and Block Test, the electromyographic activity of the extensor digitorum communis (EDC) and brachioradialis (BR) muscles, a grip dynamometer for motor evaluation, and the ABILHAND-Kids for measuring manual ability to manage daily activities; these were administered at baseline, after 12 sessions (end of treatment), and at the 1-month follow-up. Results: After 6 weeks of robot-assisted treatment of hand function, there were significant increases in FMA-UE shoulder/elbow scores (p=0.002), FMA-UE wrist/hand scores (p=0.002), and FMA-UE total scores (p=0.002). There were also significant improvements in the BR mean value (p=0.015) and the electrical agonist-antagonist muscle ratio (p=0.041) in a task of grasping a 1-inch cube. These gains were maintained for a month after the end of the intervention. Conclusion: RT using Gloreha Sinfonia for hand function training may contribute toward the improvement of upper extremity function and efficacy in recruiting the BR muscle in children with CP. The results were maintained at one month after the intervention.

Keywords: activities of daily living, cerebral palsy, hand function, robotic rehabilitation

Procedia PDF Downloads 114
1506 Thermal Evaluation of Printed Circuit Board Design Options and Voids in Solder Interface by a Simulation Tool

Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles

Abstract:

Quad Flat No-Lead (QFN) packages have become very popular for tuners, converters and audio amplifiers, among other applications needing efficient power dissipation in small footprints. Since the semiconductor junction temperature (TJ) is a critical parameter for product quality, and to ensure that the die temperature does not exceed the maximum allowable TJ, a thermal analysis conducted in an early development phase is essential to avoid repeated re-design cycles with large losses in cost and time. A simulation tool capable of estimating the die temperature of components with QFN packages was developed. It allows establishing a non-empirical way to define an acceptance criterion for the amount of voids in the solder interface between the exposed pad and the Printed Circuit Board (PCB), to be applied during the industrialization process, and evaluating the impact of PCB design parameters. Targeting the PCB layout designer as the end user of the application, a user-friendly interface (GUI) was implemented, allowing the user to introduce design parameters in a convenient and secure way while hiding all the complexity of the finite element simulation process. This cost-effective tool makes the simulation process transparent and provides useful outputs within acceptable time; it can be adopted by PCB designers, preventing potential risks during the design stage and making the product economically efficient by not oversizing it. This article gathers relevant information related to the design and implementation of the developed tool, presenting a parametric study conducted with it. The simulation tool was experimentally validated using a Thermal Test Chip (TTC) in an open-cavity QFN, in order to measure the junction temperature (TJ) directly on the die under controlled and known conditions. The article provides a short overview of standard thermal solutions and their impact on exposed-pad packages (i.e., QFN), accurately describes the methods and techniques that the system designer should use to achieve optimum thermal performance, and demonstrates the effect of system-level constraints on the thermal performance of the design.
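As a back-of-the-envelope complement to the finite element tool described above, the sketch below estimates the junction temperature with a lumped one-dimensional thermal-resistance path in which solder voids shrink the effective heat-transfer area; the resistance values are illustrative assumptions, not results from the tool.

```python
# Lumped 1-D thermal-resistance estimate of TJ for an exposed-pad QFN with solder voids.
def junction_temperature(power_w, t_ambient=25.0, void_fraction=0.0,
                         theta_jc=2.0, theta_solder_full=1.0, theta_board=15.0):
    """Return TJ in degC for a given dissipated power and solder void fraction."""
    # Voids reduce the conductive area of the solder joint, raising its resistance.
    theta_solder = theta_solder_full / max(1.0 - void_fraction, 1e-3)
    theta_total = theta_jc + theta_solder + theta_board        # K/W, series path
    return t_ambient + power_w * theta_total

for voids in (0.0, 0.25, 0.50):
    print(f"{voids:>4.0%} voids -> TJ = {junction_temperature(2.0, void_fraction=voids):.1f} degC")
```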

Keywords: QFN packages, exposed pads, junction temperature, thermal management and measurements

Procedia PDF Downloads 256
1505 Absorptive Capabilities in the Development of Biopharmaceutical Industry: The Case of Bioprocess Development and Research Unit, National Polytechnic Institute

Authors: Ana L. Sánchez Regla, Igor A. Rivera González, María del Pilar Monserrat Pérez Hernández

Abstract:

The ability of an organization to identify and obtain useful information from external sources, assimilate it, transform it and apply it to generate products or services with added value is called absorptive capacity. Absorptive capabilities help firms seize market opportunities and attain a leading position with respect to their competitors. The Bioprocess Development and Research Unit (UDIBI) is a research and development (R&D) laboratory that belongs to the National Polytechnic Institute (IPN), a higher education institution in Mexico. The UDIBI was created with the purpose of carrying out R&D activities for Transferon®, a biopharmaceutical product developed and patented by IPN. The evolution of its competences and of its scientific and technological platform led UDIBI to expand its scope by providing technological services (preclinical studies and bio-compatibility evaluation) to the national pharmaceutical and biopharmaceutical industries. The relevance of this study is that those industries are classified as being of high scientific and technological intensity, and yet, after a review of the state of the art, there is only one study of absorptive capabilities in the biopharmaceutical industry with a scope similar to this research; in the case of Mexico, there is none. In addition, UDIBI belongs to a public university and its operation does not depend on the federal budget, but on the income generated by its external technological services. This fact represents a highly remarkable case in the context of Mexico's public higher education. This doctoral research (2015-2019) is contextualized within a case study whose main objective is to identify and analyze the absorptive capabilities that characterize the UDIBI and that have allowed it to become one of only two authorized third-party laboratories recognized by the sanitary authority in Mexico for conducting bio-comparability studies of biopharmaceutical products. The fieldwork is divided into two phases. In the first phase, 15 interviews were conducted with UDIBI personnel, covering management levels, heads of services, project leaders and laboratory personnel. These interviews were structured around a questionnaire designed to combine open questions with, to a lesser extent, questions answered on a Likert-type rating scale. From the information obtained in this phase, a scientific article was written (currently under review) and presentation proposals were submitted to different academic forums. A second stage will consist of an ethnographic study within the organization, lasting about three months. In addition, interviews are planned with external actors around the UDIBI (suppliers, advisors, IPN officials), including contact with an academic specialized in absorptive capacities who will comment on this thesis. The initial findings show two lines: (i) there are institutional, technological and organizational management elements that encourage and/or limit the creation of absorptive capacities in this scientific and technological laboratory, and (ii) UDIBI has created a set of knowledge and technology transfer mechanisms that have allowed it to build a large base of prior knowledge.

Keywords: absorptive capabilities, biopharmaceutical industry, high research and development intensity industries, knowledge management, transfer of knowledge

Procedia PDF Downloads 225
1504 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis

Authors: Asif Arshid, Ying Huang, Denver Tolliver

Abstract:

The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials including steel, wood, rock and soil, each with its own varying levels of heterogeneity and imperfection. The application of probabilistic methods to trackbed structural analysis that incorporate material and geometric variabilities remains deeply under-explored. The authors developed and validated a 3-dimensional FE-based numerical trackbed model and, in this study, investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their Coefficients of Variation (COV) while keeping their means constant. These variations are formulated using a Gaussian normal distribution. Two failure mechanisms in the subgrade, namely Progressive Shear Failure and Excessive Plastic Deformation, are examined. Preliminary results of the risk-based probabilistic analysis for Progressive Shear Failure revealed that the variations in ballast depth are the most influential factor for the vertical stress at the top of the subgrade surface. In the case of Excessive Plastic Deformation in the subgrade layer, the variations in its own depth and Young's modulus proved to be most important, while ballast properties remained almost irrelevant. For both failure modes, it is also observed that the reliability index for subgrade failure increases with the increase in the COV of ballast depth and subgrade Young's modulus. The findings of this work are of particular significance in studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbed and in evaluating the associated risk. In addition, it provides an additional tool to supplement deterministic analysis procedures and decision making for railroad maintenance.
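A minimal sketch of a Monte Carlo estimate of the reliability index β for a generic subgrade limit state g = allowable − demand; the stand-in demand model and the distribution parameters are illustrative assumptions and do not reproduce the paper's finite element model.

```python
# Monte Carlo reliability index for a toy subgrade limit state g(X) = allowable - demand.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000

# Random inputs: Gaussian, defined by a mean and a coefficient of variation (COV).
E_subgrade   = rng.normal(60e6, 0.20 * 60e6, n)     # Pa
h_ballast    = rng.normal(0.30, 0.15 * 0.30, n)     # m
wheel_stress = 120e3                                # Pa, applied at the ballast top

# Stand-in demand model: stress at the subgrade top decays with ballast depth and
# depends weakly on the subgrade modulus (purely illustrative, not an FE result).
demand = wheel_stress * np.exp(-3.0 * h_ballast) * (60e6 / E_subgrade) ** 0.1
allowable = 60e3                                    # Pa

g = allowable - demand                              # failure when g < 0
pf = np.mean(g < 0)
beta = -norm.ppf(pf) if 0 < pf < 1 else g.mean() / g.std()
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```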

Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade

Procedia PDF Downloads 139
1503 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components

Authors: Francesca Gullo, Paola Palmero, Massimo Messori

Abstract:

Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood, two approaches can be used: i) bottom-up and ii) top-down. In the second method, the color of natural wood samples is lightened through a chemical bleaching process that acts on the chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolic, and carbonyl groups. These chromophoric units form complex conjugated systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chemicals containing chlorine (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductant (hydrosulfite) or oxidant (hydrogen peroxide) is commonly used to alter or remove the chromophore groups and systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is, therefore, essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls and by monitoring both the chemical treatments and the process conditions, for instance, the treatment time, the concentration of chemical solutions, the pH value, and the temperature. The elimination of wood light scattering is the second step in the fabrication of transparent wood materials, which can be achieved through two approaches: i) the polymer impregnation method and ii) the densification method. In the polymer impregnation method, the wood scaffold is treated under vacuum with polymers having a matching refractive index (e.g., PMMA and epoxy resins) to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to obtain a finished product with high transmittance (>90%) and excellent light-guiding. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or polluting agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which is merely deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for the infiltration of bio-based polymers, while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy, and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the construction of modern energy-efficient buildings.

Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites

Procedia PDF Downloads 54
1502 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination, chosen from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of causal inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach in order to motivate a machine learning approach, based on reinforcement learning, that relaxes some of the current limiting assumptions.
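
The core Bayesian update can be illustrated with a toy example: each detection of the suspect at a camera node rescales the posterior over candidate targets by a likelihood term and renormalises it. The miniature graph, the heuristic shortest-path likelihood, and the 0.9/0.1 probabilities below are illustrative assumptions, not the authors' PeMS-derived network or likelihood model.

```python
# Minimal sketch of a Bayesian posterior update over candidate targets,
# driven by detections of the suspect at nodes of a directed road graph.
import networkx as nx

# Directed road graph; edge weights = average travel time (minutes, assumed).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("B", "C", 3), ("C", "B", 3),
    ("B", "D", 5), ("C", "T1", 6), ("D", "T2", 4),
])

targets = ["T1", "T2"]
posterior = {t: 1.0 / len(targets) for t in targets}   # uniform prior

def likelihood(node, prev_node, target):
    """Detection at `node` is more likely if moving there shortened the
    remaining travel time to `target` (a simple heuristic likelihood)."""
    before = nx.shortest_path_length(G, prev_node, target, weight="weight")
    after = nx.shortest_path_length(G, node, target, weight="weight")
    return 0.9 if after < before else 0.1

# Simulated sequence of camera detections of the suspect.
detections = ["A", "B", "C"]
for prev, cur in zip(detections, detections[1:]):
    for t in targets:
        posterior[t] *= likelihood(cur, prev, t)
    z = sum(posterior.values())
    posterior = {t: p / z for t, p in posterior.items()}   # normalise

print(posterior)   # mass shifts toward T1 once the suspect moves from B to C
```

A soft intervention would enter this picture as a change to the graph (e.g., temporarily removing an edge) followed by further detections, which sharpen the posterior in the same way.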

Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights

Procedia PDF Downloads 115
1501 Identification and Optimisation of South Africa's Basic Access Road Network

Authors: Diogo Prosdocimi, Don Ross, Matthew Townshend

Abstract:

Road authorities are mandated within limited budgets to both deliver improved access to basic services and facilitate economic growth. This responsibility is further complicated if maintenance backlogs and funding shortfalls exist, as evident in many countries including South Africa. These conditions require authorities to make difficult prioritisation decisions, with the effect that Road Asset Management Systems with a one-dimensional focus on traffic volumes may overlook the maintenance of low-volume roads that provide isolated communities with vital access to basic services. Given these challenges, this paper overlays the full South African road network with geo-referenced information for population, primary and secondary schools, and healthcare facilities to identify the network of connective roads between communities and basic service centres. This connective network is then rationalised according to the Gross Value Added and number of jobs per mesozone, administrative and functional road classifications, speed limit, and road length, location, and name to estimate the Basic Access Road Network. A two-step floating catchment area (2SFCA) method, capturing a weighted assessment of drive-time to service centres and the ratio of people within a catchment area to teachers and healthcare workers, is subsequently applied to generate a Multivariate Road Index. This Index is used to assign higher maintenance priority to roads within the Basic Access Road Network that provide more people with better access to services. The relatively limited incidence of Basic Access Roads indicates that authorities could maintain the entire estimated network without exhausting the available road budget before practical economic considerations get any purchase. Despite this fact, a final case study modelling exercise is performed for the Namakwa District Municipality to demonstrate the extent to which optimal relocation of schools and healthcare facilities could minimise the Basic Access Road Network and thereby release budget for investment in roads that best promote GDP growth.
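
A minimal sketch of the two-step floating catchment area (2SFCA) computation referenced above is given below: step one computes a staff-to-population ratio for each facility over the communities within its drive-time catchment, and step two sums those ratios over the facilities reachable from each community. The drive-time matrix, populations, staff counts and 30-minute threshold are made-up toy values, and the additional weighting used in the paper's Multivariate Road Index is not reproduced here.

```python
# Minimal two-step floating catchment area (2SFCA) sketch with toy inputs.
import numpy as np

# drive_time[i, j] = minutes from community i to facility j (assumed values)
drive_time = np.array([
    [10, 35, 60],
    [25, 15, 50],
    [55, 40, 20],
])
population = np.array([5000, 3000, 8000])   # people per community
staff = np.array([12, 8, 20])               # teachers / health workers per facility
catchment = 30                              # drive-time threshold in minutes

within = (drive_time <= catchment).astype(float)   # catchment membership matrix

# Step 1: supply-to-demand ratio for each facility within its catchment.
demand = within.T @ population                      # population reachable by each facility
ratio = np.divide(staff, demand, out=np.zeros(len(staff)), where=demand > 0)

# Step 2: accessibility of each community = sum of reachable facility ratios.
accessibility = within @ ratio
print(accessibility)   # larger values indicate better access to services
```

Communities with low accessibility scores are the ones whose connecting roads would receive higher maintenance priority under an index of this kind.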

Keywords: basic access roads, multivariate road index, road prioritisation, two-step floating catchment area method

Procedia PDF Downloads 231
1500 The Effect of Parathyroid Hormone on Aldosterone Secretion in Patients with Primary Hyperparathyroidism

Authors: Branka Milicic Stanic, Romana Mijovic

Abstract:

In primary hyperparathyroidism, an increased risk of developing cardiovascular disease may exist due to increased activity of the renin-angiotensin-aldosterone system (RAAS). In adenomatous parathyroid tissue, compared with normal tissue, there is a two- to fourfold increase in the expression of type 1 angiotensin II receptors. As there is clear evidence of an independent role of aldosterone in the cardiovascular system, the aim of this study was to evaluate the existence of an association between aldosterone secretion and parathyroid hormone in patients with primary hyperparathyroidism. This study included 48 patients with elevated parathyroid hormone who had come to the Department of Nuclear Medicine, Clinical Center of Vojvodina, for parathyroid scintigraphy. The control group consisted of 30 healthy subjects matched for age and gender to the study group. All results were statistically processed with the statistical package STATISTICA 14 (StatSoft Inc., Tulsa, OK, USA). The survey was conducted between February 2017 and April 2018 at the Department of Nuclear Medicine and the Department for Endocrinology Diagnostics, Clinical Center of Vojvodina, Novi Sad. Compared to the control group, the study group had statistically significantly higher values of aldosterone (p=0.028), total calcium (p=0.01), ionized calcium (p=0.003) and parathyroid hormone (N-TACT PTH) (p=0.00), while phosphorus (p=0.003) and vitamin D (p=0.04) levels were significantly lower in the study group. A linear correlation analysis in the study group revealed a statistically significant positive correlation between renin and N-TACT PTH (r=0.688, p<0.05), renin and calcium (r=0.673, p<0.05), and renin and ionized calcium (r=0.641, p<0.05). Serum aldosterone and parathyroid hormone (N-TACT PTH) levels were positively correlated in patients with primary hyperparathyroidism (r=0.509, p<0.05). According to the linear correlation analysis in the control group, aldosterone showed no positive correlation with N-TACT PTH (r=-0.285, p>0.05), nor with total and ionized calcium (r=-0.200, p>0.05; r=-0.313, p>0.05). In the multivariate regression analysis of the study group, the strongest predictive variable of aldosterone secretion was N-TACT PTH (p=0.011). Aldosterone correlated positively with PTH levels in patients with primary hyperparathyroidism, and in these patients aldosterone might be a key mediator of cardiovascular symptoms. This knowledge should help identify new treatments to prevent cardiovascular disease.

Keywords: aldosterone, hyperparathyroidism, parathyroid hormone, parathyroid gland

Procedia PDF Downloads 140
1499 Diagrid Structural System

Authors: K. Raghu, Sree Harsha

Abstract:

The interrelationship between the technology and architecture of tall buildings is investigated from the emergence of tall buildings in the late 19th century to the present. Early designs of tall buildings in the late 19th century recognized the effectiveness of diagonal bracing members in resisting lateral forces. Most of the structural systems deployed for early tall buildings were steel frames with diagonal bracings of various configurations, such as X, K, and eccentric. Through this historical research, a filtering concept of original and remedial technology is developed, through which one can clearly understand the interrelationship between technical evolution, architectural aesthetics, and the further stylistic transition of buildings. Diagonalized grid structures – “diagrids” – have emerged as one of the most innovative and adaptable approaches to structuring buildings in this millennium. Variations of the diagrid system have evolved to the point of making its use non-exclusive to the tall building; diagrid construction is also found in a range of innovative mid-rise steel projects. Contemporary design practice for tall buildings is reviewed, and design guidelines are provided for new design trends. Investigated in depth are the behavioral characteristics and design methodology of diagrid structures, which emerge as a new direction in the design of tall buildings owing to their powerful structural rationale and symbolic architectural expression. Moreover, new technologies for tall building structures and facades are developed for performance enhancement through design integration, and their architectural potential is explored. On this basis, the analysis and design of 40-100 storey diagrid steel buildings is carried out using E-TABS software, with diagrids of various angles evaluated for the entire building in order to reduce the steel requirement of the structure. The project undertakes wind and seismic analyses for the lateral loads acting on the structure due to wind and earthquakes, together with gravity loads. All structural members are designed as per IS 800-2007, considering all load combinations. Results are compared in terms of time period, top-storey displacement and inter-storey drift. Secondary effects such as temperature variations are not considered in the design, as they are assumed to be small.
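
For reference, the inter-storey drift comparison mentioned above amounts to taking the difference in lateral displacement between consecutive floors and checking it against a permissible fraction of the storey height. The displacement values and the 0.004h drift limit in the sketch below are assumed figures for illustration only, not output from the E-TABS models or a quotation of the governing code clause.

```python
# Small sketch of an inter-storey drift check from floor-level lateral displacements.
storey_height = 3.5                      # m (assumed uniform)
# lateral displacements at each floor level, ground to roof (m, assumed)
displacement = [0.000, 0.004, 0.009, 0.015, 0.022, 0.030]

limit = 0.004 * storey_height            # assumed permissible drift per storey
for storey, (lower, upper) in enumerate(zip(displacement, displacement[1:]), start=1):
    drift = upper - lower                # relative displacement of this storey
    status = "OK" if drift <= limit else "EXCEEDS"
    print(f"storey {storey}: drift = {drift*1000:.1f} mm "
          f"(limit {limit*1000:.1f} mm) {status}")
```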

Keywords: diagrid, bracings, structural, building

Procedia PDF Downloads 386
1498 Effects of Post-sampling Conditions on Ethanol and Ethyl Glucuronide Formation in the Urine of Diabetes Patients

Authors: Hussam Ashwi, Magbool Oraiby, Ali Muyidi, Hamad Al-Oufi, Mohammed Al-Oufi, Adel Al-Juhani, Salman Al-Zemaa, Saeed Al-Shahrani, Amal Abuallah, Wedad Sherwani, Mohammed Alattas, Ibraheem Attafi

Abstract:

Ethanol must be accurately identified and quantified to establish its use and contribution in criminal cases and forensic medicine. In some situations, it may be necessary to reanalyze an old specimen; therefore, it is essential to understand the effect of storage conditions and how long the result of a reanalyzed specimen remains reliable and reproducible. Additionally, ethanol can be produced via multiple in vivo and in vitro processes, particularly in diabetic patients, and the results can be affected by storage conditions and time. In order to distinguish between in vivo and in vitro alcohol generation in the urine samples of diabetes patients, various factors should be considered. This study identifies and quantifies ethanol and ethyl glucuronide (EtG) in diabetic patients' urine samples stored in two different settings over time. Ethanol levels were determined using gas chromatography-headspace (GC-HS), and EtG levels were determined using the immunoassay (RANDOX) technique. Ten urine specimens were collected in standard containers, and each specimen was divided into two containers. The specimens were divided into two groups: those kept at room temperature (25 °C) and those kept cold (2-8 °C). Ethanol and EtG levels were determined serially over a two-week period. Initial results showed that none of the specimens tested positive for ethanol or EtG. At room temperature (15-25 °C), 7 and 14 days after the sample was taken, the average concentration of ethanol increased from 1.7 mg/dL to 2 mg/dL, and the average concentration of EtG increased from 108 ng/mL to 186 ng/mL. At 2–8 °C, the average ethanol concentration was 0.4 and 0.5 mg/dL, and the average EtG concentration was 138 and 124 ng/mL, seven and fourteen days after sample collection, respectively. Ethanol and EtG levels determined 14 days post-collection in refrigerated specimens were considerably lower than in those stored at room temperature. A considerable increase in EtG concentrations (14-day range 0–186 ng/mL) is produced during room-temperature storage, despite negative initial results for all specimens. Because EtG can be produced after sample collection, it is not a reliable indicator of recent alcohol consumption, given the possibility of misleading EtG results due to in vitro EtG production in the urine of diabetic patients.

Keywords: ethyl glucuronide, ethanol, forensic toxicology, diabetic

Procedia PDF Downloads 123
1497 Investigation of Two Polymorphisms of the hTERT Gene (rs2736098 and rs2736100) and the miR-146a rs2910164 Polymorphism in Cervical Cancer

Authors: Hossein Rassi, Alaheh Gholami Roud-Majany, Zahra Razavi, Massoud Hoshmand

Abstract:

Cervical cancer is a multi-step disease that is thought to result from an interaction between genetic background and environmental factors. Human papillomavirus (HPV) infection is the leading risk factor for cervical intraepithelial neoplasia (CIN) and cervical cancer. On the other hand, some hTERT and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relationship between hTERT and miR-146a genotypes in cervical cancer. Forty-two archival samples with cervical lesions were retrieved from Khatam Hospital, and 40 samples from healthy persons were used as the control group. A simple and rapid method was used for the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33 and -35, along with the β-globin gene as an internal control. Multiplex PCR was used in our laboratory to detect hTERT and miR-146a rs2910164 genotypes. Finally, data analysis was performed using version 7 of the Epi Info™ (2012) software and the chi-square (χ2) test for trend. Cervical lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was confirmed by PCR amplification of the β-actin gene (99 bp). According to the results, the hTERT (rs2736098) GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical cancer in the study population. In this study, we detected HPV-18 in 13 of the 42 cervical cancer samples. A connection between several SNP polymorphisms and human papillomavirus has been reported in only a few studies, and the differences among study findings may result from differences in ethnicity, geographic setting, and lifestyle in each region. The present study provided preliminary evidence that a p53 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV-18 genotype. Our results demonstrate that testing of hTERT rs2736098 and miR-146a rs2910164 genotypes in combination with HPV-18 can serve to identify major risk factors for the early identification of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV-18 in Iran.

Keywords: polymorphism of hTERT gene, miR-146a rs2910164 polymorphism, cervical cancer, virus

Procedia PDF Downloads 321
1496 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies

Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König

Abstract:

Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, to date there is no cross-platform integration of these subsystems. Furthermore, online studies still suffer from complex implementation requirements (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimuli properties, or store participants’ responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye- and face-tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, the built-in Google Translate functionality ensures automatic translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can also be analyzed online within our framework using the integrated IPython notebook. The framework was designed so that studies can be used interchangeably between researchers. This not only supports the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, improving the validity of the paradigms. In particular, sharing and integrating experimental designs and analyses will lead to increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using the framework. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and would not have been possible to show without the massive amount of data collected via our framework. These findings shed new light on cultural differences in spatial navigation. We conclude that our new framework offers a wide range of advantages for online research and constitutes a methodological innovation through which new insights can be revealed on the basis of massive data collection.

Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition

Procedia PDF Downloads 257
1495 Transformational Leadership and Self-Efficacy of Academic Heads in the Implementation of a Customized English Language Curriculum

Authors: Sonia Arradaza-Pajaron

Abstract:

This study examined the relationship between transformational leadership (TL) and self-efficacy (SE) of academic heads in the implementation of a customized English language curriculum (CELC) among technological state universities and colleges in the Leyte provinces and Biliran, Philippines. Results showed that academic heads practice transformational leadership and are sufficiently self-efficacious, but the effectiveness of CELC implementation was only at a moderate level. It was further found that three of the four identified transformational leadership components (all except idealized influence) demonstrated significant relationships, in varying degrees, with the CELC component variables. Moreover, the self-efficacy sources, especially vicarious experience and verbal persuasion, showed moderate to high significant relationships with effective CELC implementation. Further, verbal persuasion and physiological/emotional condition showed significant relationships with the CELC-resource and CELC-contextual/community influence components, respectively. Regression analysis showed that the TL individualized-consideration component explained a wider extent of the variance when correlated with the CELC contextual/community component, while the self-efficacy source of verbal persuasion explained a wider extent across three CELC components, namely resource, process, and physiological/emotional condition. Results further revealed that TL individualized consideration had a lesser influence on CELC implementation, while SE verbal persuasion had a stronger influence on CELC-process and CELC-physiological/emotional, and a lesser influence on CELC-resource. This implies that, in order to carry out effective curriculum implementation, academic leaders should place more emphasis on school culture, its beliefs, practices, and academic atmosphere, but most of all empower the human resources who are considered the backbone of the workplace and who can be directly affected by any curriculum shifts and challenges. To realize this, more values- and skills-based training programs should be designed for academic heads to equip them with the necessary leadership skills, belief in their capacity to lead, and enhanced emotional well-being for leading subordinates and facilitating curriculum implementation.

Keywords: Customized English Language curriculum, CELC, self-efficacy, transformational leadership, values-skilled training

Procedia PDF Downloads 122
1494 Desirable Fatty Acids in Meat of Cattle Fed Different Levels of Lipid-Based Diets

Authors: Tiago N. P. Valente, Erico S. Lima, Roberto O. Roça

Abstract:

Introduction: Research in animal production has been stimulated to find ways to decrease the level of saturated fatty acids and increase unsaturated fatty acids in foods of animal origin. The objective of this study was to determine the effect of the dietary inclusion of lipid-based diets on the fatty acid profiles of finishing cattle. Materials and Methods: The study was carried out on the Chapéu de Couro Farm in Aguaí/SP, Brazil. A group of 39 uncastrated Nellore cattle was used. Mean age of the animals was 36 months, and initial mean live weight was 494.1 ± 10.1 kg. Animals were randomly assigned to one of three treatments, formulated on a dry matter basis: a control diet with 2.50% cottonseed, a diet with 11.50% cottonseed, and a diet with 3.13% cottonseed plus 1.77% protected lipid. The forage:concentrate ratio was 50:50 on a dry matter basis, and chopped sugar cane was used as forage. After 63 days, mean final live weight was 577.01 ± 11.34 kg. After slaughter, carcasses were identified and divided into two halves that were kept in a cold chamber for 24 hours at 2°C. Then, part of the M. longissimus thoracis of each animal was removed between the 12th and 13th rib of the left half-carcass. The sample steaks were 2.5 cm thick and were identified and stored frozen at -18°C. The analysis of fatty acid methyl esters was carried out in a gas chromatograph. Desirable fatty acids (FADes) were determined as the sum of the unsaturated fatty acids and stearic acid (C18:0). Results and Discussion: No differences (P>0.05) were found between diets for the proportion of FADes in the meat of the animals in this study, according to the lipid sources used. The inclusion of protected fat or cottonseed in the diet did not change the proportion of FADes in the meat. The mean proportions of FADes in the meat in the present study were: pentadecanoic acid (C15:1 = 0.29%), palmitoleic acid (C16:1 = 4.26%), heptadecanoic acid (C17:1 = 0.07%), oleic acid (C18:1n9c = 37.32%), γ-linolenic acid (0.94%), α-linolenic acid (1.04%), elaidic acid (C18:1n9t = 0.50%), eicosatrienoic acid (C20:3n3 = 0.03%), eicosapentaenoic acid (C20:5n3 = 0.04%), erucic acid (C22:1n9 = 0.89%), docosadienoic acid (C22:2 = 0.04%) and stearic acid (C18:0 = 21.53%). Conclusions: The addition of cottonseed or protected lipid to the diet did not affect the profile of desirable fatty acids in the meat. Acknowledgements: IFGoiano, FAPEG and CNPq (Brazil).
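
The overall FADes proportion follows directly from the definition above: summing the reported mean proportions of the unsaturated fatty acids and adding stearic acid (C18:0). The short sketch below reproduces that arithmetic using only the values quoted in the abstract; the grouping of acids as unsaturated follows the abstract's own listing.

```python
# Quick arithmetic check of FADes = sum of unsaturated fatty acids + stearic acid,
# using the mean proportions reported in the abstract (% of total fatty acids).
unsaturated = {
    "C15:1 pentadecanoic": 0.29,
    "C16:1 palmitoleic": 4.26,
    "C17:1 heptadecanoic": 0.07,
    "C18:1n9c oleic": 37.32,
    "gamma-linolenic": 0.94,
    "alpha-linolenic": 1.04,
    "C18:1n9t elaidic": 0.50,
    "C20:3n3 eicosatrienoic": 0.03,
    "C20:5n3 eicosapentaenoic": 0.04,
    "C22:1n9 erucic": 0.89,
    "C22:2 docosadienoic": 0.04,
}
stearic_c18_0 = 21.53

fades = sum(unsaturated.values()) + stearic_c18_0
print(f"FADes = {fades:.2f}% of total fatty acids")   # prints 66.95%
```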

Keywords: beef quality, cottonseed, protected fat, unsaturated fatty acids

Procedia PDF Downloads 291
1493 Effect of Tissue Preservation Chemicals on Decomposition in Different Soil Types

Authors: Onyekachi Ogbonnaya Iroanya, Taiye Abdullahi Gegele, Frank Tochukwu Egwuatu

Abstract:

Introduction: Forensic taphonomy is a multifaceted area that incorporates decomposition and chemical and biological cadaver exposure into post-mortem event chronology and reconstruction to predict the post-mortem interval (PMI). The aim of this study was to evaluate the integrity of DNA extracted from the remains of embalmed, decomposed Sus domesticus tissues buried in different soil types. Method: A total of 12 limbs of Sus domesticus weighing between 0.7 and 1.4 kg were used. The samples in the treatment groups were treated with 10% formaldehyde, absolute methanol, or 50% pine oil for 24 hours before burial, except the control samples, which were buried immediately. All samples were buried in shallow simulated clay, sandy and loamy soil graves for 12 months. The DNA of each sample was extracted and quantified with a Nanodrop spectrophotometer (JENWAY 6305). The rate of decomposition was examined through modified qualitative decomposition analysis. Extracted DNA was amplified by PCR and bands were visualized via gel electrophoresis. A biochemical enzyme assay was performed on each burial grave soil. Result: The limbs in all burial groups lost weight over the burial period. There was a significant increase in the soil urease level in the samples preserved in formaldehyde across the three soil-type groups (p≤0.01). Also, the control grave soils recorded significantly higher alkaline phosphatase, dehydrogenase and calcium carbonate values compared to the experimental grave soils (p≤0.01). The experimental samples showed a significant decrease in DNA concentration and purity when compared to the control groups (p≤0.01). The findings of the soil biochemical analysis showed that the embalming treatments altered the relationship between organic matter decomposition and soil biochemical properties, as observed in the fluctuations recorded in the soil biochemical parameters. The PCR-amplified DNA showed no bands on the gel electrophoresis plates. Conclusion: In criminal investigations, factors such as the burial grave soil, grave soil biochemical properties, and antemortem exposure to embalming chemicals should be considered in post-mortem interval (PMI) determination.

Keywords: forensic taphonomy, post-mortem interval (PMI), embalmment, decomposition, grave soil

Procedia PDF Downloads 166