Search results for: spectral domain
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2473

253 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern

Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li

Abstract:

Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of the advantages these lasers offer over edge emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit laser beams with high power (from a few milliwatts to watts or even tens of watts) that scales with the emission area while maintaining single mode operation even at large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire based PCSELs. We previously reported an optically pumped, nanowire based PCSEL operating in the O band using a honeycomb lattice. Nanowire based PCSELs have the advantage that they can be grown on silicon platforms without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up more applications, including eye-safe sensing, lidar, and long-haul optical communications. In this work we first analyze how the lattice constant, nanowire diameter, nanowire height, and side length of the hexagon in the honeycomb pattern can be changed to increase the operating wavelength of honeycomb based PCSELs to the C band. Then, in an attempt to make our device electrically pumped, we present finite-difference time-domain (FDTD) simulation results with metals on the nanowire. Results for different metals on the nanowire are presented in order to choose the metal that gives the device the best quality factor. The metals under consideration are those that form good ohmic contacts with p-type doped InGaAs, with low contact resistivity and a decent sticking coefficient to the semiconductor; they include tungsten, titanium, palladium, and platinum. Using the chosen metal, we demonstrate the impact of the metal thickness on the quality factor of the device for a given nanowire height. We also investigate how the nanowire height affects the quality factor for a fixed metal thickness. Finally, the main steps in making the practical device are discussed.
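The quality factor reported by FDTD solvers is commonly extracted from the exponential ringdown of a field monitor after the source is switched off, via Q = pi * f0 * tau. The sketch below is not the authors' code; it is a minimal, generic illustration of that post-processing step on a synthetic damped oscillation (the function name and normalized units are assumptions for illustration).

```python
import numpy as np

def estimate_q_from_ringdown(t, signal, f0):
    """Estimate a cavity Q factor from the exponential decay of a
    time-domain field record: Q = pi * f0 * tau, where tau is the
    amplitude 1/e decay time."""
    envelope = np.abs(signal)
    # local maxima of |signal| sample the decay envelope
    peaks = (envelope[1:-1] > envelope[:-2]) & (envelope[1:-1] > envelope[2:])
    idx = np.where(peaks)[0] + 1
    # linear fit of log-amplitude vs time gives the decay rate -1/tau
    slope, _ = np.polyfit(t[idx], np.log(envelope[idx]), 1)
    tau = -1.0 / slope
    return np.pi * f0 * tau

# synthetic ringdown with a known Q, in normalized frequency units
f0, q_true = 1.0, 500.0
tau = q_true / (np.pi * f0)
t = np.linspace(0.0, 5.0 * tau, 200_000)
sig = np.exp(-t / tau) * np.sin(2.0 * np.pi * f0 * t)
q_est = estimate_q_from_ringdown(t, sig, f0)
```

With the decay-rate method, the recovered Q is insensitive to the spectral resolution limits that affect a direct linewidth (f0 / FWHM) readout on short FDTD runs.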

Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers

Procedia PDF Downloads 18
252 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)

Authors: Ali Pourkazemi

Abstract:

The transient radar method (TRM) is a non-destructive method introduced by the authors a few years ago. It can be classified as a wave-based non-destructive testing (NDT) method applicable over a wide frequency range, although any given measurement uses a narrow band, ranging from a few GHz to a few THz depending on the application. As a time-of-flight and real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately but also blindly; that is, it requires no prior knowledge of the sample under test. For multi-layer structures, TRM is not only able to detect changes related to any parameter within the multi-layer structure but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties as well as the thickness of individual building insulation materials, as single-layer structures, are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples studied are 30 cm x 50 cm, with thicknesses varying from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since TRM is a narrow-band, real-time, free-space sensing system with high-speed computation for analysis, it has a wide range of potential applications, e.g., in the construction, rubber, piping, wind energy, and automotive industries, as well as in biotechnology, the food industry, and pharmaceuticals. Detection of metallic or plastic pipes, wires, etc., through or behind walls is a specific application for the construction industry.
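The abstract does not give its equations, but the standard time-of-flight relations behind this kind of single-layer characterization are simple; the sketch below (function names are illustrative assumptions) shows the textbook links between round-trip delay, front-face reflection coefficient, permittivity, and thickness at normal incidence.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def thickness_from_tof(delta_t_s, eps_r):
    """Single-layer thickness from the round-trip delay between the
    front-face and back-face echoes: d = c * dt / (2 * sqrt(eps_r))."""
    return C * delta_t_s / (2.0 * eps_r ** 0.5)

def eps_from_reflection(gamma):
    """Relative permittivity of a lossless dielectric from its front-face
    reflection coefficient at normal incidence:
    gamma = (1 - sqrt(eps_r)) / (1 + sqrt(eps_r))."""
    n = (1.0 - gamma) / (1.0 + gamma)
    return n * n

# a 3 cm slab of eps_r = 4 delays the back-face echo by 2*d*sqrt(eps_r)/c
dt = 2.0 * 0.03 * 2.0 / C   # about 0.4 ns
d = thickness_from_tof(dt, 4.0)
```

A blind method such as TRM must solve for permittivity and thickness jointly, since the delay alone couples the two; the front-face reflection supplies the extra equation.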

Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies

Procedia PDF Downloads 69
251 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Humans are thus exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates of the contribution induced by certain solar particle events differ by an order of magnitude. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 show that for the worst of them, the dose level is on the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was on the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes extensive air shower characteristics and allows assessment of the ambient dose equivalent. In this approach, the GCR component is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rate of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a statistically representative set of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
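The last computational step described above, folding a secondary-particle spectral fluence rate with fluence-to-dose conversion coefficients, reduces to an integral over energy. The sketch below is not ATMORAD code; it is a generic illustration with made-up toy numbers (the 1/E spectrum, flat coefficient, and function name are all assumptions).

```python
import numpy as np

def ambient_dose_equivalent_rate(energy_mev, fluence_rate, h_coeff):
    """Fold a spectral fluence rate (cm^-2 s^-1 MeV^-1) with
    fluence-to-ambient-dose-equivalent conversion coefficients
    (pSv cm^2) over an energy grid; returns a dose rate in pSv/s.
    Plain trapezoidal integration over energy."""
    y = fluence_rate * h_coeff
    de = energy_mev[1:] - energy_mev[:-1]
    return float(np.sum(de * (y[1:] + y[:-1]) / 2.0))

# toy inputs (not from the paper): a 1/E spectrum and a flat coefficient
E = np.logspace(0.0, 3.0, 50)        # 1 MeV .. 1 GeV
phi = 1e-3 / E                        # cm^-2 s^-1 MeV^-1
h = 400.0 * np.ones_like(E)           # pSv cm^2
rate = ambient_dose_equivalent_rate(E, phi, h)
```

In a real code the sum runs over particle species (neutrons, protons, muons, etc.), each with its own spectrum and coefficient set, and the result is accumulated along the flight trajectory.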

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 206
250 The Effect of Nanotechnology Structured Water on Lower Urinary Tract Symptoms in Men with Benign Prostatic Hyperplasia: A Double-Blinded Randomized Study

Authors: Ali Kamal M. Sami, Safa Almukhtar, Alaa Al-Krush, Ismael Hama-Amin Akha Weas, Ruqaya Ahmed Alqais

Abstract:

Introduction and objectives: Lower urinary tract symptoms (LUTS) are common among men with benign prostatic hyperplasia (BPH). The combination of 5 alpha-reductase inhibitors and alpha-blockers has been used as a conservative treatment of male LUTS secondary to BPH. Nanotechnology structured water (magnalife) is a type of water produced by modulators and specific frequency and energy fields that transform ordinary water into nano-water. In this study, we evaluated the use of nano-water alongside the conservative treatment to see whether it improves the outcome in patients with LUTS/BPH. Materials and methods: For a period of 3 months, 200 men with International Prostate Symptom Score (IPSS) ≥ 13, maximum flow rate (Qmax) ≤ 15 ml/s, and prostate volume > 30 and < 80 cc were randomly divided into two groups. Group A (100 men) was given nano-water with tamsulosin-dutasteride, and group B (100 men) was given ordinary bottled water with tamsulosin-dutasteride. The water bottles were unlabeled and were given in a daily dose of 20 ml/kg body weight; dutasteride 0.5 mg and tamsulosin 0.4 mg were given in daily doses. Both groups were evaluated for IPSS, Qmax, residual urine (RU), and the International Index of Erectile Function - Erectile Function (IIEF-EF) domain at baseline and at the end of the 3 months. Results: Of the 200 men with LUTS included in this study, 193 were followed and 7 dropped out for various reasons. In group A, which included 97 men with LUTS, IPSS decreased by 13.82 (from 20.47 to 6.65) (P<0.00001), Qmax increased by 5.73 ml/s (from 11.71 to 17.44) (P<0.00001), RU was < 50 ml in 88% of patients (P<0.00001), and IIEF-EF increased to 26.65 (from 16.85) (P<0.00001). In group B (96 men with LUTS), IPSS decreased by 8.74 (from 19.59 to 10.85) (P<0.00001), Qmax increased by 4.67 ml/s (from 10.74 to 15.41) (P<0.00001), RU was < 50 ml in 75% of patients (P<0.00001), and IIEF-EF increased to 21 (from 15.87) (P<0.00001). Group A had better results than group B: IPSS in group A decreased to 6.65 vs 10.85 in group B (P<0.00001); Qmax increased to 17.44 in group A vs 15.41 in group B (P<0.00001); group A had RU < 50 ml in 88% of patients vs 75% in group B (P<0.00001); and group A had better IIEF-EF, which increased to 26.65 vs 21 in group B (P<0.00001). The differences between the baseline data of the two groups were not statistically significant. Conclusion: The use of nanotechnology structured water (magnalife) gives better results in terms of LUTS and scores in patients with BPH. The combination shows improvements in IPSS and even in erectile function in these men after 3 months.

Keywords: nano water, lower urinary tract symptoms, benign prostatic hypertrophy, erectile dysfunction

Procedia PDF Downloads 72
249 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5" ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe.
To assist homeowners facing increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps was combined with Black Swan's patented approach, which scores 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
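The abstract mentions spectral vegetation index maps without naming the index; the most common one derived from multispectral red/near-infrared bands is NDVI, sketched below as a generic illustration (the choice of NDVI and the toy reflectance values are assumptions, not details from the paper).

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance bands: (NIR - Red) / (NIR + Red). Values near +1 indicate
    dense green vegetation; dry fuels and bare soil sit near 0."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps guards against division by zero on dark pixels
    return (nir - red) / (nir + red + eps)

# toy 2x2 reflectance patches: healthy canopy (top row) vs dry grass
nir = np.array([[0.50, 0.48], [0.30, 0.28]])
red = np.array([[0.08, 0.10], [0.22, 0.24]])
index = ndvi(nir, red)
```

Thresholding such an index map, combined with distance-to-structure buffers, is one plausible way per-parcel defensible-space metrics like those described above can be computed.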

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 108
248 Weal: The Human Core of Well-Being as Attested by Social and Life Sciences

Authors: Gyorgy Folk

Abstract:

A finite set of cardinal needs defines the human core of living well, shaped on the evolutionary time scale, as attested by the social and life sciences of recent decades. Well-being is the purported state of living well. Human living, like that of any other living being, involves the exchange of vital substance with nature, maintaining a supportive symbiosis with an array of other living beings, living up to bonds of kinship, and exerting effort to sustain living. A supportive natural environment, access to material resources, nearness to fellow beings, and life-sustaining activity are prerequisites of well-being. Well-living is prone to misinterpretation as an individual achievement; one lives well if and only if one is bonded to human relationships, related to a place, and incorporated in nature. Like all other forms of life, human life is a self-sustaining arrangement. One may say that the substance of life is life itself, and not materials, products, and services converted into life. The human being remains as shaped on the evolutionary time scale, within a non-altering core of human being that is invariant to cultural differences across earthly space and time. The present paper proposes the introduction of weal, the missing link in the causal chain between societal performance and the goodness of life. Interpreted differently over the ages, cultures, and disciplines, weal is proposed as the underlying foundation of well-being, the construct in general use. Weal stands for the totality of socialised reality framing well-being for the individual, beyond the possibility of deliberate choice. A descriptive approach to weal, mapping it under the guidance of discrete scientific disciplines, reveals a limited set of cardinal aspects, labeled here the cardinal needs. 'Cardinal' expresses the fundamental reorientation weal can bring about; 'needs' delivers the sense of sine qua non. Weal is conceived as a oneness mapped along eight cardinal needs. The needs, approached as aspects rather than analytically isolated factors, do not require mutually exclusive definitions. To serve the purpose of reorientation, weal is operationalised as a domain in a multidimensional space, each dimension encompassing an optimal level of availability of the fundamental satisfiers between the extremes of drastic insufficiency and harmful excess, ensured by actual human effort. Weal seeks balance among the material and social aspects of human being while allowing for cultural and individual uniqueness in attaining human flourishing.

Keywords: human well-being, development, economic theory, human needs

Procedia PDF Downloads 227
247 Multilingualism in Medieval Romance: A French Case Study

Authors: Brindusa Grigoriu

Abstract:

Inscribing itself in the field of the history of multilingual communities with a focus on the evolution of language didactics, our paper aims at providing a pragmatic-interactional approach on a corpus proposing to scholars of the international scientific community a relevant text of early modern European literature: the first romance in French, The Conte of Flore and Blanchefleur by Robert d’Orbigny (1150). The multicultural context described by the romance is one in which an Arab-speaking prince, Floire, and his Francophone protégée, Blanchefleur, learn Latin together at the court of Spain and become fluent enough to turn it into the language of their love. This learning process is made up of interactional patterns of affective relevance, in which the proficiency of the protagonists in the domain of emotive acts becomes a matter of linguistic and pragmatic emulation. From five to ten years old, the pupils are efficiently stimulated by their teacher of Latin, Gaidon – a Moorish scholar of the royal entourage – to cultivate their competencies of oral expression and reading comprehension (of Antiquity classics), while enjoying an ever greater freedom of written expression, including the composition of love poems in this second language of culture and emotional education. Another relevant parameter of the educational process at court is that Latin shares its prominent role as a language of culture with French, whose exemplary learner is the (Moorish) queen herself. Indeed, the adult 'First lady' strives to become a pupil benefitting from lifelong learning provided by a fortuitous slave-teacher with little training, her anonymous chambermaid and Blanchefleur’s mother, who, despite her status of a war trophy, enjoys her Majesty’s confidence as a cultural agent of change in linguistic and theological fields. 
Thus, the two foreign languages taught at Spain's court, Latin and French, as opposed to Arabic, suggest a spiritual authority allowing the mutual enrichment of intercultural pioneers of cross-linguistic communication in the aftermath of religious wars. Durably and significantly, if not everlastingly, the language of physical violence rooted in intra-cultural solipsism is replaced by two Romance languages which seem to embody, together and yet distinctly, the parlance of peace-making.

Keywords: multilingualism, history of European language learning, French and Latin learners, multicultural context of medieval romance

Procedia PDF Downloads 139
246 Analysis of Vibration and Shock Levels during Transport and Handling of Bananas within the Post-Harvest Supply Chain in Australia

Authors: Indika Fernando, Jiangang Fei, Roger Stanley, Hossein Enshaei

Abstract:

Delicate produce such as fresh fruit is increasingly susceptible to physiological damage during essential post-harvest operations such as transport and handling. Vibration and shock during distribution are identified factors in produce damage within post-harvest supply chains. Mechanical damage caused during transit may significantly diminish the quality of fresh produce, which may also result in substantial wastage. Bananas are one of the staple fruit crops and the most sold supermarket produce in Australia. The banana industry is also the largest horticultural industry in the state of Queensland, where 95% of the total production of bananas is cultivated. This results in significantly lengthy interstate supply chains in which fruits are exposed to prolonged vibration and shocks. This paper focuses on determining the shock and vibration levels experienced by packaged bananas during transit from the farm gate to the retail market. Tri-axis acceleration data were captured by custom-made accelerometer-based data loggers set to a predetermined sampling rate of 400 Hz. The devices recorded data continuously for 96 hours over the interstate journey of nearly 3,000 km from the growing fields in far north Queensland to the central distribution centre in Melbourne, Victoria. After the bananas were ripened at the ripening facility in Melbourne, the data loggers were used to capture the transport and handling conditions from the central distribution centre to three retail outlets on the outskirts of Melbourne. The quality of the bananas was assessed before and after transport at each location along the supply chain. Time series vibration and shock data were used to determine the frequency and severity of the transient shocks experienced by the packages. A frequency spectrogram was generated to determine the dominant frequencies within each segment of the post-harvest supply chain. Root mean square (RMS) acceleration levels were calculated to characterise the vibration intensity during transport. Data were further analysed by fast Fourier transform (FFT), and power spectral density (PSD) profiles were generated to determine the critical frequency ranges. This revealed the frequency ranges in which escalated energy levels were transferred to the packages. It was found that vertical vibration was the highest and that acceleration levels mostly oscillated between ±1 g during transport. Several shock responses exceeding this range were recorded, mostly attributed to package handling. These detrimental high-impact shocks may eventually lead to mechanical damage in bananas, such as impact bruising, compression bruising, and neck injuries, which affect their freshness and visual quality. It was revealed that the frequency ranges of 0-5 Hz and 15-20 Hz exert an escalated level of vibration energy on the packaged bananas, which may result in abrasion damage such as scuffing, fruit rub, and blackened rub. Further research is indicated, especially in the identified critical frequency ranges, to minimise the exposure of fruit to the harmful effects of vibration. Improving handling conditions, together with further study of package failure mechanisms under transient shock excitation, will be crucial to improving the visual quality of bananas within the post-harvest supply chain in Australia.
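The RMS and FFT/PSD analysis described here is standard signal processing. The sketch below is a minimal, generic stand-in (not the study's code): it computes RMS acceleration and the dominant frequency of a periodogram on a synthetic 400 Hz logger record, with the 3 Hz "suspension mode" and noise level being invented illustration values.

```python
import numpy as np

FS = 400  # Hz, the data loggers' sampling rate quoted in the abstract

def rms(accel_g):
    """Root-mean-square acceleration, the vibration-intensity metric."""
    a = np.asarray(accel_g, dtype=float)
    return float(np.sqrt(np.mean(a * a)))

def dominant_frequency(accel_g, fs=FS):
    """Frequency bin with the most power in a one-sided periodogram,
    a minimal stand-in for the FFT/PSD analysis in the study."""
    a = np.asarray(accel_g, dtype=float)
    a = a - a.mean()                          # drop the DC component
    psd = np.abs(np.fft.rfft(a)) ** 2
    freqs = np.fft.rfftfreq(a.size, d=1.0 / fs)
    return float(freqs[np.argmax(psd)])

# synthetic truck-bed record: a 3 Hz vibration mode plus sensor noise
t = np.arange(0.0, 10.0, 1.0 / FS)
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2.0 * np.pi * 3.0 * t) + 0.05 * rng.standard_normal(t.size)
level = rms(signal)
peak_hz = dominant_frequency(signal)
```

In practice a Welch-averaged PSD (overlapping windowed segments) would be preferred over a raw periodogram for long, noisy logger records.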

Keywords: bananas, handling, post-harvest, supply chain, shocks, transport, vibration

Procedia PDF Downloads 190
245 A Study of Predicting Judgments on Causes of Online Privacy Invasions: Based on U.S Judicial Cases

Authors: Minjung Park, Sangmi Chai, Myoung Jun Lee

Abstract:

Since there are growing concerns about online privacy, enterprises can become involved in various personal privacy infringement cases with legal consequences. For companies engaged in online business, it is important to pay extra attention to protecting users' privacy. If firms are aware of the consequences of possible online privacy invasion cases, they can more actively prevent future online privacy infringements. This study attempts to predict the probability of ruling types caused by various invasion cases under the U.S. Personal Privacy Act. More specifically, this research explores online privacy invasion cases in which the defendant was found guilty, to identify types of criminal punishment such as penalties, imprisonment, and probation, as well as compensation in civil cases. Based on 853 U.S. judicial cases related to data privacy, ranging from January 2000 to May 2016, this research examines the relationship between personal information infringement cases and adjudications. From an analysis of 41,724 words extracted from the 853 legal cases, this study examines online users' privacy invasion cases to predict the probability of conviction for a firm as an offender under both criminal and civil law. This research specifically examines the relationship between a cause of privacy infringement and the judgment type, i.e., whether it leads to civil or criminal liability, in U.S. courts. This study applies network text analysis (NTA) for data analysis, which is regarded as a useful method to discover social trends embedded within texts. According to our results, certain online privacy infringement cases caused by online spamming and adware carry a high probability that firms are found liable. Our research results provide meaningful insights to academia as well as industry. First, our study provides a new insight by applying big data analytics to legal cases so that the causes of invasions and their legal consequences can be predicted. Since there is little research applying big data analytics in the domain of law, and specifically in online privacy, this study suggests a new area for future studies to explore. Second, this study reflects social influences, such as the development of privacy invasion technologies and changes in users' level of awareness of online privacy, in the analysis of judicial cases by adopting the NTA method. Our research results indicate that firms need to improve their technical and managerial systems for protecting users' online privacy in order to avoid negative legal consequences.
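At its core, network text analysis builds a graph whose nodes are terms and whose edge weights count co-occurrence. The sketch below is not the authors' pipeline; it is a minimal illustration of that backbone step, with three invented one-line "case summaries" standing in for the 853 judicial opinions.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents):
    """Build an undirected word co-occurrence network: nodes are words,
    edge weights count how many documents contain both words."""
    edges = Counter()
    for doc in documents:
        words = sorted(set(doc.lower().split()))  # unique, order-stable
        for a, b in combinations(words, 2):
            edges[(a, b)] += 1
    return edges

# toy stand-ins for case texts (invented for illustration)
cases = [
    "spamming privacy damages awarded",
    "adware privacy liability found",
    "spamming liability damages awarded",
]
net = cooccurrence_network(cases)
```

Real NTA adds tokenization, stop-word removal, and centrality measures on the resulting graph to surface the dominant cause-outcome associations.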

Keywords: network text analysis, online privacy invasions, personal information infringements, predicting judgements

Procedia PDF Downloads 229
244 Exploring the Nexus of Gastronomic Tourism and Its Impact on Destination Image

Authors: Usha Dinakaran, Richa Ganguly

Abstract:

Gastronomic tourism has evolved into a prominent niche within the travel industry, with tourists increasingly seeking unique culinary experiences as a primary motivation for their journeys. This research explores the intricate relationship between gastronomic tourism and its profound influence on the overall image of travel destinations. It delves into the multifaceted aspects of culinary experiences, tourists' perceptions, and the preservation of cultural identity, all of which play pivotal roles in shaping a destination's image. The primary aim of this study is to comprehensively examine the interplay between gastronomy and tourism, specifically focusing on its impact on destination image. The research seeks to achieve the following objectives: (1) investigate how tourists perceive and engage with gastronomic tourism experiences; (2) understand the significance of food in shaping the tourism image; (3) explore the connection between gastronomy and the destination's cultural identity; and (4) quantify the relationship between tourists' engagement in co-creation activities related to gastronomic tourism and their overall satisfaction with the quality of their culinary experiences. To achieve these objectives, a mixed-method research approach will be employed, including surveys, interviews, and content analysis. Data will be collected from tourists visiting diverse destinations known for their culinary offerings. This research anticipates uncovering valuable insights into the nexus between gastronomic tourism and destination image. It is expected to shed light on how tourists' perceptions of culinary experiences impact their overall perception of a destination. Additionally, the study aims to identify factors influencing tourist satisfaction and how cultural identity is preserved and promoted through gastronomic tourism. The findings of this research hold practical implications for destination marketers and stakeholders. Understanding the symbiotic relationship between gastronomy and tourism can guide the development of more targeted marketing strategies. Furthermore, promoting co-creation activities can enhance tourists' culinary experiences and contribute to the positive image of destinations. This study contributes to the growing body of knowledge regarding gastronomic tourism by consolidating insights from various studies and offering a comprehensive perspective on its impact on destination image. It offers a platform for future research in this domain and underscores the importance of culinary experiences in contemporary travel. In conclusion, this research endeavors to illuminate the dynamic interplay between gastronomic tourism and destination image, providing valuable insights for both academia and industry stakeholders in the field of tourism and hospitality.

Keywords: gastronomy, tourism, destination image, culinary

Procedia PDF Downloads 74
243 Computational Approach to Identify Novel Chemotherapeutic Agents against Multiple Sclerosis

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Multiple sclerosis (MS) is a chronic demyelinating autoimmune disorder of the central nervous system (CNS). At present, the available therapies either do not halt the progression of the disease or have side effects that limit the long-term use of current Disease Modifying Therapies (DMTs). Given these treatment limitations, we focus on screening novel analogues of the available DMTs that specifically bind and inhibit the sphingosine-1-phosphate receptor 1 (S1PR1), thereby hindering lymphocyte propagation toward the CNS. Such drug-like analog molecules would decrease the frequency of relapses (recurrence of the symptoms associated with MS) with higher efficacy and lower toxicity to the human system. In this study, an integrated ligand-based virtual screening protocol (Ultrafast Shape Recognition with CREDO Atom Types, USRCAT) was employed to identify non-toxic, drug-like analogs of the approved DMTs. The potential of the drug-like analog molecules to cross the Blood Brain Barrier (BBB) was estimated. In addition, molecular docking and simulation using AutoDock Vina 1.1.2 and GOLD 3.01 were performed using the X-ray crystal structure of the Mtb LprG protein to calculate the affinity and specificity of the analogs for the given LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program that evaluates the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein; a ligand with higher hypothetical affinity has a more negative score. Non-specific ligands were then screened out using the structural filter proposed by Baell and Holloway. Based on the USRCAT, Lipinski's values, toxicity, and BBB analyses, the drug-like analogs of fingolimod and BG-12, RTL and CHEMBL1771640 respectively, were found to be non-toxic and permeable to the BBB.
The successful docking and DSX analyses showed that RTL and CHEMBL1771640 could bind to the binding pocket of the human S1PR1 receptor protein with greater affinity than their parent compound (fingolimod). In this study, we also found that all the drug-like analogs of the standard MS drugs passed the Baell and Holloway filter.
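As an illustration of the drug-likeness screening step mentioned above, Lipinski's rule of five can be sketched as follows; the property values in the example are rough, assumed figures for a fingolimod-like molecule, not data from this study.

```python
def lipinski_pass(mw, logp, h_donors, h_acceptors):
    """True if a molecule violates at most one of Lipinski's four
    criteria: MW <= 500 Da, logP <= 5, H-bond donors <= 5,
    H-bond acceptors <= 10."""
    violations = sum([
        mw > 500.0,
        logp > 5.0,
        h_donors > 5,
        h_acceptors > 10,
    ])
    return violations <= 1

# Rough, assumed property values for a fingolimod-like small molecule
drug_like = lipinski_pass(mw=307.5, logp=4.2, h_donors=2, h_acceptors=2)  # -> True
```

In practice such properties would be computed from the molecular structure by a cheminformatics toolkit rather than entered by hand.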

Keywords: antagonist, binding affinity, chemotherapeutics, drug-like, multiple sclerosis, S1PR1 receptor protein

Procedia PDF Downloads 256
242 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review

Authors: Anicet Dansou

Abstract:

Textile reinforced concrete (TRC) is increasingly used in various fields, particularly civil engineering, where it is mainly employed to strengthen damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. It is an alternative to the traditional Fiber Reinforced Polymer (FRP) composite: it offers good mechanical performance and better temperature stability, and it better meets the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior, and their macroscopic behavior depends on mechanisms acting at multiple scales. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multiscale approaches have emerged as among the most suitable methods for their simulation. These approaches aim to incorporate information about microscale constituent behavior, mesoscale behavior, and macroscale structural response within a unified model that enables rapid simulation of structures; the computational cost is hence significantly reduced compared to a standard simulation at the fine scale. The fine-scale information can be introduced implicitly into the macroscale model: approaches of this type are called non-classical. A representative volume element (RVE) is defined, and the fine-scale information is homogenized over it; analytical and computational homogenization and nested mesh methods belong to this family. In classical approaches, on the other hand, the fine-scale information is introduced explicitly into the macroscale model; such approaches include adaptive mesh refinement strategies, sub-modeling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches.
Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the computational cost (efficiency), etc. A bibliographic study is presented of recent results and advances and of the scientific obstacles to be overcome to achieve an effective simulation of textile reinforced concrete in civil engineering. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.
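To make the homogenization idea concrete, the simplest analytical bounds on the effective stiffness of a two-phase RVE can be sketched as follows; the moduli and fiber fraction are assumed, illustrative figures, not values taken from the literature reviewed here.

```python
def voigt_reuss_bounds(e_fiber, e_matrix, vf):
    """Voigt (iso-strain, upper) and Reuss (iso-stress, lower) bounds
    on the effective Young's modulus of a two-phase composite with
    fiber volume fraction vf."""
    e_voigt = vf * e_fiber + (1.0 - vf) * e_matrix
    e_reuss = 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)
    return e_voigt, e_reuss

# Assumed values: AR-glass textile (~72 GPa) in a fine mortar (~30 GPa), 3% fibers
upper, lower = voigt_reuss_bounds(e_fiber=72.0, e_matrix=30.0, vf=0.03)
```

Real multiscale models replace these closed-form bounds with computational homogenization over a meshed RVE, but the principle of averaging fine-scale behavior into a macroscale stiffness is the same.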

Keywords: composite structures, multiscale methods, numerical modeling, textile reinforced concrete

Procedia PDF Downloads 108
241 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization

Authors: Jessica Gu, Yu Chen

Abstract:

Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain is equipped with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, the creation (or generation) and diffusion (or sharing/exchange) of knowledge are primary organizational concerns from the problem-solving perspective; however, the optimal distribution of effort between knowledge creation and diffusion is still unknown to knowledge workers. This research proposes an agent-based model of knowledge creation and diffusion in an organization, aiming to elucidate how intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and to explore which exogenous interventions by the policy maker and endogenous adjustments by the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments is conducted. Both long-term steady-state and time-dependent developmental results are obtained on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of knowledge workers choosing creation versus diffusion. One interesting finding reveals a non-monotonic effect on organizational performance under a turbulent environment and a monotonic effect under a stable environment. Hence, whether the environment is turbulent or stable, the most suitable exogenous KM policy and endogenous creation/diffusion adjustments can be identified for achieving optimized organizational performance.
Additional influential variables are further discussed, and future work directions are elaborated. The proposed agent-based model generates evidence on how knowledge workers strategically allocate effort between knowledge creation and diffusion, how bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions challenge the organizational system. Meanwhile, it serves as a roadmap and offers macro-level, long-term insights to policy makers without interrupting real organizational operations, incurring huge overhead costs, or introducing undesired panic among employees.
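A minimal agent-based sketch of the creation-versus-diffusion choice, far simpler than the model proposed in the study, might look like this; the number of agents, step count, and creation probability are all illustrative assumptions.

```python
import random

def simulate(n_agents=20, steps=200, p_create=0.3, seed=1):
    """Toy model: at each step one agent either creates a new
    knowledge item (with probability p_create) or diffuses a random
    item it holds to a random peer. Returns the total knowledge
    stock across the organization as a crude performance measure."""
    random.seed(seed)
    stocks = [set() for _ in range(n_agents)]
    next_item = 0
    for _ in range(steps):
        i = random.randrange(n_agents)
        if random.random() < p_create or not stocks[i]:
            stocks[i].add(next_item)  # creation: a brand-new item
            next_item += 1
        else:
            j = random.randrange(n_agents)  # diffusion: share an item
            stocks[j].add(random.choice(sorted(stocks[i])))
    return sum(len(s) for s in stocks)
```

Sweeping `p_create` under different "environment" regimes (e.g., periodically expiring items to mimic turbulence) is the kind of experiment such a model supports.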

Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution

Procedia PDF Downloads 241
240 The Reasons for Failure in Writing Essays: Teaching Writing as a Project-Based Enterprise

Authors: Ewa Toloczko

Abstract:

Studies show that developing writing skills throughout years of formal foreign language instruction does not necessarily result in rewarding accomplishments among learners, nor in an affirmative attitude toward written assignments. This apparently widespread bias against writing may be caused by the diminished relevance students attach to it, as opposed to the other productive skill, speaking; by insufficient resources available for them to succeed; or by the ways writing is approached by instructors, that is, inapt teaching techniques that discourage rather than kindle learners' engagement. The assumption underlying this presentation is that psychological and psycholinguistic factors constitute a key dimension of every writing process and hence should be seriously considered in both material design and lesson planning. The author demonstrates research in which writing tasks were conceived of as attitudinal rather than technical operations and consequently turned into meaningful, socially oriented incidents that students could relate to and take an active hand in. The instrument employed for this purpose, making writing even more interactive, was the project format: a carefully devised series of tasks that involved students as human beings, not only as language learners. The projects rested on the premise that the presence of peers and the teacher in class could be used in a supportive rather than evaluative mode. In fact, the research showed that collaborative work and constant meaning negotiation reinforced not only bonds between learners but also the language form and structure of the output. Accordingly, the role of the teacher shifted from assessor to problem barometer, always ready to accept the slightest improvements in students' language performance.
In this way, written verbal communication, which usually aims merely to display accuracy and coherent content for assessment, became part of an enterprise meant to emphasize its social aspect: the writer in a real-life setting. The sample projects show the spectrum of possibilities teachers have when exploring the domain of writing within the school curriculum. The ideas are easy to modify and adjust to all proficiency levels and ages. Initially, however, they were designed for teenage and young adult learners of English as a foreign language in both European and Asian contexts.

Keywords: projects, psycholinguistic/psychological dimension of writing, writing as a social enterprise, writing skills, written assignments

Procedia PDF Downloads 234
239 Establishing the Legality of Terraforming under the Outer Space Treaty

Authors: Bholenath

Abstract:

Ever since Elon Musk revealed his plan to terraform Mars on national television in 2015, the debate over the legality of such an activity under the current Outer Space Treaty regime has been gaining momentum. To terraform means to alter or transform the atmosphere of another planet so that it has the characteristics of landscapes on Earth. Musk's plan is to alter the entire environment of Mars so as to make it habitable for humans. He has long been an advocate of colonizing Mars, and in order to make humans an interplanetary species, he wants to detonate thermonuclear devices over the poles of Mars. To a layperson this seems a fascinating endeavor, but for space lawyers it poses new and fascinating legal questions: whether the use of nuclear weapons on celestial bodies is permitted under the Outer Space Treaty; whether such an alteration of the celestial environment would fall within the scope of the term 'harmful contamination' under Article IX of the treaty; whether an activity that would put an entire planet under the control of a private company can be permitted under the treaty; whether terraforming Mars would amount to its appropriation; and whether such an activity would be in the 'benefit and interests of all countries'. This paper attempts to examine and elucidate these legal questions. Space is one domain where the law should precede man. The paper follows the approach that the de lege lata is not capable of prohibiting the terraforming of Mars. The Outer Space Treaty provides the freedoms of space and prescribes certain restrictions on those freedoms as well. The author examines provisions such as Articles I, II, IV, and IX of the Outer Space Treaty in order to establish the legality of terraforming activity.
The author establishes how such activity is a peaceful use of the celestial body, is in the benefit and interests of all countries, and qualifies neither as national appropriation of the celestial body nor as its harmful contamination. The paper is divided into three chapters. The first chapter provides a general introduction to the problem, an analysis of Elon Musk's plan to terraform Mars, and the need to study terraforming through the lens of the Outer Space Treaty. In the second chapter, the author attempts to establish the legality of terraforming under the provisions of the Outer Space Treaty; in this vein, the author puts forth the counter-interpretations and arguments that may be formulated against the lawfulness of terraforming, shows why those counter-interpretations should not be accepted, and provides the interpretations that should prevail, ultimately establishing the legality of terraforming under the treaty. In the third chapter, the author draws relevant conclusions and offers suggestions.

Keywords: appropriation, harmful contamination, peaceful, terraforming

Procedia PDF Downloads 153
238 Reaching New Levels: Using Systems Thinking to Analyse a Major Incident Investigation

Authors: Matthew J. I. Woolley, Gemma J. M. Read, Paul M. Salmon, Natassia Goode

Abstract:

The significance of high-consequence workplace failures within construction continues to resonate, with a combined average of 12 fatal incidents occurring daily across Australia, the United Kingdom, and the United States. Within the Australian construction domain, more than 35 serious, compensable injury incidents are reported daily. These alarming figures, in conjunction with the continued occurrence of fatal and serious occupational injury incidents globally, suggest existing approaches to incident analysis may not be achieving the required injury prevention outcomes. One reason may be that incident analysis methods used in construction have not kept pace with advances in the field of safety science and are not uncovering the full range of system-wide contributory factors required to achieve optimal levels of construction safety performance. Another reason underpinning this global issue may be the absence of information about the construction operating and project delivery system; for example, it is not clear who shares the responsibility for construction safety in different contexts. To respond to this issue, a first-of-its-kind (to the authors' best knowledge) control structure model of the construction industry is presented and then used to analyse a fatal construction incident. The model was developed by applying and extending the Systems-Theoretic Accident Model and Processes (STAMP) method to hierarchically represent the actors, constraints, feedback mechanisms, and relationships involved in managing construction safety performance. The Causal Analysis based on Systems Theory (CAST) method was then used to identify the control and feedback failures involved in the fatal incident. The conclusions from the Coronial investigation into the event are compared with the findings stemming from the CAST analysis.
The CAST analysis highlighted additional issues across the construction system that were not identified in the coroner's recommendations, suggesting there is a potential benefit in applying a systems theory approach to incident analysis in construction. The findings demonstrate the utility of applying systems theory-based methods to the analysis of construction incidents. Specifically, this study shows the utility of the construction control structure and the potential benefits for project leaders, construction entities, regulators, and construction clients in controlling construction performance.
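The control-structure idea can be sketched in code as a set of controller-to-process links checked for missing feedback channels, a typical CAST finding of a flawed control loop; the actors named here are hypothetical and are not taken from the analysed incident.

```python
# Each edge: (controller, controlled_process, has_feedback_channel)
# Hypothetical construction control structure for illustration only.
control_structure = [
    ("Regulator", "Construction company", True),
    ("Construction company", "Site supervisor", True),
    ("Site supervisor", "Work crew", False),  # no feedback channel
]

def missing_feedback(edges):
    """Return controller -> process links that lack a feedback
    channel, i.e., candidate control-loop flaws in a CAST analysis."""
    return [(c, p) for c, p, fb in edges if not fb]
```

A full STAMP/CAST model would also attach control actions, constraints, and process models to each node, but even this skeleton makes absent feedback paths explicit.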

Keywords: construction project management, construction performance, incident analysis, systems thinking

Procedia PDF Downloads 131
237 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads

Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li

Abstract:

Material mixing processes and related dynamic issues under extreme compression have gained growing attention in the last ten years because of their engineering relevance to inertial confinement fusion (ICF) and hypervelocity aircraft development. However, there is still a lack of models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution under conditions of the high-energy-density regime. In macroscopic hydrodynamics, three numerical methods, direct numerical simulation (DNS), large eddy simulation (LES), and the Reynolds-averaged Navier-Stokes equations (RANS), have achieved reasonably broad consensus under conditions of the low-energy-density regime. Under high-energy-density conditions, however, they cannot be applied directly because of dissociation, ionization, dramatic changes in the equation of state, thermodynamic properties, etc., which may invalidate the governing equations in some coupled situations. From the micro/meso-scale viewpoint, in contrast, methods based on Molecular Dynamics (MD) and Monte Carlo (MC) models have proven to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principles-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov instability (RMI) of a metal lithium and hydrogen gas (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method, based on predefined potential functions, has limited applicability to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method correctly simulates ionization owing to its ab initio character; 2) because of computational cost, the eFF-MD results are also influenced by the simulation domain dimensions, boundary conditions, relaxation time choices, etc.
A series of tests has been conducted to determine the optimized parameters. 3) Ionization induced by strong shock compression has important effects on the Li-H2 interface evolution of the RMI, indicating a new micro-mechanism of RMI under conditions of the high-energy-density regime.
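One common RMI diagnostic extracted from MD particle data is a mixing-zone width; a crude sketch, assuming Li initially occupies x < 0 and H2 occupies x > 0, is given below. The positions are made-up toy values, not simulation output.

```python
def mixing_width(x_li, x_h2):
    """Crude mixing-zone width along the shock direction x: the
    overlap between the farthest Li particle penetration into the
    H2 side and the farthest H2 penetration into the Li side.
    Assumes Li initially occupies x < 0 and H2 occupies x > 0."""
    li_front = max(x_li)   # deepest Li particle into the H2 region
    h2_front = min(x_h2)   # deepest H2 particle into the Li region
    return max(0.0, li_front - h2_front)

# Toy particle positions (nm) after shock passage
width = mixing_width([-5.0, -1.0, 2.5], [-1.5, 3.0, 6.0])  # -> 4.0
```

Tracking this width over time (and across shock speeds) is one way the interface evolution described above is quantified.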

Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability

Procedia PDF Downloads 225
236 The Importance of Clinical Pharmacy and Computer Aided Drug Design

Authors: Mario Hanna Louis Hanna

Abstract:

The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture faculties in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural training curriculum. Drawing on current literature, the study begins with the advantages of integrating CAD into architectural education and the responsibilities of the various stakeholders in the implementation process. It also examines issues related to the improper use of information technology and the perceived negative effect of CAD use on design creativity. Using a survey technique, information from the architecture department of Chukwuemeka Odumegwu Ojukwu University, Uli was collected to serve as a case study of how the issues raised were being addressed. The article draws conclusions on what ensures a successful, ethical implementation. Tens of millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is a treatment option for patients with hepatitis C, but these treatments have side effects. Our research targeted the development of an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific to HCV. Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational strategies to propose a specific antiviral drug for the protein domains found in the hepatitis C virus.
The study used homology modeling and ab initio modeling to generate the 3D structures of the proteins and then identified pockets within them. Suitable ligands for the pockets were developed using the de novo drug design method, with pocket geometry taken into consideration when designing the ligands. Among the many ligands generated, a distinct one for each of the HCV protein domains has been proposed.

Keywords: drug design, anti-viral drug, in silico drug design, hepatitis C virus (HCV), CAD (computer aided design), CAD education, education improvement, small-size contractor, automatic pharmacy, PLC, control system, management system, communication

Procedia PDF Downloads 27
235 Music in Religion Culture of the Georgian Pentecostals

Authors: Nino Naneishvili

Abstract:

The study of religious minorities and their musical culture has attracted scant academic attention in Georgia. Within wider Georgian society, the focus of discourse to date has been on the traditional Orthodox religion and its musical expression, with other forms of religious expression regarded as intrinsically less valuable. The goal of this article is to study Georgia's more varied religious and musical picture, presented here through the example of the Pentecostals. The Pentecostal movement originated at the end of the 19th century in the USA and first appeared in Georgia as early as 1914. An ethnomusicological perspective allows the use of anthropological and sociological approaches; the basic methodology is ethnographic, involving attendance at religious services, observation, in-depth interviews, and analysis of musical material. This analysis, based on a combined use of various theoretical and methodological approaches, reveals that Georgian Pentecostals are characterized not only by polyphonic singing but also by "bi-musicality." This phenomenon, together with Georgian three-part polyphony, combines vocalization within "social polyphony." The concepts of back stage and front stage are highlighted. Chanters also try to express national identity; in some cases, however, it has been observed that they abandon or conceal certain musical forms of expression that are considered central to Georgian identity. The famous hymn "Thou Art a Vineyard" is a case in point. The reason given for this omission within the Georgian Pentecostal church is that, in Pentecostal doctrine, God alone is the object of worship; therefore, there is no veneration of saints as representatives of the Divine. Some informants denied the existence of this hymn, while others explained that the Vineyard refers to Jesus Christ rather than the Virgin Mary.
Others stated that they love the Virgin Mary and therefore feel free to sing this song outside church circles. The results of this study illustrate that one of the religious minorities in Georgia, the Pentecostals, is characterized by a deviation in musical thinking from Homo polyphonicus: they actively change their form of musical worship toward a secondary ethno-hearing, bi-musicality. This outcome is determined by both new religious thinking and the process of globalization. A significant principle behind this form of worship is the use of forms that are acceptable and accessible to all, which naturally leads to the development of modern forms. The material obtained does not demonstrate a connection with traditional religious music in general; rather, it constitutes an independent domain.

Keywords: Georgia, globalization, music, pentecostal

Procedia PDF Downloads 324
234 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears

Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally

Abstract:

Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack. Early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, to analyze the vibration behavior of gears, different approaches have been tried based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that can simulate the effect of tooth cracks on the resulting vibrations and hence permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only one single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including variable gear mesh stiffness and the effect of friction, but ignores the lubrication effect. The vibration simulation results for the gearbox were obtained via Matlab and Simulink and were found to be consistent with the results from previously published works.
The effect of a single crack with different severity levels was studied, and changes in the total mesh stiffness and the vibration response very similar to those found in previous studies were observed. The effect of crack length on various statistical time-domain parameters was considered, and the results show that these parameters are not equally sensitive to the crack percentage. Multiple cracks were then introduced at different locations, and the vibration response and the statistical parameters were obtained.
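Statistical time-domain parameters of the kind examined here, such as RMS, crest factor, and kurtosis, can be computed from a vibration signal as in the following generic sketch; it is an illustration, not the authors' code.

```python
import math

def time_domain_stats(signal):
    """Return (rms, crest_factor, kurtosis) for a sampled vibration
    signal. These condition indicators differ in how sensitive they
    are to crack severity: kurtosis responds strongly to impulsive
    peaks, while RMS tracks overall energy."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    rms = math.sqrt(sum(s * s for s in signal) / n)
    var = sum(c * c for c in centered) / n
    kurtosis = (sum(c ** 4 for c in centered) / n) / (var ** 2)
    crest = max(abs(s) for s in signal) / rms
    return rms, crest, kurtosis
```

Running such indicators over simulated signals with increasing crack length is how the unequal sensitivity noted above would typically be demonstrated.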

Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection

Procedia PDF Downloads 211
233 Investigation of Mechanical and Tribological Property of Graphene Reinforced SS-316L Matrix Composite Prepared by Selective Laser Melting

Authors: Ajay Mandal, Jitendar Kumar Tiwari, N. Sathish, A. K. Srivastava

Abstract:

A fundamental investigation is performed on the development of a graphene (Gr) reinforced stainless steel 316L (SS 316L) metal matrix composite via selective laser melting (SLM) in order to improve the specific strength and wear resistance of SS 316L. First, SS 316L powder and graphene were mixed in a fixed ratio using low-energy planetary ball milling. The milled powder was then subjected to the SLM process to fabricate composite samples at a laser power of 320 W and an exposure time of 100 µs. The prepared composite was mechanically tested (hardness and tensile tests) at ambient temperature, and the results indicate that the properties of the composite increased significantly with the addition of 0.2 wt. % Gr: increments of about 25% (from 194 to 242 HV) in hardness and 70% (from 502 to 850 MPa) in yield strength were obtained. Raman mapping and XRD were performed to examine the distribution of Gr in the matrix and its effect on carbide formation, respectively. The Raman mapping shows a uniform distribution of graphene inside the matrix. An electron backscatter diffraction (EBSD) map of the prepared composite was analyzed under FESEM in order to understand the microstructure and grain orientation. Due to the thermal gradient, elongated grains were observed along the building direction, and the grains become finer with the addition of Gr. Most mechanical components are subjected to several types of wear; it is therefore necessary to improve the wear properties of the component, and hence, apart from strength and hardness, the tribological properties of the composite were also measured under dry sliding conditions. The solid lubrication provided by Gr plays an important role during sliding, reducing the wear rate of the composite by up to 58%. The surface roughness of the worn surface is also reduced by up to 70%, as measured by 3D surface profilometry.
Finally, it can be concluded that SLM is an efficient method for fabricating cutting-edge metal matrix nanocomposites with reinforcements such as Gr, which are very difficult to fabricate through conventional manufacturing techniques. The prepared composite has superior mechanical and tribological properties and can be used for a wide variety of engineering applications. However, given the scarcity of literature in this domain, more experimental work, such as thermal property analysis, needs to be performed and is part of an ongoing study.

Keywords: selective laser melting, graphene, composite, mechanical property, tribological property

Procedia PDF Downloads 136
232 Comprehensive Geriatric Assessments: An Audit into Assessing and Improving Uptake on Geriatric Wards at King’s College Hospital, London

Authors: Michael Adebayo, Saheed Lawal

Abstract:

The Comprehensive Geriatric Assessment (CGA) is the multidimensional tool used to assess elderly, frail patients either on admission to hospital care or at the community level in primary care. It is designed to manage patients using a holistic approach. A Cochrane review of CGA use in 2011 found that the likelihood of a patient being alive and living in their own home post-discharge rises by 30%. RCTs have also found 10-15% reductions in readmission rates, as well as reductions in institutionalization, resource use, and costs. Past audit cycles at King's College Hospital, Denmark Hill had shown inconsistent evidence of CGA completion in patient discharge summaries (less than 50%). Junior doctors on the Health and Ageing (HAU) wards have struggled to sustain the efforts of past audit cycles due to the quick turnover in staff (four-month placements for trainees). This seventh cycle took a multi-faceted approach to solving this problem and creating lasting change. Methods: 1. We adopted multidisciplinary team (MDT) involvement to support doctors: MDT staff, e.g., nurses, physiotherapists, occupational therapists, and dieticians, were actively encouraged to fill in the CGA document. 2. We added a CGA pro-forma to 'Sunrise EPR' (the Trust computer system); these CGAs were to be included automatically in the discharge summary. 3. Prior to assessing uptake, we used a spot audit questionnaire to assess staff awareness and knowledge of what a CGA is. 4. We designed and placed posters highlighting the domains of the CGA and the MDT roles suited to each domain on the geriatric Health and Ageing (HAU) wards in the hospital. 5. We audited the percentage of discharge summaries that included a CGA and MDT role input. 6. We nominated ward champions from each multidisciplinary specialty on each ward to monitor and encourage colleagues to actively complete CGAs. 7. We initiated further education of ward staff on the importance of the CGA through discussion at board rounds and weekly multidisciplinary meetings. Outcomes: 1. The majority of respondents to our spot audit were aware of what a CGA was, but fewer had used the EPR document to complete one. 2. We found that CGAs were not being commenced for nearly 50% of patients discharged from the HAU wards and the Frailty Assessment Unit.

Keywords: comprehensive geriatric assessment, CGA, multidisciplinary team, quality of life, mortality

Procedia PDF Downloads 85
231 Insulin Receptor Substrate-1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) Gene Polymorphisms Associated with Type 2 Diabetes Mellitus in Eritreans

Authors: Mengistu G. Woldu, Hani Y. Zaki, Areeg Faggad, Badreldin E. Abdalla

Abstract:

Background: Type 2 diabetes mellitus (T2DM) is a complex, degenerative, and multi-factorial disease, which is culpable for huge mortality and morbidity worldwide. Even though relatively significant numbers of studies are conducted on the genetics domain of this disease in the developed world, there is huge information gap in the sub-Saharan Africa region in general and in Eritrea in particular. Objective: The principal aim of this study was to investigate the association of common variants of the Insulin Receptor Substrate 1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) genes with T2DM in the Eritrean population. Method: In this cross-sectional case control study 200 T2DM patients and 112 non-diabetes subjects were participated and genotyping of the IRS1 (rs13431179, rs16822615, 16822644rs, rs1801123) and TCF7L2 (rs7092484) tag SNPs were carries out using PCR-RFLP method of analysis. Haplotype analyses were carried out using Plink version 1.07, and Haploview 4.2 software. Linkage disequilibrium (LD), and Hardy-Weinberg equilibrium (HWE) analyses were performed using the Plink software. All descriptive statistical data analyses were carried out using SPSS (Version-20) software. Throughout the analysis p-value ≤0.05 was considered statistically significant. Result: Significant association was found between rs13431179 SNP of the IRS1 gene and T2DM under the recessive model of inheritance (OR=9.00, 95%CI=1.17-69.07, p=0.035), and marginally significant association found in the genotypic model (OR=7.50, 95%CI=0.94-60.06, p=0.058). The rs7092484 SNP of the TCF7L2 gene also showed markedly significant association with T2DM in the recessive (OR=3.61, 95%CI=1.70-7.67, p=0.001); and allelic (OR=1.80, 95%CI=1.23-2.62, p=0.002) models. Moreover, eight haplotypes of the IRS1 gene found to have significant association withT2DM (p=0.013 to 0.049). 
Assessments of the interactions of the rs13431179 and rs7092484 genotypes with various parameters demonstrated that high-density lipoprotein (HDL), low-density lipoprotein (LDL), waist circumference (WC), and systolic blood pressure (SBP) give the best T2DM onset-predicting models. Furthermore, genotypes of the rs7092484 SNP showed a significant association with various atherogenic indexes (atherogenic index of plasma, LDL/HDL, and CHOL/HDL), and Eritreans carrying the GG or GA genotypes were predicted to be more susceptible to the onset of cardiovascular disease. Conclusions: The results of this study suggest that IRS1 (rs13431179) and TCF7L2 (rs7092484) gene polymorphisms are associated with an increased risk of T2DM in Eritreans.
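The recessive-model odds ratios quoted above follow from a standard 2x2 genotype table. A minimal sketch of that computation (the counts below are hypothetical illustrations, not the study's actual data) with a Wald 95% confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = cases with risk genotype,    b = cases without,
    c = controls with risk genotype, d = controls without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for a recessive model (minor-allele homozygotes
# vs. all other genotypes); not the study's data.
or_, lo, hi = odds_ratio_ci(8, 192, 1, 111)
```

A wide interval such as the one produced here is typical when the risk genotype is rare in one group, which is why the study's CIs (e.g., 1.17-69.07) span an order of magnitude.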

Keywords: IRS1, SNP, TCF7L2, type 2 diabetes

Procedia PDF Downloads 225
230 Using Locus Equations for Berber Consonant Labiovelarization

Authors: Ali Benali, Djouher Leila

Abstract:

Labiovelarization of velar consonants and labials is a very widespread phenomenon. It is attested in all the major northern Berber dialects; only Tuareg lacks it entirely. But even within the large Berber-speaking regions of the north, it is very unstable: it may be completely absent in certain dialects (such as that of the Bougie region in Kabylie), and its extension and frequency can vary appreciably between the dialects that have it. Some dialects of Great Kabylia or of the Chleuh domain, for example, labiovelarize more than others from the same region. Thus, in Great Kabylia, the adjective "large" will be pronounced amqqwran by the At Yiraten and amqqran by the At Yanni, a few kilometers away. One of the problems these segments raise is deciding whether they are one phoneme or two. All the criteria used by linguists in this kind of case lead to the conclusion that they are single phonemes (a phoneme, and not a succession of two phonemes such as /k + w/). The phonetic and phonological criteria are moreover clearly confirmed by the morphological data, since in the system of verbal alternations these complex segments are treated as single phonemes: agem "to draw, to fetch water" and akwer "to steal" have exactly the same morphology as asem "to be jealous," arem "to taste," ames "to dirty" or afeg "to fly" - verbs with two radical consonants (type aCC). At the level of notation, both scientific and everyday, it is therefore necessary to represent the labiovelarized consonants by a single letter, possibly accompanied by a diacritic. In fact, actual practices are diverse. The scientific superscript representation does not seem adequate for current use because it is easy to realize only on a microcomputer. The Berber Documentation File used a small raised circle (as in n°) above the writing line: k°, g°..., which has the advantage of being easy to achieve, since it is part of general typographical conventions in Latin script and is present on a typewriter keyboard. 
Mouloud Mammeri, then the Berber Study Group of Vincennes (the review Tisuraf), and a majority of Kabyle practitioners over the last twenty years have used the succession "consonant + semi-vowel /w/" (Cw) on the same line of writing; for all the reasons explained previously, this practice is not a good solution and should be abandoned, especially as it particularizes Kabyle within the Berber ensemble. In this study, we were interested in two velar consonants, /g/ and /k/, and their labiovelarized counterparts /gw/ and /kw/ (we adopted the added "w" representation for ease of writing in graphical mode). The aim is to characterize these four consonants in order to see whether they have different places of articulation and whether they are distinct (whether the velars are distinct from their labiovelarized counterparts). This characterization is done using locus equations.
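A locus equation is an ordinary least-squares line relating the F2 frequency at the consonant-vowel transition onset to the F2 at the vowel midpoint; the slope and intercept pair then indexes place of articulation. A minimal sketch under that definition (the formant values below are hypothetical, not measurements from this study):

```python
def locus_equation(f2_vowel, f2_onset):
    """Fit F2_onset = k * F2_vowel + c by ordinary least squares.
    The slope k indexes the degree of coarticulation; (k, c) pairs
    help separate places of articulation."""
    n = len(f2_vowel)
    mx = sum(f2_vowel) / n
    my = sum(f2_onset) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(f2_vowel, f2_onset))
         / sum((x - mx) ** 2 for x in f2_vowel))
    return k, my - k * mx

# Hypothetical F2 values (Hz) for CV tokens of one consonant.
k, c = locus_equation([1000, 1500, 2000], [1100, 1350, 1600])
```

Comparing the fitted (k, c) for /g/ vs. /gw/ and /k/ vs. /kw/ is then one way to test whether the labiovelarized consonants occupy distinct articulation places.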

Keywords: Berber consonants, labiovelarization, locus equations, acoustical characterization, Kabylian dialect, Algerian language

Procedia PDF Downloads 76
229 Estimating the Relationship between Education and Political Polarization over Immigration across Europe

Authors: Ben Tappin, Ryan McKay

Abstract:

The political left and right appear to disagree not only over questions of value but also over questions of fact—over what is true “out there” in society and the world. Alarmingly, a large body of survey data collected during the past decade suggests that this disagreement tends to be greatest among the most educated and most cognitively sophisticated opposing partisans. In other words, the data show that these individuals display the widest political polarization in their reported factual beliefs. Explanations of this polarization pattern draw heavily on cultural and political factors; yet the large majority of the evidence originates from one cultural and political context—the United States, a country with a rather unique cultural and political history. One consequence is that widening political polarization conditional on education and cognitive sophistication may be due to idiosyncratic cultural, political or historical factors endogenous to US society—rather than a more general, international phenomenon. We examined widening political polarization conditional on education across Europe, over a topic that is culturally and politically contested: immigration. To do so, we analyzed data from the European Social Survey, a premier survey of countries in and around the European area conducted biennially since 2002. Our main results are threefold. First, we see widening political polarization conditional on education over beliefs about the economic impact of immigration. The foremost countries showing this pattern are the most influential in Europe: Germany and France. However, we also see heterogeneity across countries, with some—such as Belgium—showing no evidence of such polarization. Second, we find that widening political polarization conditional on education is a product of sorting. 
That is, highly educated partisans exhibit stronger within-group consensus in their beliefs about immigration—the data do not support the view that the more educated partisans are more polarized simply because the less educated fail to adopt a position on the question. Third, and finally, we find some evidence that shocks to the political climate of countries in the European area—for example, the “refugee crisis” of summer 2015—were associated with a subsequent increase in political polarization over immigration conditional on education. The largest increase was observed in Germany, which was at the centre of the so-called refugee crisis in 2015. These results reveal numerous insights: they show that widening political polarization conditional on education is not restricted to the US or native English-speaking culture; that such polarization emerges in the domain of immigration; that it is a product of within-group consensus among the more educated; and, finally, that exogenous shocks to the political climate may be associated with subsequent increases in political polarization conditional on education.
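The polarization measure described above—the between-party gap in mean beliefs within each education group—can be sketched as follows (the schema and toy scores are illustrative, not the European Social Survey's variable coding):

```python
from collections import defaultdict
from statistics import mean

def polarization_by_education(rows):
    """rows: (education_level, party, belief_score) tuples.
    Returns, per education level, the absolute gap between the
    mean belief of 'left' and 'right' partisans."""
    groups = defaultdict(lambda: defaultdict(list))
    for edu, party, belief in rows:
        groups[edu][party].append(belief)
    return {edu: abs(mean(g["left"]) - mean(g["right"]))
            for edu, g in groups.items()}

# Illustrative toy data: belief scores on a 0-10 scale.
gaps = polarization_by_education([
    ("high", "left", 8), ("high", "left", 9),
    ("high", "right", 2), ("high", "right", 3),
    ("low", "left", 6), ("low", "right", 5),
])
```

A larger gap among the highly educated than among the less educated is the "widening polarization conditional on education" pattern the abstract reports; distinguishing sorting from non-response additionally requires examining within-group variance, which this sketch omits.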

Keywords: beliefs, Europe, immigration, political polarization

Procedia PDF Downloads 147
228 Sensing Study through Resonance Energy and Electron Transfer between Förster Resonance Energy Transfer Pair of Fluorescent Copolymers and Nitro-Compounds

Authors: Vishal Kumar, Soumitra Satapathi

Abstract:

Förster Resonance Energy Transfer (FRET) is a powerful technique used to probe close-range molecular interactions. Physically, the FRET phenomenon manifests as a dipole–dipole interaction between closely juxtaposed fluorescent molecules (10–100 Å). Our effort is to employ this FRET technique to make a prototype device for highly sensitive detection of environmental pollutants. Among the most common environmental pollutants, nitroaromatic compounds (NACs) are of particular interest because of their durability and toxicity. Consequently, sensitive and selective detection of small amounts of nitroaromatic explosives, in particular 2,4,6-trinitrophenol (TNP), 2,4-dinitrotoluene (DNT) and 2,4,6-trinitrotoluene (TNT), has been a critical challenge, owing to the increasing threat of explosive-based terrorism and the need for environmental monitoring of drinking and waste water. In addition, the excessive use of TNP in several other areas, such as burn ointments, pesticides, and the glass and leather industries, has resulted in environmental accumulation that eventually contaminates soil and aquatic systems. To date, a number of elegant methods, including fluorimetry, gas chromatography, and mass, ion-mobility and Raman spectrometry, have been successfully applied to explosive detection. Among these, fluorescence-quenching methods based on the FRET mechanism show good assembly flexibility, high selectivity and high sensitivity. Here, we report a FRET-based sensor system for the highly selective detection of NACs such as TNP, DNT and TNT. The sensor system is composed of a copolymer Poly [(N,N-dimethylacrylamide)-co-(Boc-Trp-EMA)] (RP) bearing a tryptophan derivative in the side chain as donor and a dansyl-tagged copolymer P(MMA-co-Dansyl-Ala-HEMA) (DCP) as acceptor. Initially, the inherent fluorescence of the RP copolymer is quenched by non-radiative energy transfer to DCP, which only happens once the two molecules are within the Förster critical distance (R0). 
The excellent spectral overlap (Jλ = 6.08×10¹⁴ nm⁴ M⁻¹ cm⁻¹) between the donor's (RP) emission profile and the acceptor's (DCP) absorption profile makes them an efficient FRET pair, as confirmed by the high rate of energy transfer from RP to DCP (0.87 ns⁻¹) and by lifetime measurements with time-correlated single photon counting (TCSPC), which validate a 64% FRET efficiency. This FRET pair exhibited a specific fluorescence response to NACs such as DNT, TNT and TNP, with LODs of 5.4, 2.3 and 0.4 µM, respectively. The detection of NACs occurs with high sensitivity via photoluminescence quenching of the FRET signal, induced by photo-induced electron transfer (PET) from the electron-rich FRET pair to the electron-deficient NAC molecules. The estimated Stern-Volmer constant (KSV) values for DNT, TNT and TNP are 6.9 × 10³, 7.0 × 10³ and 1.6 × 10⁴ M⁻¹, respectively. The mechanistic details of the molecular interactions, established by time-resolved fluorescence, steady-state fluorescence and absorption spectroscopy, confirm that the sensing process is of mixed type, i.e., both dynamic and static quenching, as the lifetime of the FRET system (0.73 ns) is reduced to 0.55, 0.57 and 0.61 ns by DNT, TNT and TNP, respectively. In summary, the simplicity and sensitivity of this novel FRET sensor open up the possibility of designing optical sensors for various NACs on a single platform, toward a multimodal sensor for environmental monitoring and future field-based studies.
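The two quantities underlying the analysis—FRET efficiency from donor lifetimes (E = 1 − τ_DA/τ_D) and the Stern-Volmer constant from quenching (F0/F = 1 + K_SV·[Q])—can be sketched as follows (the numeric inputs are hypothetical, not the measured values of this work):

```python
def fret_efficiency(tau_da, tau_d):
    """E = 1 - tau_DA / tau_D: donor lifetime with vs. without acceptor."""
    return 1 - tau_da / tau_d

def stern_volmer_ksv(f0, f, quencher_conc):
    """K_SV from a single quenching point: F0/F = 1 + K_SV * [Q]."""
    return (f0 / f - 1) / quencher_conc

# Hypothetical inputs chosen to land near the magnitudes reported above.
e = fret_efficiency(0.36, 1.0)          # 64% efficiency
ksv = stern_volmer_ksv(2.6, 1.0, 1e-4)  # K_SV on the order of 10^4 M^-1
```

In practice K_SV is taken as the slope of a linear fit of F0/F against several quencher concentrations rather than a single point, and a mixed static/dynamic mechanism shows up as upward curvature in that plot.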

Keywords: FRET, nitroaromatic, Stern-Volmer constant, tryptophan and dansyl tagged copolymer

Procedia PDF Downloads 134
227 Poultry in Motion: Text Mining Social Media Data for Avian Influenza Surveillance in the UK

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Background: Avian influenza, more commonly known as bird flu, is a viral zoonotic respiratory disease originating in various species of poultry, including pets and migratory birds. Researchers have argued that the accessibility of health information online, in addition to the low-cost data collection methods the internet provides, has revolutionized the ways in which epidemiological and disease surveillance data are utilized. This paper examines the feasibility of using internet data sources, such as Twitter and livestock forums, for the early detection of avian flu outbreaks, through the use of text mining algorithms and social network analysis. Methods: Social media mining was conducted on Twitter between 01/01/2021 and 31/12/2021 via the Twitter API in Python. The results were filtered first by hashtags (#avianflu, #birdflu) and word occurrences (avian flu, bird flu, H5N1), and then refined by location to include only results from within the UK. This text was analyzed in a time-series manner to determine keyword frequencies, and topic modeling was applied to uncover insights in the text prior to a confirmed outbreak. Further analysis examined clinical signs (e.g., swollen head, blue comb, dullness) within the time series prior to the avian flu outbreak confirmed by the Animal and Plant Health Agency (APHA). Results: Increased Google search results and avian flu-related tweets correlated in time with the confirmed cases. Topic modeling uncovered clusters of word occurrences relating to livestock biosecurity, disposal of dead birds, and prevention measures. Conclusions: Text mining social media data can prove useful for analysing discussed topics for epidemiological surveillance purposes, especially given the lack of applied research in the veterinary domain. 
However, the small sample size of tweets in certain weekly time periods makes it difficult to obtain statistically robust results, and the data contain a great amount of textual noise.
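The keyword-frequency time series described in the Methods can be sketched as a weekly count of matching tweets (the keyword list mirrors the filters named above, but the helper and example tweets are illustrative, not the study's pipeline):

```python
from collections import Counter
from datetime import date

KEYWORDS = ("avian flu", "bird flu", "h5n1", "#avianflu", "#birdflu")

def weekly_keyword_counts(tweets):
    """tweets: iterable of (date, text) pairs. Returns a Counter keyed
    by (ISO year, ISO week) counting tweets matching any keyword."""
    counts = Counter()
    for day, text in tweets:
        if any(k in text.lower() for k in KEYWORDS):
            counts[day.isocalendar()[:2]] += 1  # (year, week)
    return counts

# Hypothetical tweets, not real surveillance data.
series = weekly_keyword_counts([
    (date(2021, 11, 1), "Suspected bird flu near the coast"),
    (date(2021, 11, 2), "More #birdflu reports today"),
    (date(2021, 11, 9), "Nothing to see here"),
])
```

Such a series can then be compared against APHA confirmation dates to look for the lead-lag relationship the abstract describes.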

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, avian influenza, social media

Procedia PDF Downloads 105
226 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions

Authors: Erva Akin

Abstract:

The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe their intellectual property rights. In order to overcome the copyright hurdles to the sharing, access and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data. 
The first is to introduce a broad exception for text and data mining, applicable either mandatorily or for commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from US into EU law. Both solutions aim to give AI developers more space to operate and to encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and freedom of business operation must be prioritised.

Keywords: artificial intelligence, copyright, data governance, machine learning

Procedia PDF Downloads 83
225 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacement calculated at the nodes is sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, affecting computational time in the shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in the stress computation can be achieved. Generally, the major optimal stress points are located in the interior of the element, but some points have also been located on the boundary of the element where stresses are fairly accurate compared to the nodal values. It follows that there exist unique points within the element where the stresses have higher accuracy than at other points. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify it against results from numerical experimentation. Results for the quadratic 9-noded serendipity element are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate optimal stress locations, in local coordinates, at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified and authenticated through numerical experimentation. 
For the numerical experimentation, five engineering problems were identified, and the numerical results of the 9-noded element were compared with those obtained using the same order of 25-noded quadratic Lagrangian elements, which are considered the standard. Root-mean-square errors were then plotted with respect to various locations within the elements as well as on the boundaries, and conclusions were drawn. The numerical verification shows that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. Stresses calculated along the boundary segment enclosed by the ±0.577 midpoints are also very good, with very small error. When sampling points move away from these points, the error increases rapidly. Thus, it is established that there are unique points on the element boundary where stresses are accurate, which can be utilized in solving various engineering problems and is also useful in shape optimization.
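As a side note, the 0.577 locations reported above are numerically the 2-point Gauss-Legendre abscissae ±1/√3, the classical superconvergent (Barlow) stress-sampling points for quadratic elements. A minimal numeric check (an illustration, not the authors' procedure):

```python
import math

# 2-point Gauss-Legendre abscissa on [-1, 1]: 1/sqrt(3) ≈ 0.5774,
# matching the 0.577 optimal stress distance reported above.
g = 1 / math.sqrt(3)

# 2x2 tensor-product points in the element's local (xi, eta) frame.
gauss_points = [(s * g, t * g) for s in (-1, 1) for t in (-1, 1)]
```

Sampling stresses at these points and extrapolating to the nodes is the usual way such superconvergent locations are exploited in practice.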

Keywords: finite elements, Lagrangian, optimal stress location, serendipity

Procedia PDF Downloads 105
224 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design

Authors: Emiliano Matta

Abstract:

Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation has inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed in which damping originates from the variable tangential friction force that develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers: its equivalent damping ratio is independent of the amplitude of oscillation, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not require the installation of dampers. 
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means to realize systems provided with amplitude-independent damping.
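The friction pattern described above—a coefficient proportional to the modulus of the surface gradient—can be sketched for an assumed paraboloid surface (the surface shape, radii and constant below are illustrative assumptions, not the paper's design values):

```python
import math

def friction_coefficient(x, y, rx, ry, c):
    """mu(x, y) = c * |grad z| for an assumed paraboloid surface
    z = x**2 / (2*rx) + y**2 / (2*ry), whose principal radii rx, ry
    would be set by the bidirectional tuning; grad z = (x/rx, y/ry)."""
    return c * math.hypot(x / rx, y / ry)
```

With this pattern the friction force scales linearly with displacement near the origin, which is what makes the dissipative model homogeneous and the equivalent damping ratio amplitude-independent.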

Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers

Procedia PDF Downloads 148