Search results for: universal generating function (UMGF)
1023 De Novo Design of Functional Metalloproteins for Biocatalytic Reactions
Authors: Ketaki D. Belsare, Nicholas F. Polizzi, Lior Shtayer, William F. DeGrado
Abstract:
Nature utilizes metalloproteins to perform chemical transformations with activities and selectivities that have long inspired design principles in synthetic and biological systems. The chemical reactivities of metalloproteins are directly linked to local environment effects produced by the protein matrix around the metal cofactor. A complete understanding of how the protein matrix provides these interactions would allow for the design of functional metalloproteins. De novo computational design of proteins has been used successfully to design active sites that bind metal centers such as di-iron, zinc, and copper-containing cofactors; however, precisely designing active sites that bind small-molecule ligands (e.g., substrates) alongside metal cofactors remains a challenge in the field. The de novo computational design of a functional metalloprotein that contains a purposefully designed substrate binding site would allow for precise control of chemical function and reactivity. Our research strategy seeks to elucidate the design features necessary to bind the cofactor protoporphyrin IX (hemin) in close proximity to a substrate binding pocket in a four-helix bundle. First- and second-shell interactions are computationally designed to control the orientation, electronic structure, and reaction pathway of the cofactor and substrate. The design began with a parameterized helical backbone that positioned a single histidine residue (as an axial ligand) to receive a second-shell H-bond from a threonine on the neighboring helix. The metallo-cofactor, hemin, was then manually placed in the binding site. A structural feature, a pi-bulge, was introduced to give the substrate access to the protoporphyrin IX. These de novo metalloproteins are currently being tested for activity towards hydroxylation and epoxidation. The de novo designed protein shows hydroxylation of aniline to 4-aminophenol.
This study will help provide structural information of utmost importance in understanding how de novo computational design variables impact the functional activities of a protein.
Keywords: metalloproteins, protein design, de novo protein, biocatalysis
Procedia PDF Downloads 151
1022 To Evaluate the Function of Cardiac Viability After Administration of I131
Authors: Baburao Ganpat Apte, Gajodhar
Abstract:
Introduction: Idiopathic Parkinson’s disease (PD) is the most common neurodegenerative movement disorder. Early PD may present a diagnostic challenge, with a broad differential diagnosis including conditions that are not associated with striatal dopamine deficiency. This test was performed using a special radioactive precursor made available through our logistics. L-6-[131I]Iodo-3,4-trihydroxyphenylalanine (131I-TOPA) is a positron emission tomography (PET) agent that measures the uptake of dopamine precursors for the assessment of presynaptic dopaminergic integrity, and it has been shown to accurately reflect presynaptic dopaminergic dysfunction in patients suffering from monoaminergic disturbances in PD. Both qualitative and quantitative analyses of the scans were performed. Early clinical diagnosis alone may not be accurate, which reinforces the importance of functional imaging targeting the pathology of the disease process. The patients’ medical records were then assessed for length of follow-up, response to levodopa, clinical course of illness, and symptoms at the time of 131I-TOPA PET. A retrospective analysis was carried out for all patients who underwent a 131I-TOPA PET brain scan for motor symptoms suspicious for PD between 2000 and 2006. The eventual diagnosis by the referring neurologist, movement therapist, or physiotherapist was used as the reference standard for further analysis. In this study, our goal was to describe our local experience and determine the accuracy of 131I-TOPA PET for the diagnosis of PD. We studied a total of 48 patients. Of the 48 scans, one was a false negative, 40 were true positives, and 7 were true negatives. The resulting values are sensitivity 90.4% (95% CI: 71.3%-100%), specificity 100% (95% CI: 58.0%-100%), PPV 100% (95% CI: 75.7%-100%), and NPV 80.5% (95% CI: 48.5%-92.5%).
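The accuracy figures above are derived from a 2x2 confusion matrix. A minimal sketch of that computation follows, using the reported counts (40 true positives, 1 false negative, 7 true negatives) and assuming 0 false positives; the counts in the abstract do not reproduce its quoted percentages exactly, so the outputs here are illustrative only.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic accuracy measures from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Counts as reported in the abstract; fp=0 is an assumption.
sens, spec, ppv, npv = diagnostic_metrics(tp=40, fn=1, tn=7, fp=0)
```

With zero false positives, specificity and PPV are both 100%, matching the abstract; sensitivity and NPV come out as 40/41 and 7/8 for these counts.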
Results: Twenty-three patients were found in the initial query, and 4 were excluded (2 with an uncertain diagnosis, 2 with inadequate follow-up). Twenty-eight patients (28 scans) remained, with 15 males (62%) and 8 females (30%). All patients had a clinical follow-up of at least 3 years; the median length of follow-up was 5.5 years (range: 2-8 years). The median age at scan time was 51.2 years (range: 35-75).
Keywords: 18F-TOPA, PET/CT, Parkinson’s disease, cardiac
Procedia PDF Downloads 27
1021 Neuroprotective Effect of Chrysin on Thioacetamide-Induced Hepatic Encephalopathy in Rats: Role of Oxidative Stress and TLR-4/NF-κB Pathway
Authors: S. A. El-Marasy, S. A. El Awdan, R. M. Abd-Elsalam
Abstract:
This study aimed to investigate the possible neuroprotective effect of chrysin on thioacetamide (TAA)-induced hepatic encephalopathy (HE) in rats. The effect of chrysin on motor impairment, cognitive deficits, oxidative stress, neuroinflammation, apoptosis, and histopathological damage was also assessed. Male Wistar rats were randomly allocated into five groups. The first group received the vehicle (distilled water) for 21 days and served as the normal group, while the second received an intraperitoneal dose of TAA (200 mg/kg) on three alternate days during the third week of the experiment to induce HE and served as the control group. The other three groups were orally administered chrysin (25, 50, or 100 mg/kg) for 21 days; starting from day 17, these rats received an intraperitoneal dose of TAA (200 mg/kg) on three alternate days. Behavioral, biochemical, histopathological, and immunohistochemical analyses were then performed. Chrysin reversed TAA-induced motor incoordination in the rotarod test and cognitive deficits in the object recognition test (ORT), attenuated serum ammonia and hepatic liver enzymes, reduced malondialdehyde (MDA), elevated reduced glutathione (GSH), and reduced nuclear factor kappa B (NF-κB), tumor necrosis factor-alpha (TNF-α), and interleukin-6 (IL-6) brain contents. Chrysin administration also reduced Toll-like receptor 4 (TLR-4) gene expression, caspase-3 protein expression, hepatic necrosis, and astrocyte swelling.
This study shows that chrysin exerted a neuroprotective effect in TAA-induced HE rats, evidenced by improvement of cognitive deficits, motor incoordination, and histopathological changes such as astrocyte swelling and vacuolization (hallmarks of HE), via reducing hyperammonemia and ameliorating hepatic function, in addition to its antioxidant, TLR-4/NF-κB-pathway-inactivating, and anti-apoptotic effects.
Keywords: chrysin, hepatic encephalopathy, oxidative stress, rats, thioacetamide, TLR4/NF-κB pathway
Procedia PDF Downloads 161
1020 T Cell Immunity Profile in Pediatric Obesity and Asthma
Authors: Mustafa M. Donma, Erkut Karasu, Burcu Ozdilek, Burhan Turgut, Birol Topcu, Burcin Nalbantoglu, Orkide Donma
Abstract:
The mechanisms underlying the association between obesity and asthma may be related to a decreased immunological tolerance induced by defective function of regulatory T cells (Tregs). The aim of this study is to establish the potential link between these diseases and CD4+ CD25+ FoxP3+ Tregs as well as T helper cells (Ths) in children. This is a prospective case-control study. Obese (n=40), asthmatic (n=40), asthmatic obese (n=40), and healthy children (n=40) without any acute or chronic diseases were included in this study. Obese children were evaluated according to WHO criteria, and asthmatic patients were chosen based on GINA criteria. Parents were asked to fill out a questionnaire, and informed consent forms were obtained. Blood samples were labeled with CD4, CD25, and FoxP3 markers in order to determine Tregs and Ths by flow cytometry. Statistical analyses were performed, with p≤0.05 chosen as the threshold for significance. Tregs, which exhibit an anti-inflammatory nature, were significantly lower in the obese (0.16%; p≤0.001), asthmatic (0.25%; p≤0.01), and asthmatic obese (0.29%; p≤0.05) groups than in the control group (0.38%). Ths were higher in the asthma group than in the control (p≤0.01) and obese (p≤0.001) groups. T cell immunity plays important roles in obesity and asthma pathogeneses. The decreased numbers of Tregs found in obese, asthmatic, and asthmatic obese children may help elucidate some questions in the pathophysiology of these diseases. For HOMA-IR levels, no significant difference was noted between the control and obese groups, but statistically higher values were found for obese asthmatics. The values obtained in all groups were below the critical cut-off points. This finding makes the statistically significant difference observed between the Tregs of the obese, asthmatic, obese asthmatic, and control groups much more valuable. These findings will be useful in the diagnosis and treatment of these disorders, and future studies are needed.
The production and propagation of Tregs may be promising in alternative asthma and obesity treatments.
Keywords: asthma, flow cytometry, pediatric obesity, T cells
Procedia PDF Downloads 346
1019 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply
Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele
Abstract:
In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal operational planning of decentralized power and heat generators and storage systems; this includes linear and mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first combines n similar installation types into one aggregated unit, which is described by the same constraints and objective-function terms as a single plant. This reduces the number of decision variables per time step, and thus the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization such that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a higher or lower temporal resolution of the optimization. Both approaches are examined for the resulting calculation time as well as for optimality.
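The aggregation idea can be illustrated with a toy single-time-step dispatch problem: for n identical units, one integer "units running" variable (n+1 states) replaces n on/off binaries (2^n states) without changing the optimum. A minimal brute-force sketch, with invented unit parameters:

```python
from itertools import product

def min_cost_disaggregated(n, cap, cost, demand):
    """Enumerate all on/off combinations of n identical units (2**n options)."""
    best = None
    for states in product([0, 1], repeat=n):
        if sum(states) * cap >= demand:           # demand must be covered
            c = sum(states) * cost
            best = c if best is None or c < best else best
    return best

def min_cost_aggregated(n, cap, cost, demand):
    """One integer 'units running' variable (n+1 options) replaces n binaries."""
    best = None
    for k in range(n + 1):
        if k * cap >= demand:
            c = k * cost
            best = c if best is None or c < best else best
    return best

# Both formulations find the same optimal cost for identical units.
d = min_cost_disaggregated(5, cap=10, cost=3, demand=25)
a = min_cost_aggregated(5, cap=10, cost=3, demand=25)
```

The second optimization described in the paper would then assign the aggregated unit's output back to individual plants, which is trivial here because the units are interchangeable.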
Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation of both processes with regard to calculation duration and optimality.
Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant
Procedia PDF Downloads 178
1018 New Roles of Telomerase and Telomere-Associated Proteins in the Regulation of Telomere Length
Authors: Qin Yang, Fan Zhang, Juan Du, Chongkui Sun, Krishna Kota, Yun-Ling Zheng
Abstract:
Telomeres are specialized structures at chromosome ends consisting of tandem repetitive DNA sequences [(TTAGGG)n in humans] and associated proteins, which are necessary for telomere function. Telomere lengths are tightly regulated within a narrow range in normal human somatic cells, and this regulation underlies cellular senescence and aging. Previous studies have focused extensively on how short telomeres are extended and have demonstrated that telomerase plays a central role in telomere maintenance by elongating short telomeres. However, the molecular mechanisms regulating excessively long telomeres are unknown. Here, we found that the telomerase enzymatic component hTERT plays a dual role in the regulation of telomere length. Analysis of single telomere alterations at each chromosomal end led to the discovery that hTERT shortens excessively long telomeres and elongates short telomeres simultaneously, thus maintaining the optimal telomere length at each chromosomal end for efficient protection. hTERT-mediated telomere shortening removes large segments of telomeric DNA rapidly without inducing telomere dysfunction foci or affecting cell proliferation; it is therefore mechanistically distinct from rapid telomere deletion. We found that expression of hTERT generates telomeric circular DNA, suggesting that telomere homologous recombination may be involved in this shortening process. Moreover, hTERT-mediated telomere shortening requires hTERT's enzymatic activity, but the telomerase RNA component hTR is not involved. Furthermore, the shelterin protein TPP1 interacts with hTERT and recruits it to telomeres to mediate telomere shortening. In addition, the telomere-associated proteins DKC1 and TCAB1 also play roles in this process. This novel hTERT-mediated telomere shortening mechanism exists not only in cancer cells but also in primary human cells.
Thus, the hTERT-mediated telomere shortening is expected to shift the paradigm on current molecular models of telomere length maintenance, with wide-reaching consequences in the cancer and aging fields.
Keywords: aging, hTERT, telomerase, telomeres, human cells
Procedia PDF Downloads 427
1017 Pattern of Anisometropia, Management and Outcome of Anisometropic Amblyopia
Authors: Husain Rajib, T. H. Sheikh, D. G. Jewel
Abstract:
Background: Amblyopia is a frequent cause of monocular blindness in children. It can be a unilateral or bilateral reduction of best corrected visual acuity associated with decrements in visual processing, accommodation, motility, spatial perception, or spatial projection. Anisometropia is an important risk factor for amblyopia: it develops when unequal refractive error causes the image to be blurred in the critical developmental period, with central inhibition of the visual signal originating from the affected eye, and is associated with significant visual problems including aniseikonia, strabismus, and reduced stereopsis. Methods: This is a prospective hospital-based study of newly diagnosed amblyopia seen at the pediatric clinic of Chittagong Eye Infirmary & Training Complex. Fifty anisometropic amblyopia subjects were examined, and a questionnaire was piloted. Included were all patients diagnosed with refractive amblyopia between 3 and 13 years of age, without previous amblyopia treatment, and whose parents were willing to participate in the study; patients diagnosed with strabismic amblyopia were excluded. Patients were first given the best correction for a month. When the visual acuity in the amblyopic eye did not improve over that month, occlusion treatment was started. Occlusion was done daily for 6-8 hours (full time) together with vision therapy and was carried out for 3 months. Results: In this study, about 8% of subjects had anisometropia from myopia, 18% from hyperopia, and 74% from astigmatism. The initial mean visual acuity was 0.74 ± 0.39 LogMAR, and after amblyopia therapy with active vision therapy the mean visual acuity was 0.34 ± 0.26 LogMAR. About 94% of subjects improved by at least two lines. The depth of amblyopia was associated with the type of anisometropic refractive error and the magnitude of anisometropia (p<0.005). In this study, 10% of subjects had mild amblyopia, 64% moderate, and 26% severe.
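As a rough check on the reported means: on a LogMAR chart, each line corresponds to 0.1 LogMAR, and lower values mean better acuity, so the drop from 0.74 to 0.34 LogMAR is about a four-line gain. A small sketch:

```python
def logmar_lines_gained(va_before, va_after, line_step=0.1):
    """Chart lines gained; each line is ~0.1 LogMAR, lower LogMAR = better acuity."""
    return round((va_before - va_after) / line_step)

# Mean pre- and post-treatment acuities reported in the abstract.
lines = logmar_lines_gained(0.74, 0.34)
```

This is a group-level average; the "at least two lines" figure in the abstract refers to individual subjects.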
Binocular function also decreases with the magnitude of anisometropia. Conclusion: Anisometropic amblyopia is an important concern in the pediatric age group because it can lead to visual impairment. Occlusion therapy with at least one instructed hour of active visual activity practiced outside school hours was effective in anisometropic amblyopes who were diagnosed at the age of 8 years and older, and the patients complied well with the treatment.
Keywords: refractive error, anisometropia, amblyopia, strabismic amblyopia
Procedia PDF Downloads 276
1016 The Visualization of Hydrological and Hydraulic Models Based on the Platform of Autodesk Civil 3D
Authors: Xiyue Wang, Shaoning Yan
Abstract:
Cities in China today are faced with an increasingly serious river ecological crisis accompanying urbanization: waterlogging on account of the fragmented urban natural hydrological system, and the limited ecological function of the hydrological system caused by destruction of the water system and the waterfront ecological environment. Additionally, the eco-hydrological processes of rivers are affected by various environmental factors, which are more complex in an urban context. Therefore, efficient hydrological monitoring and analysis tools, and accurate and visual hydrological and hydraulic models, are becoming a more important basis for decision-makers and an important way for landscape architects to solve urban hydrological problems and formulate sustainable, forward-looking schemes. This study mainly introduces a river and flood analysis model based on the Autodesk Civil 3D platform. Taking the Luanhe River in Qian'an City, Hebei Province, as an example, 3D models of the landform, river, embankment, shoal, pond, underground stream, and other land features were first built, with which water transfer simulation analysis, river floodplain analysis, and river ecology analysis were carried out; ultimately, real-time visualized simulation and analysis of rivers in various hypothetical scenarios were realized. Through the establishment of a digital hydrological and hydraulic model, the hydraulic data can be accurately and intuitively simulated, which provides a basis for the design of a rational water system and a benign urban ecological system. However, the hydrological and hydraulic model based on Autodesk Civil 3D has its limitations: interaction between the model and other data and software is unfavorable, and the huge amount of 3D data and the lack of basic data restrict its accuracy and range of application.
The hydrological and hydraulic model based on the Autodesk Civil 3D platform offers a convenient and intelligent tool for urban planning and monitoring, and a solid basis for further urban research and design.
Keywords: visualization, hydrological and hydraulic model, Autodesk Civil 3D, urban river
Procedia PDF Downloads 297
1015 Speaking of Genocide: Lithuanian 'Occupation' Museums and Foucault's Discursive Formation
Authors: Craig Wight
Abstract:
Tourism visits to sites associated, to varying degrees, with death and dying have for some time inspired academic debate and research into what has come to be popularly described as 'dark tourism'. Research to date has mobilised various social-scientific methodologies to understand issues such as the motivations of visitors to consume dark tourism experiences and visitor interpretations of the various narratives that are part of the consumption experience. This thesis offers an alternative conceptual perspective for carrying out research into dark tourism by presenting a discourse analysis of Lithuanian occupation-themed museums using Foucault's concept of 'discursive formation' from the 'Archaeology of Knowledge'. A constructivist methodology is therefore applied to locate the rhetorical representations of Lithuanian and Jewish subject positions and to identify the objects of discourse that are produced in five museums interpreting a historical era defined by occupation, the persecution of people, and genocide. The discourses and consequent cultural function of these museums are examined, and the key finding of the research proposes that they authorise a particular Lithuanian individualism which marginalises the Jewish subject position and its related objects of discourse into abstraction. The thesis suggests that these museums create the possibility of undermining the ontological stability of the Holocaust and of the Jewish-Lithuanian subject, which is produced as an anomalous, 'non-Lithuanian' cultural reference point. As with any Foucauldian archaeological research, it cannot be offered as something 'complete', since it captures only a partial field, or snapshot, of knowledge, bound to a specific temporal and spatial context. The discourses that have been identified are perhaps part of a more elusive 'positivity' salient across a number of cultural and political surfaces which are ripe for a similar analytical approach in future.
It is hoped that the study will motivate others to follow a discourse-analytical approach to research in order to further understand the critical role of museums in public culture when it comes to shaping knowledge about 'inconvenient' pasts.
Keywords: genocide heritage, Foucault, Lithuanian tourism, discursive formation
Procedia PDF Downloads 232
1014 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books' Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant carriers of cultural heritage, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision, and breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis and image fusion system. First, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose to adjust the layout structure of the foreground content; finally, we achieved our goal by fusing the rearranged foreground texts and the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen, and Seal scripts were selected as our training sets. The performance of different fine-tuned models was then gradually improved by adjusting the DCGAN model's parameters as well as its structure. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
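The FID criterion used above compares the Gaussian statistics of real and generated image features, with lower scores indicating better generation quality. A sketch of the closed-form distance, reduced to the univariate case for clarity (the image-space FID applies the same formula to multivariate Inception-network statistics):

```python
import math

def fid_univariate(mu1, var1, mu2, var2):
    """Frechet distance between two 1-D Gaussians:
    (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2).
    Lower is better; identical distributions score 0."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

# Identical statistics give 0; the score grows as the statistics diverge.
same = fid_univariate(0.0, 1.0, 0.0, 1.0)
far = fid_univariate(0.0, 1.0, 3.0, 1.0)
```

In practice the means and covariances are computed from Inception activations over thousands of real and generated images, so this scalar version is only a sketch of the metric's shape.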
Procedia PDF Downloads 157
1013 A Combined Fiber-Optic Surface Plasmon Resonance and Ta₂O₅:rGO Nanocomposite Synergistic Scheme for Trace Detection of Insecticide Fenitrothion
Authors: Ravi Kant, Banshi D. Gupta
Abstract:
The unbridled application of insecticides to enhance agricultural yield has become a matter of grave concern to both the environment and human health, and thus poses a potential threat to sustainable development. Fenitrothion is an extensively used organophosphate insecticide whose residues are reported to be extremely toxic to birds, humans, and aquatic life. A sensitive, swift, and accurate detection protocol for fenitrothion is thus in high demand. In this work, we report an SPR-based fiber-optic sensor for the detection of fenitrothion, in which a nanocomposite arrangement of Ta₂O₅ and reduced graphene oxide (rGO) (Ta₂O₅:rGO) decorated on the silver-coated unclad core region of an optical fiber forms the sensing channel. A nanocomposite arrangement synergistically integrates the properties of the components involved and consequently furnishes a conducive framework for sensing applications. The modification of the dielectric function of the sensing layer on exposure to fenitrothion solutions of diverse concentrations forms the sensing mechanism; this modification is reflected as a shift in the resonance wavelength. Experimental variables such as the concentration of rGO in the nanocomposite configuration, the dip time of the silver-coated fiber-optic probe for deposition of the sensing layer, and the influence of pH on the performance of the sensor have been optimized to extract the best performance. SPR studies on the optimized sensing probe reveal the high sensitivity, wide operating range, and good reproducibility of the fabricated sensor, which unveil the promising utility of the Ta₂O₅:rGO nanocomposite framework for developing an efficient detection methodology for fenitrothion.
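Sensitivity in such SPR schemes is conventionally reported as the slope of the resonance-wavelength shift versus analyte concentration over a calibration series. A minimal least-squares sketch; the calibration points below are invented for illustration and are not the paper's data:

```python
def spectral_sensitivity(concentrations, peak_shifts_nm):
    """Least-squares slope of resonance-wavelength shift vs. concentration,
    i.e. sensor sensitivity in nm per concentration unit."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(peak_shifts_nm) / n
    sxx = sum((x - mx) ** 2 for x in concentrations)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(concentrations, peak_shifts_nm))
    return sxy / sxx

# Hypothetical calibration: concentration (arbitrary units) vs. shift (nm).
s = spectral_sensitivity([0, 1, 2, 3], [0.0, 2.0, 4.0, 6.0])
```

Real calibration curves for SPR sensors are often nonlinear at higher concentrations, so the linear slope is typically quoted only over the sensor's stated operating range.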
The FOSPR approach, in cooperation with nanomaterials, makes the present work a beneficial approach for fenitrothion detection by imparting numerous useful advantages such as sensitivity, selectivity, compactness, and cost-effectiveness.
Keywords: surface plasmon resonance, optical fiber, sensor, fenitrothion
Procedia PDF Downloads 208
1012 The Impact of the Fitness Center Ownership Structure on the Service Quality Perception in the Fitness in Serbia
Authors: Dragan Zivotic, Mirjana Ilic, Aleksandra Perovic, Predrag Gavrilovic
Abstract:
As with the provision of other services, service quality perception is one of the key factors to which the modern manager must pay attention. Countries in which state regulation is in transition also have specific features in providing fitness services. Identifying the dimensions in which service quality perception differs most significantly between types of fitness centers enables managers to profile their offer according to the wishes and expectations of users. The aim of the paper was to compare the perception of service quality in the field of fitness in Serbia between three categories of fitness centers: privately owned centers, publicly owned centers, and public-private partnership centers. For this research, 350 respondents of both genders (174 men and 176 women) were interviewed, aged between 18 and 68 years, all beneficiaries of fitness services for at least 1 year. An administered questionnaire with 100 items provided information about the 15 basic areas in which they expressed their perception of service quality in the gym. The core sample was composed of 212 service users in private fitness centers, 69 in public fitness centers, and 69 in public-private partnership centers. Sub-samples were equal in the representation of women and men, as well as by age and length of use of fitness services. The obtained results were subjected to univariate analysis with the Kruskal-Wallis non-parametric analysis of variance. Significant differences between the analyzed sub-samples were not found solely in the areas of rapid response and quality outcomes. In the multivariate model, the results were processed by backward stepwise discriminant analysis, which extracted 3 areas that maximize the differences between sub-samples: material and technical basis, secondary facilities, and coaches.
By applying the classification function, 93.87% of private center service users, 62.32% of public center service users, and 85.51% of public-private partnership center service users were correctly classified (86.00% in total). These results allow optimizing the allocation of the necessary resources in profiling the offer of a fitness center in order to adjust it optimally to users' needs and expectations.
Keywords: fitness, quality perception, management, public ownership, private ownership, public-private partnership, discriminant analysis
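The overall rate of 86.00% is consistent with a size-weighted average of the three per-group classification rates; a quick arithmetic check:

```python
def overall_classification_rate(rates, group_sizes):
    """Overall correct-classification rate as the size-weighted
    average of per-group rates."""
    correct = sum(r * n for r, n in zip(rates, group_sizes))
    return correct / sum(group_sizes)

# Per-group rates and sub-sample sizes as reported in the abstract:
# private (n=212), public (n=69), public-private partnership (n=69).
overall = overall_classification_rate([0.9387, 0.6232, 0.8551], [212, 69, 69])
```

The weighted average comes out to about 0.8600, matching the reported total of 86.00%.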
Procedia PDF Downloads 293
1011 Disaster Management Supported by Unmanned Aerial Systems
Authors: Agoston Restas
Abstract:
Introduction: This paper describes many initiatives and practical examples of the recent use of Unmanned Aerial Systems (UAS) to support disaster management. Since the operation of manned aircraft at disasters is usually expensive and often impossible, managers frequently have to forgo aerial support; UAS can be an alternative and cost-effective solution for supporting disaster management. Methods: This article uses a thematic division of UAS applications based on two key elements: the time flow of managing disasters, and its tactical requirements. Logically, UAS can be used in pre-disaster activity, activity immediately after the occurrence of a disaster, and activity after the primary disaster elimination. The paper addresses different disasters, such as dangerous material releases, floods, earthquakes, forest fires, and human-induced disasters. The research used function analysis, practical experiments, mathematical formulas, economic analysis, and expert estimation. The author gathered international examples and also drew on his own experience in this field. Results and discussion: An earthquake is a rapidly escalating disaster where, many times, there is no other way to achieve a rapid damage assessment than aerial reconnaissance. For special rescue teams, UAS application can greatly help in rapidly locating places where enough space remained for victims to survive. Floods are typical slow-onset disasters; nevertheless, managing floods is a very complex and difficult task that requires continuous monitoring of dykes and of flooded and threatened areas. UAS can greatly help managers keep an area under observation. Forest fires are disasters where the tactical application of UAS is already well developed: it can be used for fire detection, intervention monitoring, and post-fire monitoring.
In the case of a nuclear accident or hazardous material leakage, UAS is also a very effective, and sometimes the only, tool for supporting disaster management. The paper also shows some efforts to use UAS to avoid human-induced disasters in low-income countries as part of health cooperation.
Keywords: disaster management, floods, forest fires, Unmanned Aerial Systems
Procedia PDF Downloads 237
1010 Continuous-Time Convertible Lease Pricing and Firm Value
Authors: Ons Triki, Fathi Abid
Abstract:
Along with the increase in the use of leasing contracts in corporate finance, multiple studies aim to model the credit risk of the lease in order to cover the losses of the lessor of the asset if the lessee goes bankrupt. In the current research paper, a convertible lease contract is elaborated in a continuous-time stochastic universe, aiming to ensure the financial stability of the firm and quickly recover the losses of the counterparties to the lease in case of default. This work examines the term structure of lease rates, taking into account credit default risk and the capital structure of the firm. The interaction between the lessee's capital structure and the equilibrium lease rate was assessed by applying the competitive lease market argument developed by Grenadier (1996) and the endogenous structural default model set forward by Leland and Toft (1996). The cumulative probability of default was calculated by referring to Leland and Toft (1996) and Yildirim and Huan (2006). Additionally, the link between lessee credit risk and the lease rate was addressed so as to explore the impact of convertible lease financing on the term structure of the lease rate, the optimal leverage ratio, the cumulative default probability, and the optimal firm value by applying an endogenous conversion threshold. The numerical analysis suggests that the duration structure of lease rates increases with the degree of the market price of risk. The maximal value of the firm decreases under the effect of the optimal leverage ratio. The results indicate that the cumulative probability of default increases with the maturity of the lease contract if the volatility of the asset service flows is significant.
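In structural models of the Leland-Toft type, the cumulative default probability is a first-passage probability of firm value to a default barrier. A sketch for the simplest case, geometric Brownian motion hitting a constant barrier, using the standard first-passage formula; the parameter values are invented, and the paper's endogenous barrier and conversion feature are not modeled here:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cumulative_default_prob(v0, vb, mu, sigma, t):
    """Probability that firm value (GBM with drift mu, volatility sigma,
    start v0) hits the default barrier vb < v0 by time t."""
    nu = mu - 0.5 * sigma ** 2
    b = math.log(vb / v0)                 # negative when vb < v0
    st = sigma * math.sqrt(t)
    return (norm_cdf((b - nu * t) / st)
            + (vb / v0) ** (2.0 * nu / sigma ** 2)
            * norm_cdf((b + nu * t) / st))

# Hypothetical parameters: the default probability accumulates with maturity.
p1 = cumulative_default_prob(v0=100.0, vb=60.0, mu=0.02, sigma=0.3, t=1.0)
p5 = cumulative_default_prob(v0=100.0, vb=60.0, mu=0.02, sigma=0.3, t=5.0)
```

This reproduces the qualitative finding above: for significant asset volatility, the cumulative default probability grows with contract maturity.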
Introducing the convertible lease contract will increase the optimal value of the firm as a function of asset volatility for a high initial service flow level and a conversion ratio close to 1.
Keywords: convertible lease contract, lease rate, credit risk, capital structure, default probability
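The first-passage machinery behind these cumulative default probabilities can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's full Leland–Toft calibration: it assumes a geometric Brownian motion for the asset value and an exogenously given default barrier (Leland and Toft endogenize it), with all parameter values hypothetical.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cumulative_default_prob(V0, Vb, mu, sigma, t):
    """First-passage probability that asset value V, following a geometric
    Brownian motion with drift mu and volatility sigma, hits the default
    barrier Vb (< V0) by time t."""
    if t <= 0:
        return 0.0
    m = mu - 0.5 * sigma ** 2          # drift of log(V)
    b = math.log(Vb / V0)              # log-distance to the barrier (negative)
    s = sigma * math.sqrt(t)
    return (norm_cdf((b - m * t) / s)
            + math.exp(2.0 * m * b / sigma ** 2) * norm_cdf((b + m * t) / s))

# As the abstract notes, the default probability rises with lease maturity.
probs = [cumulative_default_prob(100.0, 60.0, 0.02, 0.3, t) for t in (1, 5, 10)]
```

With these (assumed) parameters the probability climbs monotonically in the contract maturity, mirroring the paper's qualitative finding for volatile service flows.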
Procedia PDF Downloads 98
1009 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code are now available, making it possible to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we retain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require longer execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
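As a rough illustration of the function-level pipeline (tokenize source code, vectorize, classify), the sketch below trains a tiny bag-of-words logistic regression on a handful of invented C snippets. It is a stdlib stand-in for the paper's actual setup (GloVe/fastText embeddings fed into LSTM/BERT-class models); the sample snippets, token vocabulary, and hyperparameters are all assumptions.

```python
import math
import re

# Tiny invented corpus: C call sites labeled 1 (overflow-prone) or 0 (bounded).
SAMPLES = [
    ('strcpy(buf, input);', 1),
    ('gets(line);', 1),
    ('sprintf(out, "%s", name);', 1),
    ('memcpy(dst, src, n);', 1),
    ('strncpy(buf, input, sizeof(buf) - 1);', 0),
    ('fgets(line, sizeof(line), stdin);', 0),
    ('snprintf(out, sizeof(out), "%s", name);', 0),
    ('memmove(dst, src, cap);', 0),
]

def tokens(code):
    # Minimal "intermediate representation": identifier tokens only.
    return re.findall(r"[A-Za-z_]\w*", code)

VOCAB = sorted({t for code, _ in SAMPLES for t in tokens(code)})

def vectorize(code):
    ts = tokens(code)
    return [ts.count(v) for v in VOCAB]   # bag-of-words counts

def train(samples, epochs=300, lr=0.5):
    # Plain logistic regression fitted by stochastic gradient descent.
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for code, y in samples:
            x = vectorize(code)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                     # gradient of the log-loss w.r.t. the logit
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, code):
    z = b + sum(wi * xi for wi, xi in zip(w, vectorize(code)))
    return 1.0 / (1.0 + math.exp(-z))

w, b = train(SAMPLES)
```

Swapping the count vectors for pretrained embeddings and the logistic unit for a recurrent or transformer model recovers the shape of the pipeline the abstract describes.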
Procedia PDF Downloads 89
1008 The Comparison of the Effects of Adipose-Derived Mesenchymal Stem Cells Delivery by Systemic and Intra-Tracheal Injection on Elastase-Induced Emphysema Model
Authors: Maryam Radan, Fereshteh Nejad Dehbashi, Vahid Bayati, Mahin Dianat, Seyyed Ali Mard, Zahra Mansouri
Abstract:
Pulmonary emphysema is a pathological respiratory condition identified by alveolar destruction, which leads to limitation of airflow and diminished lung function. A substantial body of evidence suggests that mesenchymal stem cells (MSCs) have the ability to induce tissue repair primarily through a paracrine effect. In this study, we aimed to compare the efficacy of intratracheal adipose-derived mesenchymal stem cell (ADSC) therapy with that of intravenous (systemic) therapy. Fifty adult male Sprague–Dawley rats weighing between 180 and 200 g were used in this experiment. The animals were randomized to control groups (intratracheal or intravenous vehicle), an elastase group (intratracheal administration of porcine pancreatic elastase; 25 U/kg on day 0 and day 10), an elastase + intratracheal ADSC therapy group (1x10⁷ cells, on day 28), and an elastase + systemic ADSC therapy group (1x10⁷ cells, on day 28). Rats not subjected to any treatment were considered controls. All rats were sacrificed 3 weeks later. Morphometric findings in lung tissues (mean linear intercept) confirmed the establishment of the emphysema model via alveolar disruption. Conversely, ADSC administration partially restored alveolar architecture. These results were associated with improved arterial oxygenation, reduced lung edema, and decreased lung inflammation, with significantly greater effects via the intratracheal route. These results documented that the efficacy of intratracheal ADSCs was comparable with intravenous ADSC therapy. Accordingly, the obtained data suggest that intratracheal delivery of ADSCs would enhance lung repair in pulmonary emphysema. Moreover, this method provides benefits over systemic administration, such as a reduction in cell number and a low risk of engrafting other organs.
Keywords: mesenchymal stem cell, emphysema, intratracheal, systemic
Procedia PDF Downloads 211
1007 Field Trial of Resin-Based Composite Materials for the Treatment of Surface Collapses Associated with Former Shallow Coal Mining
Authors: Philip T. Broughton, Mark P. Bettney, Isla L. Smail
Abstract:
Effective treatment of ground instability is essential when managing the impacts associated with historic mining. A field trial was undertaken by the Coal Authority to investigate the geotechnical performance and potential use of composite materials comprising resin and fill or stone to safely treat surface collapses, such as crown-holes, associated with shallow mining. Test pits were loosely filled with various granular fill materials. The fill material was injected with commercially available silicate and polyurethane resin foam products. In situ and laboratory testing was undertaken to assess the geotechnical properties of the resultant composite materials. The test pits were subsequently excavated to assess resin permeation. Drilling and resin injection were easiest through clean limestone fill materials. Recycled building waste fill material proved difficult to inject with resin; this material is thus considered unsuitable for use in resin composites. Incomplete resin permeation in several of the test pits created irregular ‘blocks’ of composite. Injected resin foams significantly improve the stiffness and resistance (strength) of the un-compacted fill material. The stiffness of the treated fill material appears to be a function of the stone particle size, its associated compaction characteristics (under loose tipping) and the proportion of resin foam matrix. The type of fill material is more critical than the type of resin to the geotechnical properties of the composite materials. Resin composites can effectively support typical design imposed loads. Compared to other traditional treatment options, such as cement grouting, the use of resin composites is potentially less disruptive, particularly for sites with limited access, and thus likely to achieve significant reinstatement cost savings.
The use of resin composites is considered a suitable option for the future treatment of shallow mining collapses.
Keywords: composite material, ground improvement, mining legacy, resin
Procedia PDF Downloads 355
1006 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction
Authors: Talal Alsulaiman, Khaldoun Khashanah
Abstract:
In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. ASMs aim to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the technique most commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-averse assumption in the construction of agents' attributes. Also, the influence of social networks on the development of agents' interactions is addressed. Network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized. These include approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. In addition, the most common statistical properties (the stylized facts) of stock markets that are used for the calibration and validation of ASMs are discussed. Besides, we review the major related previous studies and categorize the approaches they utilize. Finally, research directions and potential research questions are discussed.
The research directions of ASMs may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.
Keywords: artificial stock markets, market dynamics, bounded rationality, agent-based simulation, learning, interaction, social networks
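A minimal agent-based ASM of the kind surveyed here can be sketched as follows: heterogeneous fundamentalist and chartist agents submit demands, and the price moves with aggregate excess demand. The agent styles, reaction strengths, and price-impact constant are illustrative assumptions, not drawn from any specific study in the survey.

```python
import random

class Agent:
    """A boundedly rational trader: fundamentalists revert toward a perceived
    fundamental value, chartists extrapolate the most recent price move."""
    def __init__(self, style, strength):
        self.style = style          # "fundamentalist" or "chartist"
        self.strength = strength    # heterogeneous reaction intensity

    def demand(self, price, last_price, fundamental):
        if self.style == "fundamentalist":
            return self.strength * (fundamental - price)
        return self.strength * (price - last_price)

def simulate(n_agents=100, steps=200, fundamental=100.0, impact=0.01, seed=42):
    rng = random.Random(seed)
    agents = [Agent(rng.choice(["fundamentalist", "chartist"]),
                    rng.uniform(0.1, 1.0)) for _ in range(n_agents)]
    prices = [fundamental, fundamental + 1.0]   # seed with a small shock
    for _ in range(steps):
        excess = sum(a.demand(prices[-1], prices[-2], fundamental)
                     for a in agents)
        # Price impact of aggregate excess demand, plus exogenous noise.
        prices.append(prices[-1] + impact * excess + rng.gauss(0.0, 0.2))
    return prices

prices = simulate()
```

Adding a social network over the agents, or letting strategies adapt via the learning methods listed above (GA, GP, reinforcement learning), turns this skeleton into the richer ASMs the survey covers.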
Procedia PDF Downloads 354
1005 Leisure, Domestic or Professional Activities so as to Prevent Cognitive Decline: Results FreLE Longitudinal Study
Authors: Caroline Dupre, David Hupin, Christ Goumou, Francois Belan, Frederic Roche, Thomas Celarier, Bienvenu Bongue
Abstract:
Background: Previous cohorts have been notably criticized for not studying the different types of physical activity and not investigating household activities. The objective of this work was to analyse the relationship between physical activity and cognitive decline in older people living in the community. The impact of the type of physical activity on the results was assessed. Methods: The study used data from the longitudinal and observational study FrèLE (FRagility: Longitudinal Study of Expressions). The collected data included socio-demographic variables, lifestyle, and health status (frailty, comorbidities, cognitive status, depression). Cognitive decline was assessed using the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). Physical activity was assessed with the Physical Activity Scale for the Elderly (PASE). This tool is structured in three sections: leisure activity, domestic activity, and professional activity. Logistic regressions and proportional hazards regression models (Cox) were used to estimate the risk of cognitive disorders. Results: At baseline, the prevalence of cognitive disorders was 6.9% according to the MMSE. In total, 1167 participants without cognitive disorders were included in the analysis. The mean age was 77.4 years, and 52.1% of the participants were women. After a two-year follow-up, we found cognitive disorders in 53 participants (4.5%). Physical activity at baseline was lower in older adults for whom cognitive decline was observed after two years of follow-up. Subclass analyses showed that leisure and domestic activities were associated with cognitive decline, but not professional activities. Conclusions: The analysis showed a relationship between cognitive disorders and the type of physical activity. The current study will be completed using the MoCA for mild cognitive impairment.
These findings, compared with those of other ongoing studies, will contribute to the debate on the beneficial effects of physical activity on cognition.
Keywords: aging, cognitive function, physical activity, mixed models
Procedia PDF Downloads 126
1004 Early and Mid-Term Results of Anesthetic Management of Minimal Invasive Coronary Artery Bypass Grafting Using One Lung Ventilation
Authors: Devendra Gupta, S. P. Ambesh, P. K Singh
Abstract:
Introduction: Minimally invasive coronary artery bypass grafting (MICABG) is a less invasive method of performing surgical revascularization. Minimally invasive direct coronary artery bypass (MIDCAB) presents many anesthetic challenges, including one lung ventilation (OLV), managing myocardial ischemia, and pain. We present early and mid-term results of the use of this technique with OLV. Method: We enrolled 62 patients, operated on between 2008 and 2012, for analysis. Patients were anesthetized and a left endobronchial tube was placed. During the procedure the left lung was isolated and one lung ventilation was maintained through the right lung. The operation was performed utilizing an off-pump technique of coronary artery bypass grafting through a minimally invasive incision. A left internal mammary artery graft was done for single vessel disease, and the radial artery was utilized for other grafts if required. Postoperative ventilation was done with a single lumen endotracheal tube. Median follow-up is 2.5 years (6 months to 4 years). Results: Median age was 58.5 years (41-77) and all patients were male. Single vessel disease was present in 36, double vessel in 24, and triple vessel disease in 2 patients. All the patients had normal left ventricular size and function. In 2 cases difficulties were encountered in placement of the endobronchial tube. In 1 case the cuff of the endobronchial tube ruptured during intubation. High airway pressure developed on OLV in 1 case, and surgery was accomplished with two lung anesthesia with low tidal volume. Mean postoperative ventilation time was 14.4 hours (11-22). There was no perioperative or 30-day mortality. Conversion to median sternotomy to complete the operation was done in 3.23% (2 of 62 patients). One patient had an acute myocardial infarction postoperatively, and there were no deaths during follow-up.
Conclusion: MICABG is a safe and effective method of revascularization with OLV in low risk candidates for coronary artery bypass grafting.
Keywords: MIDCABG, one lung ventilation, coronary artery bypass grafting, endobronchial tube
Procedia PDF Downloads 425
1003 Psychological Stress As A Catalyst For Multiple Sclerosis Progression: Clarifying Pathways From Neural Activation to Immune Dysregulation
Authors: Noah Emil Glisik
Abstract:
Multiple sclerosis (MS) is a chronic, immune-mediated disorder characterized by neurodegenerative processes and a highly variable disease course. Recent research highlights a complex interplay between psychological stress and MS progression, with both acute and chronic stressors linked to heightened inflammatory activity, increased relapse risk, and accelerated disability. This review synthesizes findings from systematic analyses, cohort studies, and neuroimaging investigations to examine how stress contributes to disease dynamics in MS. Evidence suggests that psychological stress influences MS progression through neural and physiological pathways, including dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis and heightened activity in specific brain regions, such as the insular cortex. Notably, functional MRI studies indicate that stress-induced neural activity may predict future atrophy in gray matter regions implicated in motor and cognitive function, thus supporting a neurobiological link between stress and neurodegeneration in MS. Longitudinal studies further associate chronic stress with reduced quality of life and higher relapse frequency, emphasizing the need for a multifaceted therapeutic approach that addresses both the physical and psychological dimensions of MS. Evidence from intervention studies suggests that stress management strategies, such as cognitive-behavioral therapy and mindfulness-based programs, may reduce relapse rates and mitigate lesion formation in MS patients. These findings underscore the importance of integrating stress-reducing interventions into standard MS care, with potential to improve disease outcomes and patient well-being. 
Further research is essential to clarify the causal pathways and develop targeted interventions that could modify the stress response in MS, offering an avenue to address disease progression and enhance quality of life.
Keywords: multiple sclerosis, psychological stress, disease progression, neuroimaging, stress management
Procedia PDF Downloads 10
1002 Modelling Social Influence and Cultural Variation in Global Low-Carbon Vehicle Transitions
Authors: Hazel Pettifor, Charlie Wilson, David Mccollum, Oreane Edelenbosch
Abstract:
Vehicle purchase is a technology adoption decision that will strongly influence future energy and emission outcomes. Global integrated assessment models (IAMs) provide valuable insights into the medium- and long-term effects of socio-economic development, technological change, and climate policy. In this paper we present a unique and transparent approach for improving the behavioural representation of these models by incorporating social influence effects to more accurately represent consumer choice. This work draws together strong conceptual thinking and robust empirical evidence to introduce heterogeneous and interconnected consumers who vary in their aversion to new technologies. Focussing on vehicle choice, we conduct novel empirical research to parameterise consumer risk aversion and how it is shaped by social and cultural influences. We find robust evidence for social influence effects, and variation between countries as a function of cultural differences. We then formulate an approach to modelling social influence which is implementable in both simulation- and optimisation-type models. We use two global integrated assessment models (IMAGE and MESSAGE) to analyse four scenarios that introduce social influence and cultural differences between regions. These scenarios allow us to explore the interactions between consumer preferences and social influence. We find that incorporating social influence effects into global models accelerates the early deployment of electric vehicles and stimulates more widespread deployment across adopter groups. Incorporating cultural variation leads to significant differences in deployment between culturally divergent regions such as the USA and China. Our analysis significantly extends the ability of global integrated assessment models to provide policy-relevant analysis grounded in real-world processes.
Keywords: behavioural realism, electric vehicles, social influence, vehicle choice
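The social influence mechanism can be illustrated with a toy threshold-adoption model: each consumer has a heterogeneous aversion threshold and adopts once the technology's intrinsic appeal plus a weighted share of existing adopters exceeds it. This is a sketch of the general idea, not the IMAGE/MESSAGE implementation; the thresholds, intrinsic appeal, and social weight are assumed values.

```python
import random

def simulate_adoption(n=500, steps=30, social_weight=0.6,
                      intrinsic=0.3, seed_frac=0.02, rng_seed=1):
    """Adoption share of a new vehicle technology over time.  Each consumer
    has a heterogeneous aversion threshold; they adopt once intrinsic appeal
    plus social influence (weighted adoption share) exceeds it."""
    rng = random.Random(rng_seed)
    thresholds = [rng.uniform(0.0, 1.0) for _ in range(n)]
    adopted = [rng.random() < seed_frac for _ in range(n)]  # early adopters
    shares = []
    for _ in range(steps):
        share = sum(adopted) / n
        shares.append(share)
        for i in range(n):
            if not adopted[i] and intrinsic + social_weight * share > thresholds[i]:
                adopted[i] = True
    return shares

# Same population (same seed), with and without the social influence effect.
with_social = simulate_adoption(social_weight=0.6)
without_social = simulate_adoption(social_weight=0.0)
```

Running both variants on the same population shows the qualitative result reported above: switching social influence on accelerates and widens deployment relative to the purely intrinsic case.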
Procedia PDF Downloads 187
1001 The Importance of Changing the Traditional Mode of Higher Education in Bangladesh: Creating Huge Job Opportunities for Home and Abroad
Authors: M. M. Shahidul Hassan, Omiya Hassan
Abstract:
Bangladesh has set its goal to reach upper middle-income country status by 2024. To attain this status, the country must satisfy the World Bank requirement of achieving a minimum Gross National Income (GNI). The number of youth job seekers in the country is increasing. University graduates are looking for decent jobs. So, the vital issue for this country is to understand how GNI and jobs can be increased. The objective of this paper is to address these issues and find ways to create more job opportunities for youths at home and abroad, which will increase the country’s GNI. The paper studies the proportion of different goods Bangladesh exports, as well as the percentage of employment in different sectors. The data used here for the purpose of analysis have been collected from the available literature. These data are then plotted and analyzed. Through these studies, it is concluded that growth in sectors like agriculture, ready-made garments (RMG), jute industries, and fisheries is declining and that the business community is not interested in setting up capital-intensive industries. Under this situation, the country needs to explore other business opportunities for a higher economic growth rate. Knowledge can substitute for physical resources. Since the country has a large youth population, higher education will play a key role in economic development. It now needs graduates with higher-order skills and innovative quality. Such dispositions demand changes in a university’s curriculum, teaching, and assessment methods, which will enable young generations to function as active learners and creators. By bringing these changes to higher education, a knowledge-based society can be created.
The application of such knowledge and creativity will then become the commodity of Bangladesh, which will help it reach its goal as an upper middle-income country.
Keywords: Bangladesh, economic sectors, economic growth, higher education, knowledge-based economy, massification of higher education, teaching and learning, universities’ role in society
Procedia PDF Downloads 168
1000 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
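The catenary-fitting step can be sketched compactly. Assuming the lowest point of a clustered conductor is already known, the shape parameter a of y = y0 + a(cosh((x - x0)/a) - 1) can be recovered from the points by a one-dimensional least-squares search; the golden-section search and synthetic sample points below are illustrative stand-ins for the paper's iterative fitting on real LiDAR clusters.

```python
import math

def catenary(x, a, x0=0.0, y0=0.0):
    # Conductor sag curve with lowest point at (x0, y0) and shape parameter a.
    return y0 + a * (math.cosh((x - x0) / a) - 1.0)

def fit_catenary_a(points, lo=1.0, hi=500.0, iters=60):
    """Least-squares fit of the shape parameter `a`, assuming the lowest
    point (x0, y0) is known.  Golden-section search on the (unimodal)
    sum of squared errors over the candidate interval [lo, hi]."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    def sse(a):
        return sum((catenary(x, a) - y) ** 2 for x, y in points)
    for _ in range(iters):
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if sse(m1) < sse(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

# Synthetic "LiDAR" samples from a conductor with shape parameter a = 120 m.
pts = [(x, catenary(x, 120.0)) for x in range(-40, 41, 5)]
a_hat = fit_catenary_a(pts)
```

On real clusters the lowest point and span orientation must themselves be estimated, and a full 3D fit would solve for x0, y0, and the span plane jointly; this sketch isolates only the 1D shape-parameter search.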
Procedia PDF Downloads 99
999 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory
Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short
Abstract:
Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactors, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced lattice atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature and associated forces surrounding bond critical points disfavors localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. 
With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high-radiation environments.
Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, Rutherford backscatter channeling
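The quantity at the heart of the argument, the curvature (Laplacian) of the electron density at a bond critical point, can be illustrated numerically. The sketch below uses a toy two-Gaussian "density" in place of a hybrid-DFT electron density and evaluates the central-difference Laplacian at the bond midpoint, where the density curves up along the bond axis and down perpendicular to it.

```python
import math

def density(x, y, z):
    """Toy 'electron density': two unit Gaussians centred at (+/-1, 0, 0),
    standing in for a real hybrid-DFT density of a homonuclear bond."""
    return (math.exp(-((x + 1.0) ** 2 + y ** 2 + z ** 2))
            + math.exp(-((x - 1.0) ** 2 + y ** 2 + z ** 2)))

def laplacian(f, p, h=1e-3):
    """Central-difference Laplacian (sum of the three principal curvatures)
    of a scalar field f at point p."""
    total = 0.0
    for i in range(3):
        plus, minus = list(p), list(p)
        plus[i] += h
        minus[i] -= h
        total += (f(*plus) - 2.0 * f(*p) + f(*minus)) / h ** 2
    return total

# At the bond critical point (the midpoint), the net Laplacian is negative:
# analytically -4/e for this toy density.
lap_bcp = laplacian(density, (0.0, 0.0, 0.0))
```

In a real analysis these curvatures are evaluated from the DFT density at the located bond critical points; the toy density only shows the mechanics of the evaluation.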
Procedia PDF Downloads 174
998 An Integrated Multisensor/Modeling Approach Addressing Climate Related Extreme Events
Authors: H. M. El-Askary, S. A. Abd El-Mawla, M. Allali, M. M. El-Hattab, M. El-Raey, A. M. Farahat, M. Kafatos, S. Nickovic, S. K. Park, A. K. Prasad, C. Rakovski, W. Sprigg, D. Struppa, A. Vukovic
Abstract:
A clear distinction between weather and climate is a necessity because, while they are closely related, there are still important differences. Climate change is identified when we compute the statistics of the observed changes in weather over space and time. In this work we show how the changing climate contributes to the frequency, magnitude, and extent of different extreme events, using a multi-sensor approach with some synergistic modeling activities. We explore satellite observations of dust over North Africa, the Gulf Region, and the Indo-Gangetic basin, as well as dust versus anthropogenic pollution events over the Delta region in Egypt and over Seoul, through remote sensing, and examine the effect of the dust and haze on aerosol optical properties. The dust impact on the retreat of the glaciers in the Himalayas is also presented. In this study we also focus on the identification and monitoring of a massive dust plume that blew off the western coast of Africa towards the Atlantic on October 8th, 2012, right before the development of Hurricane Sandy. There is evidence that dust aerosols played a non-trivial role in the cyclogenesis process of Sandy. Moreover, a special dust event, "An American Haboob" in Arizona, is discussed, as it was predicted hours in advance because of the great improvements in numerical land–atmosphere modeling, computing power, and remote sensing of dust events. We therefore performed a full numerical simulation of that event using the coupled atmospheric-dust model NMME–DREAM, after generating a mask of the potentially dust-productive regions using land cover and vegetation data obtained from satellites. Climate change also contributes to the deterioration of different marine habitats. In that regard, we also present work dealing with change detection analysis of marine habitats around the city of Hurghada, Red Sea, Egypt.
The motivation for this work came from the fact that coral reefs at Hurghada have undergone significant decline. They are damaged, displaced, polluted, stepped on, and blasted off, in addition to suffering the effects of climate change. One of the most pressing issues affecting reef health is mass coral bleaching, which results from an interaction between human activities and climatic changes. Over another location, namely California, we have observed highly variable amounts of precipitation across many timescales, from the hourly to the climate timescale. Frequently, heavy precipitation occurs, causing damage to property and life (floods, landslides, etc.). These extreme events, this variability, and the lack of good medium- to long-range predictability of precipitation are already a challenge to those who manage wetlands, coastal infrastructure, agriculture, and fresh water supply. Adding to the current challenges for long-range planning is the climate change issue. It is known that La Niña and El Niño affect precipitation patterns, which in turn are entwined with global climate patterns. We have studied the ENSO impact on precipitation variability over different climate divisions in California. On the other hand, the Nile Delta has lately experienced an increase in the underground water table as well as waterlogging, bogging, and soil salinization. Those impacts pose a major threat to the Delta region's heritage and existing communities. There has been an ongoing effort to address those vulnerabilities by looking into many adaptation strategies.
Keywords: remote sensing, modeling, long range transport, dust storms, North Africa, Gulf Region, India, California, climate extremes, sea level rise, coral reefs
Procedia PDF Downloads 488
997 Tobacco Taxation and the Heterogeneity of Smokers' Responses to Price Increases
Authors: Simone Tedeschi, Francesco Crespi, Paolo Liberati, Massimo Paradiso, Antonio Sciala
Abstract:
This paper aims at contributing to the understanding of smokers’ responses to cigarette price increases, with a focus on heterogeneity both across individuals and across price levels. To do this, a stated-preference quasi-experimental design grounded in a random utility framework is proposed to evaluate the effect on smokers’ utility of the price level and its variation, along with social conditioning and health impact perception. The analysis is based on individual-level data drawn from a unique survey gathering very detailed information on Italian smokers’ habits. In particular, qualitative information on the individual reactions triggered by changes in prices of different magnitude and composition is exploited. The main findings stemming from the analysis are the following: the average price elasticity of cigarette consumption is comparable with previous estimates for advanced economies (-0.32). However, the decomposition of this result across five latent classes of smokers reveals extreme heterogeneity in terms of price responsiveness, implying a potential price elasticity that ranges from 0.05 to almost 1. Such heterogeneity is in part explained by observable characteristics such as age, income, gender, and education, as well as (current and lagged) smoking intensity. Moreover, price responsiveness is far from independent of the size of the prospected price increase. Finally, by comparing even and uneven price variations, it is shown that uniform across-brand price increases are able to limit the scope for product substitution and downgrading. Estimated price-response heterogeneity has significant implications for tax policy. Among them, first, it provides evidence and a rationale for why the aggregate price elasticity is likely to follow a strictly increasing pattern as a function of the experienced price variation. This information is crucial for forecasting the effect of a given tax-driven price change on tax revenue.
Second, it provides some guidance on how to design excise tax reforms to balance public health and revenue goals.
Keywords: smoking behaviour, preference heterogeneity, price responsiveness, cigarette taxation, random utility models
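The revenue-forecasting implication can be illustrated with a back-of-the-envelope calculation: once class-level elasticities are known, the aggregate consumption response to a price change is their share-weighted sum. The class shares and elasticities below are hypothetical, chosen only so that the weighted average lands near the reported aggregate estimate of about -0.3.

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) price elasticity of demand."""
    dq = (q1 - q0) / ((q1 + q0) / 2.0)
    dp = (p1 - p0) / ((p1 + p0) / 2.0)
    return dq / dp

# Hypothetical latent classes: (population share, own price elasticity),
# spanning the near-zero to near-unit range the abstract reports.
CLASSES = [(0.25, -0.05), (0.25, -0.15), (0.20, -0.30),
           (0.20, -0.50), (0.10, -0.95)]

avg_elasticity = sum(share * e for share, e in CLASSES)

def aggregate_consumption_change(price_increase_pct):
    """Share-weighted % change in consumption for a given % price increase,
    assuming each class responds with its own constant elasticity."""
    return sum(share * e * price_increase_pct for share, e in CLASSES)
```

Because the less responsive classes dominate small price changes while the highly responsive classes kick in for large ones, a forecast built only on the aggregate elasticity can misstate the revenue effect of a sizeable tax-driven price change.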
Procedia PDF Downloads 162
996 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction
Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé
Abstract:
One of the main applications of machine learning is the prediction of time series. However, a more accurate prediction requires a better-optimized machine learning model. Several optimization techniques have been developed, but without considering the disposition of the system's input variables. Thus, this work aims to present a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is determined, i.e., twenty-four. Each of the dispositions is used to perform the prediction, with the main criteria being the training and prediction performances. The results obtained from a static architecture and a dynamic architecture of neural networks show that these performances are a function of the disposition of the input variables, in a way that differs between the two architectures. This analysis reveals that it is necessary to take the disposition of the input variables into account when developing a more optimal neural network model. Thus, a new neural network training algorithm is proposed, introducing the search for the optimal input variable disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance on time series.
This proves that the proposed optimization approach can be useful in improving the accuracy of time series prediction based on machine learning.Keywords: input variable disposition, machine learning, optimization, performance, time series prediction
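As a rough illustration of the disposition search described above, the sketch below exhaustively trains a model on all 4! = 24 orderings of four lagged inputs and keeps the best-scoring one. The study's wind data, neural network architectures, and training setup are not reproduced here; a synthetic series and a minimal linear predictor (a stand-in for the paper's networks) are assumed instead.

```python
from itertools import permutations

import numpy as np


def train_and_score(X, y, order, epochs=200, lr=0.1):
    """Train a small linear predictor on the columns of X taken in the
    given order; return the final mean-squared error."""
    # Fixed initial weights for every disposition, so score differences
    # come only from the column ordering seen during training.
    rng = np.random.default_rng(1)
    Xp = X[:, order]
    w = rng.normal(scale=0.1, size=Xp.shape[1])
    b = 0.0
    for _ in range(epochs):
        err = Xp @ w + b - y           # prediction error
        w -= lr * Xp.T @ err / len(y)  # gradient step on weights
        b -= lr * err.mean()           # gradient step on bias
    return float(np.mean((Xp @ w + b - y) ** 2))


# Toy "wind-like" series: predict x_t from the four previous values.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.normal(size=300)
X = np.stack([series[i:i + 4] for i in range(len(series) - 4)])
y = series[4:]

# Exhaustive search over all 4! = 24 input dispositions.
scores = {order: train_and_score(X, y, list(order))
          for order in permutations(range(4))}
best = min(scores, key=scores.get)
print("best disposition:", best, "MSE:", round(scores[best], 4))
```

In the approach the abstract proposes, this outer permutation search would wrap the full back-propagation training of each candidate network rather than a linear fit.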
Procedia PDF Downloads 109
995 Isolation and Characterization of the First Known Inhibitor Cystine Knot Peptide in Sea Anemone: Inhibitory Activity on Acid-Sensing Ion Channels
Authors: Armando A. Rodríguez, Emilio Salceda, Anoland Garateix, André J. Zaharenko, Steve Peigneur, Omar López, Tirso Pons, Michael Richardson, Maylín Díaz, Yasnay Hernández, Ludger Ständker, Jan Tytgat, Enrique Soto
Abstract:
Acid-sensing ion channels (ASICs) are cation (Na+) channels activated by a drop in pH. These proteins belong to the ENaC/degenerin superfamily of sodium channels. ASICs are involved in sensory perception, synaptic plasticity, learning, memory formation, cell migration and proliferation, nociception, and neurodegenerative disorders, among other processes; therefore, molecules that specifically target these channels are of growing pharmacological and biomedical interest. Sea anemones produce a large variety of ion channel peptide toxins; however, those acting on ligand-gated ion channels, such as glutamate-gated and ACh-gated ion channels and ASICs, remain barely explored. The peptide PhcrTx1 is the first compound characterized from the sea anemone Phymanthus crucifer, and it constitutes a novel ASIC inhibitor. This peptide was purified by chromatographic techniques and pharmacologically characterized on acid-sensing ion channels of mammalian neurons using patch-clamp techniques. PhcrTx1 inhibited ASIC currents with an IC50 of 100 nM. Edman degradation yielded a sequence of 32 amino acid residues, with a molecular mass of 3477 Da determined by MALDI-TOF. No similarity to known sea anemone peptides was found in protein databases. Computational analysis of the Cys pattern and secondary structure arrangement suggested that this is structurally an ICK (inhibitor cystine knot)-type peptide, a scaffold previously found in other venomous organisms but not in sea anemones. These results show that PhcrTx1 represents the first member of a new structural group of sea anemone toxins acting on ASICs. This peptide also constitutes a novel template for the development of drugs against pathologies related to ASIC function.
Keywords: animal toxin, inhibitor cystine knot, ion channel, sea anemone
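The reported IC50 of 100 nM comes from a concentration-response relationship. A minimal sketch of how such an estimate can be obtained is shown below, assuming a one-site Hill model and hypothetical data points; the actual PhcrTx1 patch-clamp measurements are not reproduced here.

```python
import numpy as np


def hill(c, ic50, h=1.0):
    """Fraction of ASIC current remaining at inhibitor concentration c (nM)."""
    return 1.0 / (1.0 + (c / ic50) ** h)


# Hypothetical concentration-response points (nM) around a true IC50
# of 100 nM, with a little measurement noise added.
conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
rng = np.random.default_rng(0)
resp = hill(conc, 100.0) + 0.01 * rng.normal(size=conc.size)

# Grid-search the IC50 on a log scale -- a simple stand-in for a
# nonlinear least-squares fit of the Hill equation.
grid = np.logspace(0, 4, 400)
sse = [np.sum((hill(conc, g) - resp) ** 2) for g in grid]
ic50_est = float(grid[int(np.argmin(sse))])
print("estimated IC50 (nM):", round(ic50_est, 1))
```

In practice a nonlinear fit would also estimate the Hill coefficient h; it is fixed at 1 here to keep the sketch short.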
Procedia PDF Downloads 310
994 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions
Authors: Oscar E. Cariceo, Claudia V. Casal
Abstract:
Machine learning offers a set of techniques to support social work interventions and can help practitioners make decisions by predicting new behaviors from data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques comprise a set of generalizable, data-driven algorithms: rules and solutions are derived by examining data, based on the patterns present within a data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses, predict an outcome for a given scenario, and improve with experience. Machine learning can be classified into two general categories depending on the nature of the problem the technique needs to tackle. First, supervised learning involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which predict quantitative variables using a continuous function, and classification problems, which seek to predict discrete qualitative variables. Second, unsupervised learning works with unlabeled data and seeks to uncover structure, such as clusters, within it. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on gender, age, grade, type of school, and self-esteem, and their interactions.
The model predicts with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help promote programs to prevent cyberbullying at schools and improve evidence-based practice.
Keywords: cyberbullying, evidence-based practice, machine learning, social work research
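A minimal sketch of the kind of logistic-regression classifier described above, fitted by gradient descent. The feature names follow the abstract, but the data are synthetic stand-ins; the real Polyvictimization Survey records are not reproduced here, so the resulting accuracy is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-ins for the survey features: gender, age, grade,
# school type, self-esteem score.
X = np.column_stack([
    rng.integers(0, 2, n),    # gender (0/1)
    rng.integers(12, 19, n),  # age (years)
    rng.integers(7, 13, n),   # grade
    rng.integers(0, 3, n),    # school type (categorical code)
    rng.normal(0, 1, n),      # self-esteem (standardized score)
]).astype(float)
Z = (X - X.mean(0)) / X.std(0)  # standardize features

# Simulated labels: here, low self-esteem raises cyberbullying risk.
true_w = np.array([0.4, 0.05, 0.0, 0.1, -0.8])
y = (rng.random(n) < 1 / (1 + np.exp(-Z @ true_w))).astype(float)

# Logistic regression fitted by batch gradient descent.
w = np.zeros(5)
b = 0.0
for _ in range(500):
    pred = 1 / (1 + np.exp(-(Z @ w + b)))
    grad = pred - y
    w -= 0.1 * Z.T @ grad / n
    b -= 0.1 * grad.mean()

pred = 1 / (1 + np.exp(-(Z @ w + b)))
acc = float(np.mean((pred > 0.5) == y))
print("training accuracy:", round(acc, 3))
```

With real survey data, the categorical features (school type, grade) would normally be one-hot encoded rather than treated as numeric codes, and accuracy would be reported on a held-out test set.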
Procedia PDF Downloads 168