Search results for: time domain reflectometry (TDR)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19380

14130 Kinetic Modelling of Drying Process of Jumbo Squid (Dosidicus Gigas) Slices Subjected to an Osmotic Pretreatment under High Pressure

Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Constanza Olivares-Rivera, Fernanda Marin-Monardez

Abstract:

This research presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration (OD) as a pretreatment to hot-air drying of jumbo squid (Dosidicus gigas) cubes. The drying time was reduced to 2 hours at 60°C and 5 hours at 40°C compared to untreated jumbo squid samples. This was due to the osmotic pressure under high-pressure treatment, where increased salt saturation caused greater water loss. Thus, convective drying time was shortened, and effective water diffusion during drying plays an important role in this research. Different working conditions such as pressure (350-550 MPa), pressure holding time (5-10 min), salt (NaCl) concentration (10 and 15%) and drying temperature (40-60°C) were optimized according to the kinetic parameters of each mathematical model. The models fitted to the experimental drying curves were the Weibull, Page and Logarithmic models; the latter fitted the experimental data best. The values of effective water diffusivity varied from 4.82 to 6.59×10⁻⁹ m²/s for the 16 (OD+HHP) curves, whereas the control samples gave values of 1.76 and 5.16×10⁻⁹ m²/s for 40 and 60°C, respectively. On the other hand, quality characteristics such as color, texture, non-enzymatic browning, water holding capacity (WHC) and rehydration capacity (RC) were assessed. The L* (lightness) color parameter increased, whereas the b* (yellowness) and a* (redness) parameters decreased for the OD+HHP-treated samples, indicating that the treatment prevents sample browning. The texture parameters hardness and elasticity decreased, but chewiness increased with treatment, which resulted in a product with higher tenderness and less firmness compared to the untreated sample. Finally, WHC and RC values of most treatments increased owing to less cellular tissue damage compared to untreated samples.
Therefore, knowledge of the drying kinetics as well as the quality characteristics of dried jumbo squid samples subjected to a pretreatment of osmotic dehydration under high hydrostatic pressure is extremely important at the industrial level, so that the drying process can succeed under different pretreatment conditions and/or process variables.
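A thin-layer drying-curve fit of the kind described above can be sketched briefly. The moisture-ratio data below are invented for illustration (not the paper's measurements), and the Logarithmic model form MR(t) = a·exp(-k·t) + c is the standard one from the thin-layer drying literature:

```python
# Fitting the Logarithmic thin-layer drying model MR(t) = a*exp(-k*t) + c
# to a hypothetical moisture-ratio curve (illustrative data only).
import numpy as np
from scipy.optimize import curve_fit

def logarithmic_model(t, a, k, c):
    # Moisture ratio as a function of drying time t (hours)
    return a * np.exp(-k * t) + c

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])          # h
mr = np.array([1.00, 0.72, 0.52, 0.39, 0.30, 0.19, 0.14, 0.12])  # dimensionless

params, _ = curve_fit(logarithmic_model, t, mr, p0=(1.0, 1.0, 0.0))
a, k, c = params
pred = logarithmic_model(t, a, k, c)
rmse = np.sqrt(np.mean((mr - pred) ** 2))
print(f"a={a:.3f}, k={k:.3f} 1/h, c={c:.3f}, RMSE={rmse:.4f}")
```

Each candidate model (Weibull, Page, Logarithmic) would be fitted this way and compared via RMSE or R².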

Keywords: diffusion coefficient, drying process, high pressure, jumbo squid, modelling, quality aspects

Procedia PDF Downloads 245
14129 Time Integrated Measurements of Radon and Thoron Progeny Concentration in Various Dwellings of Bathinda District of Punjab Using Deposition Based Progeny Sensors

Authors: Kirandeep Kaur, Rohit Mehra, Pargin Bangotra

Abstract:

Radon and thoron are pervasive radioactive gases, and so are their progenies. The progenies of radon and thoron are present in the indoor atmosphere as attached and unattached fractions. In the present work, the seasonal variation of the concentrations of attached and total (attached + unattached) nano-sized decay products of indoor radon and thoron has been studied in the dwellings of Bathinda district of Punjab using deposition-based progeny sensors over long integration times, which are independent of air turbulence. The preliminary results of these measurements, obtained with DTPS (Direct Thoron Progeny Sensors) and DRPS (Direct Radon Progeny Sensors), are reported for the first time for Bathinda. It has been observed that there is a strong linear relationship between total EERC (Equilibrium Equivalent Radon Concentration) and EETC (Equilibrium Equivalent Thoron Concentration) in the rainy season (R² = 0.83). Further, a strong linear relation between the total indoor radon concentration and the attached fraction has also been observed for the same rainy season (R² = 0.91). The concentration of the attached radon progeny (EERCatt) is 76.3% of the total Equilibrium Equivalent Radon Concentration (EERC).
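The linear relations quoted above (R² = 0.83, 0.91) come from ordinary least-squares fits. A minimal sketch of that computation, with made-up EERC/EETC pairs standing in for the survey data:

```python
# Quantifying a linear relation between EERC and EETC with least squares
# and R^2. The data pairs below are hypothetical, for demonstration only.
import numpy as np

eerc = np.array([12.0, 18.5, 22.1, 30.4, 35.2, 41.7])  # Bq/m^3 (hypothetical)
eetc = np.array([1.1, 1.6, 2.0, 2.6, 3.1, 3.6])        # Bq/m^3 (hypothetical)

slope, intercept = np.polyfit(eerc, eetc, 1)
pred = slope * eerc + intercept
ss_res = np.sum((eetc - pred) ** 2)
ss_tot = np.sum((eetc - eetc.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"EETC = {slope:.3f}*EERC + {intercept:.3f}, R^2 = {r_squared:.2f}")
```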

Keywords: radon, thoron, progeny, DTPS/DRPS, EERC, EETC, seasonal variation

Procedia PDF Downloads 417
14128 Life Table and Functional Response of Scolothrips takahashii (Thysanoptera: Thripidae) on Tetranychus urticae (Acari:Tetranychidae)

Authors: Kuang-Chi Pan, Shu-Jen Tuan

Abstract:

Scolothrips takahashii Priesner (Thysanoptera: Thripidae) is a common predatory thrips that feeds on spider mites; it is considered an important natural enemy and a potential biological control agent against spider mites. In order to evaluate the efficacy of S. takahashii against tetranychid mites, life table and functional response studies were conducted at 25±1°C with Tetranychus urticae as prey. The intrinsic rate of increase (r), finite rate of increase (λ), net reproduction rate (R₀), and mean generation time (T) were 0.1674 d⁻¹, 1.1822 d⁻¹, 62.26 offspring/individual, and 24.68 d, respectively. The net consumption rate (C₀) was 846.15; the mean daily consumption rate was 51.92 eggs for females and 19.28 eggs for males. S. takahashii exhibited a type III functional response when offered T. urticae deutonymphs. Based on the random predator equation, the estimated maximum attack rate (a) and handling time (Th) were 0.1376 h⁻¹ and 0.7883 h. In addition, a life table experiment was conducted to evaluate the offspring sex allocation and population dynamics of Tetranychus ludeni Zacher under group-rearing conditions with different sex ratios. All bisexual groups produced offspring with similar sex allocation patterns, which began female-biased, shifted during the middle of the oviposition period, and became male-biased by the end of the oviposition period.
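The random predator (Rogers) equation mentioned above is implicit in the number of prey attacked, Na = N0·(1 − exp(−a·(T − Th·Na))), so it must be solved numerically. The sketch below uses the reported constants (a = 0.1376 h⁻¹, Th = 0.7883 h) in the constant-attack-rate form; note this is a simplification, since in a true type III response the attack rate itself varies with prey density. The prey densities and foraging time T are assumed for illustration:

```python
# Solving the Rogers random predator equation
# Na = N0*(1 - exp(-a*(T - Th*Na))) for Na by bisection.
import numpy as np

def prey_eaten(n0, a=0.1376, th=0.7883, total_time=24.0):
    # f changes sign between 0 and min(N0, T/Th), bracketing the root.
    f = lambda na: na - n0 * (1.0 - np.exp(-a * (total_time - th * na)))
    lo, hi = 0.0, min(n0, total_time / th)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n0 in (5, 10, 20, 40):
    print(n0, round(prey_eaten(n0), 2))
```

Bisection is used rather than naive fixed-point iteration, which diverges here at high prey densities.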

Keywords: Scolothrips takahashii, Tetranychus urticae, Tetranychus ludeni, two-sex life table, functional response, sex allocation

Procedia PDF Downloads 90
14127 A Spatial Approach to Model Mortality Rates

Authors: Yin-Yee Leong, Jack C. Yue, Hsin-Chung Wang

Abstract:

Human longevity has been experiencing its largest increase since the end of World War II, and modeling mortality rates is therefore the focus of many studies. Among all mortality models, the Lee–Carter model is the most popular approach, since it is fairly easy to use and has good accuracy in predicting mortality rates (e.g., for Japan and the USA). However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect or another period effect. In this study, we propose a spatial modification and use clusters to explain why the age parameters of the Lee–Carter model are not constant. In spatial analysis, clusters are areas with unusually high or low mortality rates relative to their neighbors, where the “location” of a mortality rate is measured by age and time, that is, a 2-dimensional coordinate. We use a popular cluster detection method, spatial scan statistics, a local statistical test based on the likelihood ratio test, to evaluate where there are locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source of the age parameters not being constant. Next, we show that adding the cluster effect can solve the non-constant problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach has better fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
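The baseline Lee–Carter fit that the abstract builds on decomposes the log mortality surface as ln m(x,t) = aₓ + bₓ·kₜ, with aₓ the age pattern, bₓ the age sensitivities (normalized to sum to 1), and kₜ the time index, estimated via SVD. A minimal sketch on a synthetic mortality surface (not the paper's data):

```python
# Lee-Carter via SVD: ln m(x,t) = a_x + b_x * k_t, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
ages, years = 10, 30
true_a = np.linspace(-6.0, -2.0, ages)      # log mortality rises with age
true_b = np.full(ages, 1.0 / ages)
true_k = np.linspace(5.0, -5.0, years)      # mortality improves over time
log_m = true_a[:, None] + true_b[:, None] * true_k[None, :]
log_m += rng.normal(scale=0.01, size=log_m.shape)  # observation noise

a_x = log_m.mean(axis=1)                    # age pattern = row means
centered = log_m - a_x[:, None]
u, s, vt = np.linalg.svd(centered, full_matrices=False)
b_x = u[:, 0] / u[:, 0].sum()               # normalise so sum(b_x) = 1
k_t = s[0] * vt[0] * u[:, 0].sum()          # rescaled time index
fitted = a_x[:, None] + np.outer(b_x, k_t)
mape = np.mean(np.abs((fitted - log_m) / log_m)) * 100
print(f"MAPE of fit: {mape:.3f}%")
```

The paper's contribution would then add a cluster indicator term on top of this rank-1 fit for the (age, time) cells flagged by the scan statistic.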

Keywords: mortality improvement, Lee–Carter model, spatial statistics, cluster detection

Procedia PDF Downloads 171
14126 Comparative Study of Non-Identical Firearms with Priority to Repair Subject to Inspection

Authors: A. S. Grewal, R. S. Sangwan, Dharambir, Vikas Dhanda

Abstract:

The purpose of this paper is to develop and analyze two reliability models for a system of two non-identical firearms: a standard firearm (called the original unit) and a country-made firearm (called the duplicate/substandard unit). There is a single server who comes immediately to carry out inspection and repair whenever needed. On the failure of the standard firearm, the server inspects the operative country-made firearm to see whether it is capable of performing the desired function well. If the country-made firearm is not capable of doing so, the operation of the system is stopped and the server starts repair of the standard firearm immediately. However, no inspection is done on the failure of the country-made firearm, as the standard firearm alone is capable of performing the given task well. In model I, priority to repair the standard firearm is given in case the system fails completely while the country-made firearm is already under repair, whereas in model II there is no such priority. The failure and repair times of each unit are assumed to be independent and uncorrelated random variables. The distributions of the failure times of the units are taken as negative exponential, while those of the repair and inspection times are general. Using a semi-Markov process and the regenerative point technique, some econo-reliability measures are obtained. Graphs are plotted to compare the MTSF (mean time to system failure), availability and profit of the models for a particular case.

Keywords: non-identical firearms, inspection, priority to repair, semi-Markov process, regenerative point

Procedia PDF Downloads 426
14125 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK

Authors: Esther Ekundayo

Abstract:

The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in their creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances it, to the user who most likely ends up creating the work in question. Others have suggested that this issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). However, under both UK and Nigerian copyright law, only human-created works are recognised, usually assessed on the basis of originality. This simply means that the work must have been created through its author’s creative and intellectual abilities and not copied. Such works are literary, dramatic, musical and artistic works, and these have recently been a topic of discussion with regard to generative artificial intelligence (Generative AI). Unlike Nigeria, the UK CDPA recognises computer-generated works and vests their authorship in the human who made the necessary arrangements for their creation. However, making the necessary arrangements was interpreted in Nova Productions Ltd v Mazooma Games Ltd similarly to the traditional authorship principle, which requires the skills of the creator to prove originality. Some commentators note that computer-generated works complicate this issue and recommend that AI-generated works should enter the public domain, as authorship cannot be allocated to the AI itself. Additionally, the UKIPO, recognising these issues in line with the growing AI trend, considered in a public consultation launched in 2022 whether computer-generated works should be protected at all and why, and if not, whether a new right with a different scope and term of protection should be introduced.
However, it concluded that the issue of computer-generated works would be revisited, as AI was still in its early stages. Conversely, given recent developments in this area with regard to Generative AI systems such as ChatGPT, Midjourney, DALL-E and AIVA, among others, which can produce human-like copyright creations, it is important to examine the relevant issues that could alter traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions but with slightly differing approaches to this area, this research seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK’s CGW provision worthy of emulation by Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors in both jurisdictions? This research further examines possible barriers to the implementation of such a law in Nigeria, such as limited technical expertise and a lack of awareness among policymakers, amongst others.

Keywords: authorship, artificial intelligence (AI), generative ai, computer-generated works, copyright, technology

Procedia PDF Downloads 97
14124 Integrated Lateral Flow Electrochemical Strip for Leptospirosis Diagnosis

Authors: Wanwisa Deenin, Abdulhadee Yakoh, Chahya Kreangkaiwal, Orawon Chailapakul, Kanitha Patarakul, Sudkate Chaiyo

Abstract:

LipL32 is an outer membrane protein present only in pathogenic Leptospira species, the causative agents of leptospirosis. Leptospirosis is often misdiagnosed as other febrile illnesses because its clinical manifestations are non-specific. Therefore, an accurate diagnostic tool for leptospirosis is critical for proper and prompt treatment. Diagnosis is typically performed via serological assays that assess the antibodies produced against Leptospira. However, the delayed antibody response and complicated procedures of these assays undoubtedly limit their practical utilization, especially in primary care settings. Here, we demonstrate for the first time early-stage detection of LipL32 by an integrated lateral-flow immunoassay with electrochemical readout (eLFIA). A ferrocene trace tag was monitored via differential pulse voltammetry operated on a smartphone-based device, allowing on-field testing. The strip achieved the lowest limit of detection (LOD), 8.53 pg/mL, and the broadest linear dynamic range (5 orders of magnitude) among sensors available thus far. Additionally, the developed test strip provided a straightforward yet sensitive approach for the diagnosis of leptospirosis using human sera collected from patients, with results comparable to the real-time polymerase chain reaction technique.
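An LOD figure like the 8.53 pg/mL quoted above is conventionally derived from the calibration curve, e.g. via the common LOD = 3.3·σ/slope rule (σ being the standard deviation of the blank or of the regression residuals). The sketch below uses invented calibration points, not the paper's data:

```python
# Estimating a limit of detection from a hypothetical linear calibration
# curve using LOD = 3.3 * (residual standard deviation) / slope.
import numpy as np

conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])       # pg/mL (hypothetical)
signal = np.array([0.25, 0.52, 1.26, 2.49, 5.03])     # uA (hypothetical)

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
residual_sd = np.std(residuals, ddof=2)               # 2 fitted parameters
lod = 3.3 * residual_sd / slope
print(f"slope = {slope:.4f} uA per pg/mL, LOD = {lod:.2f} pg/mL")
```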

Keywords: leptospirosis, electrochemical detection, lateral flow immunosensor, point-of-care testing, early-stage detection

Procedia PDF Downloads 93
14123 Transperineal Repair Is Ideal for the Management of Rectocele with Faecal Incontinence

Authors: Tia Morosin, Marie Shella De Robles

Abstract:

Rectocele may be associated with symptoms of both obstructed defecation and faecal incontinence. Numerous operative techniques currently exist to treat patients with rectocele; however, no single technique has emerged as the optimal approach in patients with post-partum faecal incontinence. The purpose of this study was to evaluate the clinical outcome in a consecutive series of patients who underwent transperineal repair of rectocele, presenting with faecal incontinence as the predominant symptom. Twenty-three consecutive patients with symptomatic rectocele underwent transperineal repair by a single surgeon from April 2000 to July 2015. All patients had a history of vaginal delivery, with or without evidence of associated anal sphincter injury at the time. The median age of the cohort was 53 years (range 21 to 90 years). The median operating time and length of hospital stay were 2 hours and 7 days, respectively. Two patients developed urinary retention post-operatively, which required temporary bladder catheterization. One patient had wound dehiscence, which was managed with absorbent dressings applied by the patient and her carer. There was no operative mortality. In all patients with rectocele, there was a concomitant anal sphincter disruption. All patients had satisfactory improvement in faecal incontinence on follow-up. This study suggests that transperineal repair provides excellent anatomic and physiologic results with minimal morbidity. However, because none of the patients regained full continence postoperatively, pelvic floor rehabilitation might also be needed to achieve better sphincter function in patients with incontinence.

Keywords: anal sphincter defect, faecal incontinence, rectocele, transperineal repair

Procedia PDF Downloads 127
14122 Clinical, Bacteriological and Histopathological Aspects of First-Time Pyoderma in a Population of Iranian Domestic Dogs: A Retrospective Study (2012-2017)

Authors: Shaghayegh Rafatpanah, Mehrnaz Rad, Ahmad Reza Movassaghi, Javad Khoshnegah

Abstract:

The purpose of the present study was to investigate the prevalence of isolation, antimicrobial susceptibility and ERIC-PCR typing of staphylococcal species from dogs with pyoderma. The study animals were 61 clinical cases of Iranian domestic dogs with first-time pyoderma. The prevalence of pyoderma was significantly higher amongst adult (odds ratio: 0.21; p = 0.001) large-breed (odds ratio: 2.42; p = 0.002) dogs. There was no difference in the prevalence of pyoderma between males and females (odds ratio: 1.27; p = 0.337). The 'head, face and pinna' and 'trunk' were the most affected regions, each with 19 cases (26.76%). An identifiable underlying disease was present in 52 (85.24%) of the dogs. Bacterial species were recovered from 43 of the 61 (70.49%) studied animals; no isolates were recovered from the other 18 dogs. The most frequently recovered bacterial genus was Staphylococcus (32/43 isolates, 74.41%), including S. epidermidis (22/43 isolates, 51.16%), S. aureus (7/43 isolates, 16.27%) and S. pseudintermedius (3/43 isolates, 6.97%). Resistance among staphylococcal species was most commonly seen against amoxicillin (94.11%), penicillin (83.35%), and ampicillin (76.47%). Resistance to cephalexin and cefoxitin was 5.88% and 2.94%, respectively. A total of 27 of the staphylococci isolated (84.37%) were resistant to at least one antimicrobial agent, and 19 isolates (59.37%) were resistant to three or more antimicrobial drugs. There were no significant differences in the prevalence of resistance between the staphylococci isolated from cases of superficial and deep pyoderma. ERIC-PCR results revealed 19 different patterns among 22 isolates of S. epidermidis and 7 isolates of S. aureus.

Keywords: dog, pyoderma, Staphylococcus, Staphylococcus epidermidis, Iran

Procedia PDF Downloads 180
14121 Disaster Management Approach for Planning an Early Response to Earthquakes in Urban Areas

Authors: Luis Reynaldo Mota-Santiago, Angélica Lozano

Abstract:

Determining appropriate measures to face earthquakes is a challenge for practitioners. In the literature, some analyses consider disaster scenarios but disregard important field characteristics. Sometimes, software that estimates the number of victims and infrastructure damage is used. Other times, historical information from previous events is used, or the scenarios' information is assumed to be available even though this is not usual in practice. Humanitarian operations start immediately after an earthquake strikes, and the first hours of relief efforts are important; local efforts are critical to assess the situation and deliver relief supplies to the victims. One preparation action is prepositioning stockpiles, most of them at central warehouses placed away from damage-prone areas, which requires large facilities and budgets. Usually, the decisions made in the first 12 hours after the disaster (the standard relief time, SRT) are the location of temporary depots and the design of distribution paths. The motivation for this research was the delayed reaction of the early relief efforts, which led to the late arrival of aid in some areas after the magnitude-7.1 Mexico City earthquake in 2017. Hence, a preparation approach for planning the immediate response to earthquake disasters is proposed, intended for local governments and considering their capabilities for planning and for responding during the SRT, in order to reduce the start-up time of immediate response operations in urban areas. The first steps are the generation and analysis of disaster scenarios, which allow estimating the relief demand before and in the early hours after an earthquake.
The scenarios can be based on historical data and/or the seismic hazard analysis of an Atlas of Natural Hazards and Risk, as a way to address limited or non-existent information. The following steps include the decision processes for: a) locating local depots (places for prepositioning stockpiles) and aid-giving facilities as close as possible to risk areas; and b) designing the vehicle paths for aid distribution (from local depots to the aid-giving facilities), which can be used at the beginning of the response actions. This approach speeds up the delivery of aid in the early moments of the emergency, which can reduce the suffering of the victims and allow additional time to integrate a broader and more streamlined response (according to new information) from national and international organizations into these efforts. The proposed approach is applied to two case studies in Mexico City. These areas were affected by the 2017 earthquake and received a limited aid response. The approach generates disaster scenarios in an easy way and plans a faster early response with a small quantity of stockpiles, which can be managed in the early hours of the emergency by local governments. Considering long-term storage, the estimated stockpile quantities require a limited maintenance budget and a small storage space. These stockpiles are also useful for addressing other kinds of emergencies in the area.
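The depot-location step (a) above is, at its core, a covering problem. A minimal greedy set-cover sketch of that step follows; the candidate sites, travel times, and coverage limit are all invented for illustration, and the paper's actual optimization method is not specified here:

```python
# Greedy heuristic: pick depot sites until every demand area is within a
# travel-time limit of some chosen depot. All data below are hypothetical.
import itertools

# travel_time[depot_candidate][area] in minutes
travel_time = {
    "D1": {"A1": 8, "A2": 25, "A3": 40, "A4": 12},
    "D2": {"A1": 30, "A2": 10, "A3": 15, "A4": 35},
    "D3": {"A1": 45, "A2": 20, "A3": 9, "A4": 28},
}
LIMIT = 20  # max minutes from depot to area

# Which areas each candidate depot can serve within the limit
covers = {d: {a for a, t in ts.items() if t <= LIMIT}
          for d, ts in travel_time.items()}
uncovered = set(itertools.chain.from_iterable(covers.values()))
chosen = []
while uncovered:
    # Pick the depot covering the most still-uncovered areas
    best = max(covers, key=lambda d: len(covers[d] & uncovered))
    chosen.append(best)
    uncovered -= covers[best]
print("depots:", chosen)
```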

Keywords: disaster logistics, early response, generation of disaster scenarios, preparation phase

Procedia PDF Downloads 110
14120 Teaching Writing in the Virtual Classroom: Challenges and the Way Forward

Authors: Upeksha Jayasuriya

Abstract:

The sudden transition from onsite to online teaching/learning due to the COVID-19 pandemic called for feasible as well as effective methods of online teaching in most developing countries, including Sri Lanka. The English as a Second Language (ESL) classroom faces specific challenges in this adaptation, and teaching writing can be identified as the most challenging task compared to teaching the other three skills. This study was therefore carried out to explore the challenges of teaching writing online and to identify effective means of overcoming them, while taking into consideration the attitudes of students and teachers with regard to learning/teaching English writing via online platforms. A survey questionnaire was distributed (electronically) among 60 students from the University of Colombo, the University of Kelaniya, and The Open University in order to find out the challenges faced by students, while in-depth interviews were conducted with 12 lecturers from those universities. The findings reveal that the inability to observe students' writing and to receive real-time feedback discourages students from engaging in writing activities when taught online. It was also discovered that both students and teachers increasingly prefer Google Slides over other platforms such as Padlet, Linoit, and Jamboard, as it boosts learner autonomy and student-teacher interaction, which in turn allows real-time formative feedback, observation of student work, and assessment. Accordingly, it can be recommended that teaching writing online is better facilitated by interactive platforms such as Google Slides, for they promote active learning and student engagement in the ESL class.

Keywords: ESL, teaching writing, online teaching, active learning, student engagement

Procedia PDF Downloads 89
14119 Circular Economy in Social Practice in Response to Social Needs: Community Actions Versus Government Policy

Authors: Sai-Kit Choi

Abstract:

While traditional social services depend heavily on Government funding and support, there is always a time lag, and resources are often mismatched with fast-growing and changing social needs. This study aims to investigate the effectiveness of implementing the Circular Economy concept in a social service setting, compared with Government policy, in response to social needs in three areas: response time, suitability, and community participation. To investigate this effectiveness, a real service model, a community resource-sharing platform, was set up, and statistics from its first 6 months of operation were compared with traditional social services. A literature review was conducted as the reference basis for traditional social services under Government policy. Case studies were conducted to provide qualitative perspectives on the innovative approach. The results suggest that the Circular Economy model showed an extraordinarily high level of community participation. In addition, it could utilize community resources precisely in response to pressing social needs. On the other hand, the available resources were unstable compared with services supported by Government funding. The research team concluded that the Circular Economy has high potential for application in social services, especially in certain areas such as resource-sharing platforms. Nevertheless, the stability of resources should be considered when services target crucial needs.

Keywords: circular economy, social innovation, community participation, sharing economy, social response

Procedia PDF Downloads 113
14118 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun & Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model was tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. Therefore, it is concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
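For reference, the classical WoE computation that the Hybrid Model replaces (by matching an ML score distribution instead) can be sketched in a few lines. The per-bin good/bad counts below are illustrative only:

```python
# Classical Weight-of-Evidence per bin, plus Information Value (IV):
# WoE = ln((good share of bin) / (bad share of bin)). Counts are hypothetical.
import math

bins = {"low": (400, 10), "mid": (300, 30), "high": (100, 60)}  # (goods, bads)
total_good = sum(g for g, _ in bins.values())
total_bad = sum(b for _, b in bins.values())

woe = {}
for name, (g, b) in bins.items():
    woe[name] = math.log((g / total_good) / (b / total_bad))
iv = sum((g / total_good - b / total_bad) * woe[n]
         for n, (g, b) in bins.items())
print({n: round(w, 3) for n, w in woe.items()}, "IV =", round(iv, 3))
```

Positive WoE marks bins dominated by goods, negative WoE bins dominated by bads; the hybrid approach instead back-solves these bin values from the ML model's score distribution.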

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 134
14117 Fighting Competition Stress by Focusing the Psychological Training on the Vigor-Activity Mood States

Authors: Majid Al-Busafi, Alexe Cristina Ioana, Alexe Dan Iulian

Abstract:

The specific competition and pre-competition stress in professional track and field demands increasing engagement, from a biological and psychological point of view, from middle-distance and long-distance runners seeking the top performances that would let them win in competition. Under these conditions, if psychological stress is not properly managed, its negative effects can lead to a total drop in self-confidence and can affect the athlete's value, talent, and self-trust, which generates even higher stress. One of the means at our disposal is psychological training, specially adapted to the athlete's individual characteristics and to the characteristics of the athletic event or of the competition. This paper aims to highlight original aspects regarding the effects of a specific psychological training program on the mood states characterized by psychological activation, vigor, and vitality. The subjects were 12 professional middle-distance and long-distance runners who voluntarily took part in an applied intervention over the course of 6 months (a competition season). The results indicated that the application of a psychological training program, adapted to the track and field competition system, over a period characterized by high competition stress, can increase states of vigor and psychological activation while diminishing the moods that negatively affect performance in middle-distance and long-distance running events. This conclusion confirms the hypothesis of this research.

Keywords: competition stress, psychological training, track and field, vigor-activity

Procedia PDF Downloads 458
14116 Antihyperglycaemic and Antihyperlipidemic Activities of Pleiogynium timorense Seeds and Identification of Bioactive Compounds

Authors: Ataa A. Said, Elsayed A. Abuotabl, Gehan F. Abdel Raoof, Khaled Y. Mohamed

Abstract:

The aim of this study is to evaluate the antihyperglycaemic and antihyperlipidemic activities of Pleiogynium timorense (DC.) Leenh (Anacardiaceae) seeds, as well as to isolate and identify the bioactive compounds. The antihyperglycaemic effect was evaluated by measuring the effect of two dose levels (150 and 300 mg/kg) of a 70% methanol extract of Pleiogynium timorense seeds on the blood glucose level when administered 45 minutes before glucose loading. In addition, the effect of the plant extract on the lipid profile was determined by measuring serum total lipids (TL), total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C). Furthermore, the bioactive compounds were isolated and identified by chromatographic and spectrometric methods. The results showed that the methanolic extract of the seeds significantly reduced the levels of blood glucose, TL, TC, TG and LDL-C but had no significant effect on HDL-C compared with the control group. Furthermore, four phenolic compounds were isolated and identified: catechin, gallic acid, para-methoxybenzaldehyde and pyrogallol, all isolated from the plant for the first time. In addition, a sulphur-containing compound (sulpholane) was isolated for the first time from the plant and from the family. To our knowledge, this is the first study of the antihyperglycaemic and antihyperlipidemic activities of the seeds of Pleiogynium timorense and its bioactive compounds. Thus, the methanolic extract of the seeds of Pleiogynium timorense could be a step towards the development of new antihyperglycaemic and antihyperlipidemic drugs.

Keywords: antihyperglycaemic, bioactive compounds, phenolic, Pleiogynium timorense, seeds

Procedia PDF Downloads 220
14115 Effects of Bipolar Plate Coating Layer on Performance Degradation of High-Temperature Proton Exchange Membrane Fuel Cell

Authors: Chen-Yu Chen, Ping-Hsueh We, Wei-Mon Yan

Abstract:

Over the past few centuries, human requirements for energy have been met by burning fossil fuels. However, exploiting this resource has led to global warming and innumerable environmental issues. Thus, finding alternative solutions to the growing demand for energy has recently been driving the development of low-carbon and even zero-carbon energy sources. Wind power and solar energy are good options, but they suffer from unstable power output due to unpredictable weather conditions. To overcome this problem, a reliable and efficient energy storage sub-system is required in future distributed-power systems. Among all kinds of energy storage technologies, the fuel cell system with hydrogen storage is a promising option because it is suitable for large-scale and long-term energy storage. The high-temperature proton exchange membrane fuel cell (HT-PEMFC) with metallic bipolar plates is a promising fuel cell system because an HT-PEMFC can tolerate a higher CO concentration and metallic bipolar plates can reduce the cost of the fuel cell stack. However, the operating life of metallic bipolar plates is a critical issue because of corrosion. As a result, in this work, we apply different coating layers to the metal surface and investigate their protection performance. The tested bipolar plates include uncoated SS304 bipolar plates, titanium nitride (TiN) coated SS304 bipolar plates and chromium nitride (CrN) coated SS304 bipolar plates. The results show that the TiN coated SS304 bipolar plate has the lowest contact resistance and through-plane resistance and the best cell performance and operating life among all tested bipolar plates. The long-term in-situ fuel cell tests show that the HT-PEMFC with TiN coated SS304 bipolar plates has the lowest performance decay rate, followed by the CrN coated SS304 bipolar plate, while the uncoated SS304 bipolar plate has the worst performance decay rate.
The performance decay rates with TiN coated SS304, CrN coated SS304 and uncoated SS304 bipolar plates are 5.324×10⁻³ % h⁻¹, 4.513×10⁻² % h⁻¹ and 7.870×10⁻² % h⁻¹, respectively. In addition, the EIS results indicate that the uncoated SS304 bipolar plate has the highest growth rate of ohmic resistance, whereas the ohmic resistance with the TiN coated SS304 bipolar plates increases only slightly with time. The growth rates of ohmic resistance with TiN coated SS304, CrN coated SS304 and uncoated SS304 bipolar plates are 2.85×10⁻³ h⁻¹, 3.56×10⁻³ h⁻¹, and 4.33×10⁻³ h⁻¹, respectively. On the other hand, the charge transfer resistances with all three bipolar plates increase with time, but at similar rates. In addition, the effective catalyst surface areas with all bipolar plates do not change significantly with time. Thus, it is inferred that the major cause of the performance degradation is the ohmic resistance rising with time, which is associated with corrosion and oxidation on the surface of the stainless steel bipolar plates.

Keywords: coating layer, high-temperature proton exchange membrane fuel cell, metallic bipolar plate, performance degradation

Procedia PDF Downloads 281
14114 Hydrofracturing for Low Temperature Waxy Reservoirs: Problems and Solutions

Authors: Megh Patel, Arjun Chauhan, Jay Thakkar

Abstract:

Hydrofracturing is the most prominent, but at the same time the most expensive, highly skilled and time-consuming, well stimulation technique. Due to the high cost and skilled labor involved, it is generally carried out as the ultimate solution among well stimulation techniques. Considering today's global petroleum market, no gaffe or complication can be entertained during fracturing, as it would further hamper the current dwindling economy. This paper deals with the challenges encountered during fracturing of low temperature waxy reservoirs and the prominent solutions to overcome such teething troubles. During fracturing treatment of shallow, high freezing point waxy oil reservoirs, the first-line problems to overcome are incomplete breakdown, incomplete cleanup of fracturing fluids, and cold damage to the formations caused by injecting cold fluid (fluid at ambient conditions). Injecting fracturing fluids at ambient conditions tends to decrease the near-wellbore reservoir temperature below the freezing point of the reservoir oil, leading to wax deposition around the wellbore and thereby hampering both fluid production and fracture propagation. To overcome such problems, solutions such as hot fracturing fluid injection, encapsulated heat-generating hydraulic fracturing fluid systems, and injection of wax inhibitors are discussed. The paper also examines the changes in rheological properties that occur when fracturing fluids are heated, and solutions to deal with them, taking economic considerations into account.

Keywords: hydrofracturing, waxy reservoirs, low temperature, viscosity, crosslinkers

Procedia PDF Downloads 258
14113 Web Proxy Detection via Bipartite Graphs and One-Mode Projections

Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo

Abstract:

With the Internet becoming the dominant channel for business and daily life, many IPs are increasingly masked by web proxies for illegal purposes such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become an increasingly challenging task due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs to discover the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users derived from the one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. A web proxy's URL may vary from time to time, but the users' inherent interests do not. Based on this intuition, using our private tools implemented with WebDriver, we examine whether the top URLs visited by the web proxy users are web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially unknown ones.
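As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below builds a toy user-by-URL bipartite matrix, forms the one-mode projection onto users, and splits the resulting similarity graph with a minimal spectral step (a median split on the Fiedler vector). All data and names are hypothetical.

```python
import numpy as np

def one_mode_projection(biadj):
    """Project a user x URL bipartite adjacency matrix onto users:
    entry (i, j) counts URLs visited by both user i and user j."""
    return biadj @ biadj.T

def spectral_bipartition(similarity):
    """Minimal spectral clustering: split users by the Fiedler vector
    (eigenvector of the second-smallest Laplacian eigenvalue)."""
    lap = np.diag(similarity.sum(axis=1)) - similarity
    _, vecs = np.linalg.eigh(lap)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > np.median(fiedler)).astype(int)

# Hypothetical traffic: rows = users, columns = URLs. Users 0-1 favor
# one set of URLs, users 2-3 another, with slight overlap.
biadj = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])
sim = one_mode_projection(biadj)
np.fill_diagonal(sim, 0)                   # drop self-similarity
labels = spectral_bipartition(sim)         # e.g. [0, 0, 1, 1] or [1, 1, 0, 0]
```

A production detector would replace the toy matrix with host-communication counts from traffic logs and use a full k-way spectral clustering, but the projection-then-cluster structure is the same.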

Keywords: bipartite graph, one-mode projection, clustering, web proxy detection

Procedia PDF Downloads 245
14112 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone's camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as “person” or “dog.” A recent advancement in computer vision, particularly for human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROIs), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method of pilot identification among multiple individuals that also gives the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose can combine multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier. Hand keypoint detection, with ROIs for each finger, then offers a greater variety of signal options for the pilot once identified. In this work, an individual must raise their non-control arm to be identified as the operator and send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm.
When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and they can then begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection combined with restricting OpenPose to body and hand detection allows this control method to perform as intended while maintaining the responsiveness required for practical use.
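The arm-raise handover rule described above can be sketched as a single-frame decision function. The keypoint layout below is a simplified, hypothetical stand-in for OpenPose's per-person body-keypoint output (which is in fact an array of 25 (x, y, confidence) triples, not a named dict):

```python
def arm_raised(person):
    """In image coordinates y grows downward, so a raised wrist has a
    smaller y value than the shoulder."""
    return person["left_wrist"][1] < person["left_shoulder"][1]

def select_operator(people, current=None):
    """Return the index of the active operator for one camera frame.

    The current operator keeps control while their non-control (left)
    arm stays raised; once they lower it, the first other person with a
    raised arm becomes the new operator, with no landing required."""
    if current is not None and current < len(people) and arm_raised(people[current]):
        return current  # all other individuals are ignored
    for i, person in enumerate(people):
        if arm_raised(person):
            return i
    return None  # nobody is signalling; hand commands are ignored

# Hypothetical keypoints, (x, y) in pixels.
frame = [
    {"left_wrist": (100, 60), "left_shoulder": (100, 140)},   # arm raised
    {"left_wrist": (320, 200), "left_shoulder": (320, 150)},  # arm lowered
]
operator = select_operator(frame)   # person 0 takes control
```

In the real system this decision would run on every OpenPose output frame, with the hand keypoints of the selected operator feeding the command decoder.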

Keywords: computer vision, drone control, keypoint detection, openpose

Procedia PDF Downloads 184
14111 The Science of Dreaming and Sleep in Selected Charles Dickens' Novels and Letters

Authors: Olga Colbert

Abstract:

The present work examines the representation of dreaming in Charles Dickens’ novels, particularly Oliver Twist. Dickens showed great interest in the science of dreaming and had ample knowledge of the latest dream theories in the Victorian era, as can be seen in his personal correspondence, most notably in his famous letter to Dr. Thomas Stone on 2/2/1851. This essay places Dickens’ personal writings side by side with his novels to elucidate whether the scientific paradigm about dreaming included in the novel is consistent with the current (in Dickens’ time) scientific knowledge, or whether it is anachronistic or visionary (ahead of his time). Oliver Twist is particularly useful because it contains entire passages pondering on the nature of dreaming, enumerating types of common dreams, and taking a stand on the interference of sensory perception during the dreaming state. The author is particularly intrigued by Dickens’ assumption of the commonality and universality of lucid dreaming as revealed in these passages. This essay places popular Victorian dream theories, such as those contained in Robert Macnish’s The Philosophy of Sleep, side by side with recent dream theory, particularly psychophysiologist Stephen LaBerge’s numerous articles and books on the topic of lucid dreaming to see if Dickens deviated in any way from the reigning paradigm of the Victorian era in his representation of dreaming in his novels. While Dickens puts to great narrative use many of the characteristics of dreaming described by leading Victorian theorists, the author of this study argues, however, that Dickens’ most visionary statements derive from his acute observations of his own dreaming experiences.

Keywords: consciousness, Dickens, dreaming, lucid dreaming, Victorian

Procedia PDF Downloads 289
14110 Batch Adsorption Studies for the Removal of Textile Dyes from Aqueous Solution on Three Different Pine Bark

Authors: B. Cheknane, F. Zermane

Abstract:

The main objective of the present study is the valorization of natural raw materials of plant origin for the treatment of textile industry wastewater. The selected barks were maritime pine (MP), pinyon pine (PP) and Aleppo pine (AP) bark. The efficiency of these barks was tested for the removal of three dyes: rhodamine B (RhB), malachite green (GM) and methyl orange (MO). We first studied the different parameters that can influence the adsorption process, such as the nature of the adsorbents, the nature of the pollutants (dyes) and the effect of pH. The results reveal that the adsorption rate is strongly influenced by the pH of the medium, and the comparative study shows that adsorption is favored in acidic medium, with adsorbed amounts of Q = 40 mg/g for rhodamine B and Q = 46 mg/g for methyl orange. The adsorption kinetics reveal that GM molecules are adsorbed better (Q = 48 mg/g) than RhB (Q = 46 mg/g) and methyl orange (Q = 18 mg/g), with an equilibrium time of 6 hours. The adsorption isotherms show clearly that maritime pine bark is the most effective adsorbent, with adsorbed amounts of QRhB = 200 mg/g and QMO = 88 mg/g, followed by pinyon pine (PP) with QRhB = 184 mg/g and QMO = 56 mg/g, and finally Aleppo pine (AP) bark with QRhB = 131 mg/g and QMO = 46 mg/g. The obtained isotherms were modeled using the Langmuir and Freundlich models and, according to the adjustment coefficient values R², are well represented by the Freundlich model.
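An isotherm fit of the kind reported above can be reproduced with a standard nonlinear least-squares routine. The data below are synthetic, generated from assumed Freundlich constants purely to illustrate the fitting and R² comparison; they are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c_e, k_f, n):
    """Freundlich isotherm: q_e = K_F * C_e^(1/n)."""
    return k_f * c_e ** (1.0 / n)

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

def r_squared(q_obs, q_fit):
    """Adjustment coefficient R^2 used to compare the two models."""
    ss_res = np.sum((q_obs - q_fit) ** 2)
    ss_tot = np.sum((q_obs - q_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic equilibrium data (C_e in mg/L, q_e in mg/g) generated from
# an assumed Freundlich law with K_F = 25 and n = 2, for illustration.
c_e = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
q_e = 25.0 * c_e ** 0.5

(k_f, n), _ = curve_fit(freundlich, c_e, q_e, p0=(10.0, 1.0))
r2_freundlich = r_squared(q_e, freundlich(c_e, k_f, n))
```

Fitting `langmuir` to the same data and comparing its R² against `r2_freundlich` mirrors the model selection step reported in the abstract.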

Keywords: maritime pine bark (MP), pinyon pine bark (PP), Aleppo pine (AP) bark, adsorption, dyes

Procedia PDF Downloads 319
14109 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling

Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng

Abstract:

This paper develops a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution and factory emissions. The observation data include wind speed, wind direction, relative humidity, temperature and others. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine and provides an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km × 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures used to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspended Particulates, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, based on long-term data observation and calibration.
These different time-series qualitative and quantitative data successfully constitute a causal inference engine in the cloud, making factory management control practicable. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail their operations and reduce emissions in advance.
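The particle tracking and random walk step for advection and diffusion can be sketched as a generic Lagrangian dispersion loop. This is a textbook construction with invented numbers (source location, wind, diffusivity, horizon), not the engine described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(particles, wind, dt, diffusion):
    """Advance particle positions one time step: advection by the wind
    vector plus an isotropic Gaussian random walk for turbulent
    diffusion. The step std sqrt(2*D*dt) is the Brownian-motion scaling."""
    drift = np.asarray(wind) * dt
    noise = rng.normal(0.0, np.sqrt(2.0 * diffusion * dt), particles.shape)
    return particles + drift + noise

# Hypothetical stack at the origin releasing 10,000 particles,
# 3 m/s easterly wind, D = 50 m^2/s, one-hour horizon in 60 s steps.
particles = np.zeros((10_000, 2))
for _ in range(60):
    particles = step(particles, wind=(3.0, 0.0), dt=60.0, diffusion=50.0)

mean_x = particles[:, 0].mean()   # ≈ 3 m/s * 3600 s = 10,800 m downwind
```

Binning the final positions into a 1 km × 1 km grid (e.g. with `np.histogram2d`) would yield the kind of hourly concentration field the engine exports to netCDF4.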

Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT

Procedia PDF Downloads 87
14108 Analytical Study: An M-Learning App Reflecting the Factors Affecting Student’s Adoption of M-Learning

Authors: Ahmad Khachan, Ahmet Ozmen

Abstract:

This study introduces a mobile bite-sized learning concept: a mobile application with social-network motivation factors that encourages students to practice critical thinking, improve analytical skills and learn knowledge sharing. We do not aim to propose another e-learning or distance-learning tool like Moodle or Edmodo; instead, we introduce a mobile learning tool called the Interactive M-learning Application. The tool reconstructs and strengthens the bonds between educators and learners and provides a foundation for integrating mobile devices in education. The application allows learners to stay connected all the time, share ideas, ask questions and learn from each other. It is built on Android, since Android had the largest platform share in the world, dominating the market with 74.45% in 2018. We chose the Google Firebase server for hosting because of its flexibility, ease of hosting and real-time update capabilities. The proposed m-learning tool was offered to four groups of university students in different majors. An improvement in the relations between the students, the teachers and the academic institution was evident. Students' performance improved markedly, alongside advances in analytical and critical skills and a greater willingness to adopt mobile learning in class. We also compared our app with another tool used in the same class to check the clarity and reliability of the results. The students' own mobile devices were used in this experimental study to ensure a diversity of devices and platform versions.

Keywords: education, engineering, interactive software, undergraduate education

Procedia PDF Downloads 155
14107 Effect of Brewing on the Bioactive Compounds of Coffee

Authors: Ceyda Dadali, Yeşim Elmaci

Abstract:

Coffee was introduced as an economic crop during the fifteenth century; nowadays it is the most important food commodity, ranking second after crude oil. Desirable sensory properties make coffee one of the most frequently consumed and most popular beverages in the world. The preparation method has a significant effect on the flavor and composition of coffee brews. Three different extraction methodologies, namely decoction, infusion and pressure methods, have been used for coffee brew preparation. Each of these methods is associated with a specific granulation (coffee grind), water-to-coffee ratio, temperature and brewing time. Coffee is a mixture of 1500 chemical compounds. The chemical composition of coffee depends strongly on the brewing method, the coffee bean species and the roasting time and temperature. Coffee contains a number of very important bioactive compounds, such as the diterpenes cafestol and kahweol; the alkaloids caffeine, theobromine and trigonelline; melanoidins; and phenolic compounds. The phenolic compounds of coffee include chlorogenic acids (quinyl esters of hydroxycinnamic acids) and caffeic, ferulic and p-coumaric acids. Caffeoylquinic acids, feruloylquinic acids and di-caffeoylquinic acids are the three main groups of chlorogenic acids in coffee, constituting 6-10% of its dry weight. The bioavailability of chlorogenic acids in coffee depends on their absorption and metabolization to biomarkers in individuals. The interaction of coffee polyphenols with other compounds, such as dietary proteins, also affects the biomarkers. Since the bioactive composition of coffee depends on the brewing method, the effect of the brewing method on the bioactive compounds of coffee is discussed in this study.

Keywords: bioactive compounds of coffee, biomarkers, coffee brew, effect of brewing

Procedia PDF Downloads 196
14106 Synthesis and Characterisation of Bio-Based Acetals Derived from Eucalyptus Oil

Authors: Kirstin Burger, Paul Watts, Nicole Vorster

Abstract:

Green chemistry focuses on syntheses that have a low negative impact on the environment. This research focuses on synthesizing novel compounds from all-natural Eucalyptus citriodora oil. Eight novel plasticizer compounds are synthesized and optimized using flow chemistry technology. A precursor to one novel compound can be synthesized from the lauric acid present in coconut oil. Key parameters such as catalyst screening and loading, reaction time, temperature, and residence time under flow chemistry conditions are investigated. The compounds are characterised using GC-MS, FT-IR, 1H and 13C-NMR techniques, and X-ray crystallography. The efficiency of the compounds is compared to two commercial plasticizers, i.e. dibutyl phthalate and Eastman 168. Several plasticized PVC film formulations are produced using the novel bio-based compounds. Tensile strength, stress at fracture and percentage elongation are tested. The effect of increasing the plasticizer percentage in the film formulations is investigated over loadings of 3, 6, 9 and 12%. The diastereoisomers of each compound are separated and formulated into PVC films, and differences in tensile strength are measured. Leaching tests, flexibility, and changes in glass transition temperature of the plasticized PVC films are recorded. A research objective is to use these novel compounds as a green bio-plasticizer alternative in plastic products for infants. The inhibitory effect of the compounds on six pathogens affecting infants is studied, namely Escherichia coli, Staphylococcus aureus, Shigella sonnei, Pseudomonas putida, Salmonella choleraesuis and Klebsiella oxytoca.

Keywords: bio-based compounds, plasticizer, tensile strength, microbiological inhibition, synthesis

Procedia PDF Downloads 186
14105 Sphingosomes: Potential Anti-Cancer Vectors for the Delivery of Doxorubicin

Authors: Brajesh Tiwari, Yuvraj Dangi, Abhishek Jain, Ashok Jain

Abstract:

The purpose of this investigation was to evaluate the potential of sphingosomes as nanoscale drug delivery units for site-specific delivery of anti-cancer agents. Doxorubicin hydrochloride (DOX) was selected as a model anti-cancer agent. Sphingosomes were prepared, loaded with DOX and optimized for size and drug loading. The formulations were characterized by Malvern zeta-sizer and Transmission Electron Microscopy (TEM) studies. The sphingosomal formulations were further evaluated in an in-vitro drug release study under various pH profiles, which showed an initial rapid release of the drug followed by a slow, controlled release. In-vivo studies of the optimized formulations and the free drug were performed on albino rats to compare drug plasma concentrations. The in-vivo study revealed that the prepared system gave DOX an enhanced circulation time, a longer half-life and slower elimination kinetics compared to the free drug. Further, it can be inferred that the formulation would selectively enter the highly porous mass of tumor cells while sparing normal tissues. To summarize, the use of sphingosomes as carriers of anti-cancer drugs may prove to be a fascinating approach that selectively localizes in the tumor mass, increasing the therapeutic margin of safety while reducing the side effects associated with anti-cancer agents.

Keywords: sphingosomes, anti-cancer, doxorubicin, formulation

Procedia PDF Downloads 303
14104 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions

Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to these challenges. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual quaternion based kinematic description. G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte-Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner.
The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution, 2) critical physical and output constraints are respected, 3) robustness to sensor errors and uncertainties in the system is proven, and 4) it couples translational motion with rotational motion.
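The dual quaternion coupling of rotation and translation can be illustrated with a minimal numpy implementation of rigid-motion composition. This is the generic textbook construction, not the authors' G&C code.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_from_pose(q_rot, t):
    """Unit dual quaternion (real, dual) for the rigid motion
    p -> R(q_rot) p + t; the dual part is 0.5 * (0, t) * q_rot."""
    return q_rot, 0.5 * qmul(np.array([0.0, *t]), q_rot)

def dq_mul(a, b):
    """Compose rigid motions (apply b first, then a):
    (ar, ad)(br, bd) = (ar br, ar bd + ad br)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_translation(dq):
    """Recover the translation vector: t = 2 * dual * conj(real)."""
    real, dual = dq
    return 2.0 * qmul(dual, qconj(real))[1:]

# A 90-degree rotation about z composed with a prior 1 m step along x:
# the translation is carried through the rotation automatically.
rot_z90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
A = dq_from_pose(rot_z90, [0.0, 0.0, 0.0])
B = dq_from_pose(np.array([1.0, 0.0, 0.0, 0.0]), [1.0, 0.0, 0.0])
C = dq_mul(A, B)
t_C = dq_translation(C)   # the x-step is rotated onto y: (0, 1, 0)
```

Because one multiplication propagates attitude and position together, the MPC prediction model can track the coupled relative pose with a single algebraic object instead of separate rotation and translation states.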

Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing

Procedia PDF Downloads 146
14103 Factors Affecting Early Antibiotic Delivery in Open Tibial Shaft Fractures

Authors: William Elnemer, Nauman Hussain, Samir Al-Ali, Henry Shu, Diane Ghanem, Babar Shafiq

Abstract:

Introduction: The incidence of infection in open tibial shaft injuries varies depending on the severity of the injury, with rates ranging from 1.8% for Gustilo-Anderson type I to 42.9% for type IIIB fractures. The timely administration of antibiotics upon presentation to the emergency department (ED) is an essential component of fracture management, and evidence indicates that prompt delivery of antibiotics is associated with improved outcomes. The objective of this study is to identify factors that contribute to expedient administration of antibiotics. Methods: This is a retrospective study of open tibial shaft fractures at an academic Level I trauma center. Current Procedural Terminology (CPT) codes identified all patients treated for open tibial shaft fractures between 2015 and 2021. Open fractures were identified by reviewing ED and provider notes, and ballistic fractures were considered open. Chart reviews were performed to extract demographics, fracture characteristics, postoperative outcomes, time to the operating room, and time to antibiotic order and delivery. Univariate statistical analysis compared patients who received early antibiotics (EA), delivered within one hour of ED presentation, and those who received late antibiotics (LA), delivered more than one hour after ED presentation. A multivariate analysis was performed to investigate patient, fracture, and transport/ED characteristics contributing to faster delivery of antibiotics. The multivariate analysis included the following variables: ballistic fracture, activation of Delta Trauma, Gustilo-Anderson classification (Type III vs. Types I and II), AO-OTA classification (Type C vs. Types A and B), arrival between 7 am and 11 pm, and arrival via Emergency Medical Services (EMS) or walk-in. Results: Seventy ED patients with open tibial shaft fractures were identified. Of these, 39 patients (55.7%) received EA, while 31 patients (44.3%) received LA.
Univariate analysis shows that arrival via EMS as opposed to walk-in (97.4% vs. 74.2%, p = 0.01) and activation of Delta Trauma (89.7% vs. 51.6%, p < 0.001) were significantly more frequent in the EA group than in the LA group. Additionally, EA cases had significantly shorter intervals between the antibiotic order and delivery than LA cases (0.02 hours vs. 0.35 hours, p = 0.007). No other significant differences were found in postoperative outcomes or fracture characteristics. Multivariate analysis shows that a Delta Trauma response, arrival via EMS, and presentation between 7 am and 11 pm were independent predictors of a shorter time to antibiotic administration (Odds Ratio = 11.9, 30.7, and 5.4; p = 0.001, 0.016, and 0.013, respectively). Discussion: Earlier antibiotic delivery is associated with arrival at the ED between 7 am and 11 pm, arrival via EMS, and a coordinated Delta Trauma activation. Our findings indicate that in cases where administering antibiotics promptly is critical to achieving positive outcomes, it is advisable to employ a coordinated Delta Trauma response. Hospital personnel should be attentive to the rapid administration of antibiotics to patients with open fractures who arrive via walk-in or during late-night hours.
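For reference, an odds ratio like those reported above is computed from a 2x2 contingency table as (a/b)/(c/d). The counts below are back-calculated from the reported univariate EMS percentages (38/39 ≈ 97.4%, 23/31 ≈ 74.2%) as a hypothetical reconstruction for illustration; they are not the study's adjusted multivariate estimates.

```python
def odds_ratio(exposed_event, exposed_no_event, control_event, control_no_event):
    """Odds ratio for a 2x2 contingency table: (a/b) / (c/d)."""
    return (exposed_event / exposed_no_event) / (control_event / control_no_event)

# Hypothetical reconstruction: 38 of 39 early-antibiotic patients
# arrived via EMS (97.4%), 23 of 31 late-antibiotic patients did
# (74.2%). The study's multivariate OR of 30.7 additionally adjusts
# for the other predictors, so it differs from this crude value.
or_ems_crude = odds_ratio(38, 1, 23, 8)
```

A crude OR like this is what a univariate comparison yields; the multivariate model reported in the abstract estimates each predictor's OR while holding the others fixed.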

Keywords: antibiotics, emergency department, fracture management, open tibial shaft fractures, orthopaedic surgery, time to OR, trauma fractures

Procedia PDF Downloads 65
14102 Identification, Synthesis, and Biological Evaluation of the Major Human Metabolite of NLRP3 Inflammasome Inhibitor MCC950

Authors: Manohar Salla, Mark S. Butler, Ruby Pelingon, Geraldine Kaeslin, Daniel E. Croker, Janet C. Reid, Jong Min Baek, Paul V. Bernhardt, Elizabeth M. J. Gillam, Matthew A. Cooper, Avril A. B. Robertson

Abstract:

MCC950 is a potent and selective inhibitor of the NOD-like receptor pyrin domain-containing protein 3 (NLRP3) inflammasome that shows early promise for the treatment of inflammatory diseases. The identification of the major metabolites of a lead molecule is an important step in the drug development process: it provides information about the metabolically labile sites in the molecule, helping medicinal chemists to design metabolically stable molecules. To identify the major metabolites of MCC950, the compound was incubated with human liver microsomes, and subsequent analysis by (+)- and (−)-QTOF-ESI-MS/MS revealed a major metabolite formed by hydroxylation of the 1,2,3,5,6,7-hexahydro-s-indacene moiety of MCC950. This major metabolite can lose two water molecules, and three possible regioisomers were synthesized. Co-elution of the major metabolite with each of the synthesized compounds using HPLC-ESI-SRM-MS/MS revealed the structure of the metabolite, (±)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. Subsequent synthesis of the individual enantiomers and co-elution in HPLC-ESI-SRM-MS/MS using a chiral column showed that the metabolite was R-(+)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. To identify the cytochrome P450 enzyme(s) responsible for the formation of the major metabolite, MCC950 was incubated with a panel of cytochrome P450 enzymes. The results indicated that CYP1A2, CYP2A6, CYP2B6, CYP2C9, CYP2C18, CYP2C19, CYP2J2 and CYP3A4 are most likely responsible. The biological activity of the major metabolite and the other synthesized regioisomers was also investigated by screening for NLRP3 inflammasome inhibitory activity and cytotoxicity. The major metabolite had 170-fold lower inhibitory activity (IC50 = 1238 nM) than MCC950 (IC50 = 7.5 nM).
Interestingly, one regioisomer showed nanomolar inhibitory activity (IC50 = 232 nM). However, no evidence of cytotoxicity was observed with any of the synthesized compounds when tested in human embryonic kidney 293 (HEK293) cells and human liver hepatocellular carcinoma G2 (HepG2) cells. These key findings give an insight into the SAR of the hexahydroindacene moiety of MCC950 and reveal a metabolic soft spot that could be blocked by chemical modification.

Keywords: cytochrome P450, inflammasome, MCC950, metabolite, microsome, NLRP3

Procedia PDF Downloads 252
14101 System-level Factors, Presidential Coattails and Mass Preferences: Dynamics of Party Nationalization in Contemporary Brazil (1990-2014)

Authors: Kazuma Mizukoshi

Abstract:

Are electoral politics in contemporary Brazil still local in organization and focus? The importance of this question lies in its paradoxical trajectories. First, often coupled with institutional and sociological ‘barriers’ (e.g. the selection and election of candidates loyal to the local party leadership, the predominance of territorialized electoral campaigns, and the resilience of political clientelism), the regionalization of electoral politics has been a viable and practical solution, especially for pragmatic politicians, in some Latin American countries. On the other hand, some leftist parties that once served as minor opposition forces at the time of the foundational or initial elections have certainly expanded their vote shares; some were eventually capable of holding the most (if not a majority of) legislative seats since the 1990s. Though not yet rigorously demonstrated, theoretically implicit in the rise of leftist parties in legislative elections is the gradual (if not complete) nationalization of electoral support, meaning the growing equality of a party's vote share across electoral districts and its change over time. This study develops four hypotheses to explain the dynamics of party nationalization in contemporary Brazil, based on district magnitude, the ethnic and class fractionalization of each district, voting intentions in federal and state executive elections, and the left-right stances of electorates. The study tests these hypotheses using the Brazilian Electoral Study (2002-2014).
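The "growing equality of a party's vote share across electoral districts" is commonly quantified with the Jones-Mainwaring party nationalization score, one minus the Gini coefficient of district vote shares. The sketch below uses hypothetical shares; the abstract does not state which measure the study employs.

```python
import numpy as np

def party_nationalization_score(shares):
    """Jones-Mainwaring party nationalization score: 1 minus the Gini
    coefficient of a party's vote share across districts. A score near
    1 means evenly nationalized support; near 0 means support
    concentrated in a few districts."""
    s = np.sort(np.asarray(shares, dtype=float))
    n = s.size
    gini = (2 * np.arange(1, n + 1) - n - 1) @ s / (n * s.sum())
    return 1.0 - gini

# Hypothetical vote shares across five districts.
even = party_nationalization_score([0.30, 0.31, 0.29, 0.30, 0.30])
regional = party_nationalization_score([0.70, 0.05, 0.05, 0.05, 0.05])
```

Computing this score per party per election year would give the time series of nationalization whose dynamics the four hypotheses are meant to explain.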

Keywords: party nationalization, presidential coattails, Left, Brazil

Procedia PDF Downloads 138