Search results for: transmission rate
Parsonage Turner Syndrome (PTS), Case Report
Authors: A. M. Bumbea, A. Musetescu, P. Ciurea, A. Bighea
Abstract:
Objectives: The authors present a case of Parsonage Turner syndrome, a rare disease characterized by onset in an apparently healthy person with shoulder and/or arm pain, sensory deficit, and motor deficit. The causes are not established; it may be triggered by vaccination, surgery, immunologic disease, trauma, etc. Methods: The authors present the case of a 32-year-old woman (in 2006) with no medical history, presenting with arm pain and no other symptoms. The onset was sudden, with pain quantified as 10 on a 0-to-10 scale and no response to classical analgesics and corticoids. The only drugs that reduced the intensity of the pain were oxycodone hydrochloride 60 mg daily and pregabalin 150 mg daily. After two weeks the intensity of pain was reduced to 5, and the patient started a rehabilitation program. After 6 weeks the patient developed sensory and motor deficits. Electromyography of the upper limb showed incomplete denervation with reduced nerve conduction velocity. The patient received neurotrophic drugs and painkillers for a long period, together with physical and kinetic therapy. After 6 months the pain was reduced to level 2, and the patient maintained only pregabalin 150 mg for another 6 months. The subsequent evaluation showed no pain but generalized amyotrophy of the upper limb. Results: At the evaluation in 2009, the patient had developed a rheumatoid syndrome with tender and swollen joints, but no positive inflammation tests, antibodies, or rheumatoid factor. Two years later, in 2011, the patient developed an increase in antinuclear antibodies. This context confirmed the diagnosis of lupus, and the patient received the specific therapy. Conclusions: This is not a typical case of lupus onset with PTS, but the onset of PTS could mark the onset of an immune disease.
Keywords: lupus, arm pain, patient, swelling
Effect of a GABA/5-HTP Mixture on Behavioral Changes and Biomodulation in an Invertebrate Model
Authors: Kyungae Jo, Eun Young Kim, Byungsoo Shin, Kwang Soon Shin, Hyung Joo Suh
Abstract:
Gamma-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) are amino acids derived from digested nutrients or food ingredients, and they can potentially be used as a non-pharmacologic treatment for sleep disorders. We previously investigated the GABA/5-HTP mixture as the principal approach for promoting sleep and repressing activity in the nervous system of D. melanogaster. The two experiments in this study were designed to evaluate the sleep-promoting effect of the GABA/5-HTP mixture and to clarify the possible ratio of sleep-promoting action in the Drosophila invertebrate model system. Behavioral assays were applied to investigate the distance traveled, velocity, movement, mobility, turn angle, angular velocity, and meander of caffeine-treated flies given the two amino acids individually or the GABA/5-HTP mixture. In addition, differentially expressed gene (DEG) analyses from next-generation sequencing (NGS) were applied to investigate the signaling pathways and functional interaction networks associated with GABA/5-HTP mixture administration. The GABA/5-HTP mixture resulted in significant behavioral differences between groups (p < 0.01) and significantly induced locomotor activity in the awake model (p < 0.05). As a result of the sequencing, the molecular functions of various genes were found to be related to motor activity and biological regulation. These results showed that administration of the GABA/5-HTP mixture was significantly involved in the inhibition of motor behavior. In this regard, we successfully demonstrated that the GABA/5-HTP mixture modulates locomotor activity to a greater extent than single administration of each amino acid, and that this modulation occurs via the neuronal system, the neurotransmitter release cycle, and transmission across chemical synapses.
Keywords: sleep, γ-aminobutyric acid, 5-hydroxytryptophan, Drosophila melanogaster
Effects of Supplementation of Nano-Particle Zinc Oxide and Mannan-Oligosaccharide (MOS) on Growth, Feed Utilization, Fatty Acid Profile, Intestinal Morphology, and Hematology in Nile Tilapia, Oreochromis niloticus (L.) Fry
Authors: Tewodros Abate Alemayehu, Abebe Getahun, Akewake Geremew, Dawit Solomon Demeke, John Recha, Dawit Solomon, Gebremedihin Ambaw, Fasil Dawit Moges
Abstract:
The purpose of this study was to examine the effects of supplementation of zinc oxide (ZnO) nanoparticles and mannan-oligosaccharide (MOS) on the growth performance, feed utilization, fatty acid profiles, hematology, and intestinal morphology of Chamo strain Nile tilapia Oreochromis niloticus (L.) fry reared at optimal temperature (28.62 ± 0.11 °C). Nile tilapia fry (initial weight 1.45 ± 0.01 g) were fed a basal/control diet (Diet-T1), a 6 g kg⁻¹ MOS supplemented diet (Diet-T2), a 4 mg ZnO-NPs supplemented diet (Diet-T3), a 4 mg ZnO-Bulk supplemented diet (Diet-T4), a combination of 6 g kg⁻¹ MOS and 4 mg ZnO-Bulk supplemented diet (Diet-T5), and a combination of 6 g kg⁻¹ MOS and 4 mg ZnO-NPs supplemented diet (Diet-T6). Duplicate aquariums were randomly assigned to each diet and hand-fed to apparent satiation three times daily (08:00, 12:00, and 16:00) for 12 weeks. Fish fed the MOS, ZnO-NPs, and combined MOS and ZnO-Bulk supplemented diets had higher weight gain, daily growth rate (DGR), and specific growth rate (SGR) than fish fed the basal diet and the other feeding groups, although the effect was not significant. According to the GC analysis, Nile tilapia supplemented with 6 g kg⁻¹ MOS, 4 mg ZnO-NPs, or a combination of ZnO-NPs and MOS showed the highest contents of EPA and DHA and higher PUFA/SFA ratios than the other feeding groups. Mean villi length in the proximal and middle portions of the Nile tilapia intestine was affected significantly (p<0.05) by diet. Fish fed Diet-T2 and Diet-T3 had significantly longer villi in the proximal and middle portions of the intestine compared to the other feeding groups. The inclusion of additives significantly increased goblet cell numbers in the proximal, middle, and distal portions of the intestine. Supplementation of additives also improved some hematological parameters compared with the control group. In conclusion, dietary supplementation of the additives MOS and ZnO-NPs could confer benefits on the growth performance, fatty acid profiles, hematology, and intestinal morphology of Chamo strain Nile tilapia.
Keywords: chamo strain nile tilapia, fatty acid profile, hematology, intestinal morphology, MOS, ZnO-Bulk, ZnO-NPs
Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion among people and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. The use of a reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as locations and projects. The methodology is informed by the design of this framework – taking on the aspects of a systematic review but compressed in time to allow for an initial set of data to verify the validity of the framework. The use of such a framework of quantification enables various practical uses, such as predicting the future of the architectural industry with regard to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serves the dual purposes of providing the framework with validation, as well as giving insights into the current situation of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process had been automated in some practical fashion at the time of writing, with the rate at which progress is made slowly increasing over the years and the majority of tasks in the design process reaching a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
Keywords: analysis, architecture, automation, design process, technology
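The recursive task-breakdown and milestone idea described in this abstract can be illustrated with a minimal sketch. The task names, automation fractions, and equal weighting of subtasks below are illustrative assumptions, not the authors' actual framework data.

```python
# A minimal sketch of the recursive task-breakdown idea: leaf tasks record their
# milestone progress, parent tasks aggregate their children recursively.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    automation: float = 0.0          # fraction of milestones reached (0..1), leaf tasks only
    subtasks: List["Task"] = field(default_factory=list)

    def score(self) -> float:
        """Leaves report their own milestone progress; parents report the
        mean of their children (recursive, equal weighting assumed)."""
        if not self.subtasks:
            return self.automation
        return sum(t.score() for t in self.subtasks) / len(self.subtasks)

design = Task("Design a building", subtasks=[
    Task("Concept design", subtasks=[
        Task("Massing studies", automation=0.6),   # e.g. generative design tools
        Task("Site analysis", automation=0.4),
    ]),
    Task("Technical design", subtasks=[
        Task("Structural sizing", automation=0.5),
        Task("Drawing production", automation=0.3),
    ]),
])

print(f"Estimated automation of the design process: {design.score():.0%}")
```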
Attributable Mortality of Nosocomial Infection: A Nested Case Control Study in Tunisia
Authors: S. Ben Fredj, H. Ghali, M. Ben Rejeb, S. Layouni, S. Khefacha, L. Dhidah, H. Said
Abstract:
Background: The Intensive Care Unit (ICU) provides continuous care and uses a high level of treatment technology. Although developed-country hospitals allocate only 5–10% of beds to critical care areas, approximately 20% of nosocomial infections (NI) occur among patients treated in ICUs, whereas in developing countries the situation is less well documented. The aim of our study is to assess mortality rates in ICUs and to determine their predictive factors. Methods: We carried out a nested case-control study in a 630-bed public tertiary care hospital in Eastern Tunisia. We included in the study all patients hospitalized for more than two days in the surgical or medical ICU during the entire period of the surveillance. Cases were patients who died before ICU discharge, whereas controls were patients who survived to discharge. NIs were diagnosed according to the definitions of the ‘Comité Technique des Infections Nosocomiales et les Infections Liées aux Soins’ (CTINLIS, France). Data collection was based on the protocol of Rea-RAISIN 2009 of the National Institute for Health Watch (InVS, France). Results: Overall, 301 patients were enrolled from the medical and surgical ICUs. The mean age was 44.8 ± 21.3 years. The crude ICU mortality rate was 20.6% (62/301). It was 35.8% for patients who acquired at least one NI during their stay in the ICU and 16.2% for those without any NI, yielding an overall crude excess mortality rate of 19.6% (OR = 2.9, 95% CI, 1.6 to 5.3). The population-attributable fraction due to ICU-NI in patients who died before ICU discharge was 23.46% (95% CI, 13.43%–29.04%). Overall, 62 case patients were compared to 239 control patients for the final analysis. Case patients and control patients differed by age (p=0.003), simplified acute physiology score II (p < 10⁻³), NI (p < 10⁻³), nosocomial pneumonia (p=0.008), infection upon admission (p=0.002), immunosuppression (p=0.006), days of intubation (p < 10⁻³), tracheostomy (p=0.004), days with urinary catheterization (p < 10⁻³), days with CVC (p=0.03), and length of stay in the ICU (p=0.003). Multivariate analysis identified 3 independent factors: age older than 65 years (OR, 5.78 [95% CI, 2.03-16.05], p=0.001), duration of intubation 1-10 days (OR, 6.82 [95% CI, 1.90-24.45], p=0.003), duration of intubation > 10 days (OR, 11.11 [95% CI, 2.85-43.28], p=0.001), duration of CVC 1-7 days (OR, 6.85 [95% CI, 1.71-27.45], p=0.007), and duration of CVC > 7 days (OR, 5.55 [95% CI, 1.70-18.04], p=0.004). Conclusion: While surveillance provides important baseline data, successful trials with more active intervention protocols adopting a multimodal approach for the prevention of nosocomial infection prompted us to consider the feasibility of a similar trial in our context. Therefore, the implementation of an efficient infection control strategy is a crucial step to improve the quality of care.
Keywords: intensive care unit, mortality, nosocomial infection, risk factors
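As an illustration of the two headline quantities reported above (the odds ratio for ICU mortality with vs. without nosocomial infection, and the population-attributable fraction), a minimal sketch follows. The 2x2 counts are hypothetical placeholders chosen only to be of the same order as the reported results; they are not the study data.

```python
# Odds ratio (OR) and population-attributable fraction (PAF) from a 2x2 table.
a, b = 24, 38   # deaths: with NI, without NI   (hypothetical counts)
c, d = 43, 196  # survivors: with NI, without NI (hypothetical counts)

odds_ratio = (a * d) / (b * c)

# PAF estimated from case-control data: proportion of cases exposed * (OR - 1) / OR
p_cases_exposed = a / (a + b)
paf = p_cases_exposed * (odds_ratio - 1) / odds_ratio

print(f"OR  = {odds_ratio:.2f}")
print(f"PAF = {paf:.1%}")
```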
Numerical Study of a 6,080 HP Open Drip Proof (ODP) Motor
Authors: Feng-Hisang Lai
Abstract:
CFD (Computational Fluid Dynamics) is conducted to numerically study the flow and heat transfer features of a two-pole, 6,080 HP, 60 Hz, 3,150 V open drip-proof (ODP) motor. The stator and rotor cores in this high-voltage induction motor are segmented with the use of spacers for cooling purposes, which leads to difficulties in meshing when the entire system is to be simulated. The system is divided into 4 parts, meshed separately, and then combined using interfaces. The deviation between the CFD and experimental results in temperature and flow rate is less than 10%. The internal flow is further examined, and a final design is proposed to reduce the winding temperature by 10 degrees.
Keywords: CFD, open drip proof, induction motor, cooling
Effects of Using a Recurrent Adverse Drug Reaction Prevention Program on Safe Use of Medicine among Patients Receiving Services at the Accident and Emergency Department of Songkhla Hospital, Thailand
Authors: Thippharat Wongsilarat, Parichat Tuntilanon, Chonlakan Prataksitorn
Abstract:
Recurrent adverse drug reactions are harmful to patients, causing illnesses ranging from mild to fatal, and affect not only patients but also their relatives and organizations. The objective was to compare the safe use of medicine among patients before and after using the recurrent adverse drug reaction prevention program. This was quasi-experimental research with a target population of 598 patients with a drug allergy history. Data were collected through an observation form tested for its validity by three experts (IOC = 0.87) and analyzed with descriptive statistics (percentage). The research was conducted jointly with a multidisciplinary team to analyze and determine the weak points and strong points in the recurrent adverse drug reaction prevention system during the past three years, in which 546, 329, and 498 incidents, respectively, were found. Of these, 379, 279, and 302 incidents, or 69.4, 84.80, and 60.64 percent of the patients with a drug allergy history, respectively, were found to have been caused by an incomplete warning system. In addition, differences in practice in caring for patients with a drug allergy history were found that did not cover all the steps of the patient care process, especially a lack of repeated checking and a lack of communication between the multidisciplinary team members. Therefore, the recurrent adverse drug reaction prevention program was developed with complete warning points in the information technology system, a repeated checking step, and communication among the related multidisciplinary team members, starting from the hospital identity card room, patient history recording officers, nurses, physicians who prescribe the drugs, and pharmacists. Also included in the system were surveillance, nursing, recording, and linking the data to referring units. There was also training concerning adverse drug reactions by pharmacists, monthly meetings to explain the process to practice personnel, creating a safety culture, random checking of practice, motivational encouragement, supervising, controlling, following up, and evaluating the practice. The rate of prescribing drugs to which patients were allergic was 0.08 per 1,000 prescriptions, and the incidence rate of recurrent drug reactions was 0 per 1,000 prescriptions. Surveillance of recurrent adverse drug reactions covering all service providing points can ensure the safe use of medicine for patients.
Keywords: recurrent drug, adverse reaction, safety, use of medicine
In Silico Model of Transamination Reaction Mechanism
Authors: Sang-Woo Han, Jong-Shik Shin
Abstract:
ω-Transaminase (ω-TA) is broadly used for synthesizing chiral amines with high enantiopurity. However, the reaction mechanism of ω-TA has not been well studied, contrary to α-transaminases (α-TA) such as AspTA. Here, we propose an in silico model of the reaction mechanism of ω-TA. Based on the modeling results, which showed large free energy gaps between the external aldimine and the quinonoid on deamination (or between the ketimine and the quinonoid on amination), withdrawal of the Cα-H appeared to be the critical step that determines the reaction rate in both the amination and deamination reactions, which is consistent with previous research. Hyperconjugation was also observed in both the external aldimine and the ketimine, which weakens the Cα-H bond and facilitates Cα-H abstraction.
Keywords: computational modeling, reaction intermediates, ω-transaminase, in silico model
Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups. The first group comprised patients treated with the AtriCure device (N=63), and the second group patients treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%), and long-standing persistent AF (17.5%); in group 2 the proportions were 13.3%, 13.3%, and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m² in group 1, and 56% and 40.5 ml/m² in group 2. In addition, group 1 consisted of 39.7% patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2, respectively. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, the late mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% and 16.7% in the 1st and 2nd groups, respectively. The mean follow-up period in the 1st group was 50.4 (31.8; 64.8) months and in the 2nd group 30.5 (14.1; 37.5) months (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, among whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the late mortality rate in the 1st group was 4.8%, while in the 2nd group there were no fatal events. The prevalence of cerebrovascular events was higher in the 1st group than in the 2nd (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the 2nd group in the study, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period.
Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
Efficient Use of Energy through Incorporation of a Gas Turbine in a Methanol Plant
Authors: M. Azadi, N. Tahouni, M. H. Panjeshahi
Abstract:
A techno-economic evaluation of the efficient use of energy in a large-scale industrial methanol plant is carried out. The assessment is based on the integration of a gas turbine with an existing methanol plant, in which the outlet gas products of the exothermic reactor are expanded for power generation. The methanol production rate is kept constant after the addition of the power generation system to the existing methanol plant. Having incorporated a gas turbine into the existing plant, the economic results showed a total investment of MUSD 16.9 and an energy saving of MUSD 3.6/yr, with a payback period of approximately 4.7 years.
Keywords: energy saving, methanol, gas turbine, power generation
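A minimal sketch of the simple (undiscounted) payback arithmetic behind the figures quoted above:

```python
# Simple payback check for the reported figures (no discounting applied).
investment_musd = 16.9      # total investment, MUSD
annual_saving_musd = 3.6    # energy saving, MUSD per year

payback_years = investment_musd / annual_saving_musd
print(f"Simple payback period: {payback_years:.1f} years")   # ~4.7 years
```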
Forecasting Regional Data Using Spatial VARs
Authors: Taisiia Gorshkova
Abstract:
Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects that are part of the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions, was developed. The main difference between standard and spatial vector autoregressions is that in the spatial VAR (SpVAR), the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the spatial VAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and the regional CPI are used as endogenous variables. The lags of GRP, CPI, and the unemployment rate were used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a “naïve” model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment of time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest a value of 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values in the previous time period t-1. According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their previous values, and inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is not affected by either the inflation lag or the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, GRP lags in neighboring regions also become significant at the 5% significance level. For both the spatial and the “naïve” VARs, the RMSE was calculated. The minimum RMSE is obtained via the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
Keywords: forecasting, regional data, spatial econometrics, vector autoregression
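A minimal sketch of the SpVAR specification described above is given below: each region's equation includes its own lags and the spatially lagged values of its neighbors, built from a row-standardized binary adjacency matrix. The random data, the small region count, and the pooled-coefficient simplification are illustrative assumptions, not the Russian regional data or the authors' exact estimator.

```python
# SpVAR-style regression: own lags plus a spatial lag built with matrix W.
import numpy as np

rng = np.random.default_rng(0)
T, N = 18, 5                         # years x regions (placeholder sizes)
grp = rng.normal(size=(T, N))        # e.g. GRP growth by region (placeholder)
cpi = rng.normal(size=(T, N))        # e.g. regional CPI (placeholder)

W = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], float)
W /= W.sum(axis=1, keepdims=True)    # row-standardize: spatial lag = neighbor average

# Equation for GRP in region i at time t:
#   grp[t, i] ~ const + grp[t-1, i] + cpi[t-1, i] + (W @ grp[t-1])[i]
y, X = [], []
for t in range(1, T):
    sp_lag = W @ grp[t - 1]          # spatially lagged GRP of neighbors at t-1
    for i in range(N):
        y.append(grp[t, i])
        X.append([1.0, grp[t - 1, i], cpi[t - 1, i], sp_lag[i]])

X, y = np.array(X), np.array(y)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = np.sqrt(np.mean((X @ beta - y) ** 2))
print("coefficients:", np.round(beta, 3), " in-sample RMSE:", round(rmse, 3))
```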
Microwave-Assisted 3D Porous Graphene for Its Multi-Functionalities
Authors: Jung-Hwan Oh, Rajesh Kumar, Il-Kwon Oh
Abstract:
Porous graphene has extensive potential applications in a variety of fields such as hydrogen storage, CO oxidation, gas separation, supercapacitors, fuel cells, nanoelectronics, oil adsorption, and so on. However, the generation of carbon-atom vacancies to form precise small holes, which prevents the agglomeration of graphene sheets and yields porous graphene with high surface area, has not been extensively studied. Recently, many research efforts have been devoted to developing physical and chemical synthetic approaches for porous graphene, but the physical methods have a very high manufacturing cost and the chemical methods require many hours to produce porous graphene. Herein, we propose a porous graphene containing holes with atomic-scale precision, obtained by embedding metal nanoparticles through microwave irradiation, for hydrogen storage and CO oxidation multi-functionalities. The proposed synthetic method is appropriate for the fast and convenient production of three-dimensional nanostructures, which have nanoholes on the graphene surface as a consequence of the microwave irradiation. The metal nanoparticles are dispersed quickly on the graphene surface and generate uniform nanoholes on the graphene nanosheets. The morphology and structure of the porous graphene were examined by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and Raman spectroscopy, respectively. The metal nanoparticle-embedded porous graphene exhibits a microporous volume of 2.586 cm³ g⁻¹ with an average pore radius of 0.75 nm. HR-TEM analysis was carried out to further characterize the microstructures. By investigating the Raman spectra, we can understand the structural changes of the graphene. The results of this work demonstrate the possibility of producing a new class of porous graphene. Furthermore, the newly acquired knowledge of diffusion into graphene can provide useful guidance for the development of the growth of nanostructures.
Keywords: CO oxidation, hydrogen storage, nanocomposites, porous graphene
CFD Simulation of Forced Convection Nanofluid Heat Transfer in the Automotive Radiator
Authors: Sina Movafagh, Younes Bakhshan
Abstract:
Heat transfer of the coolant flow through automobile radiators is of great importance for the optimization of fuel consumption. In this study, the heat transfer performance of an automobile radiator is evaluated numerically. Different concentrations of nanofluids have been investigated by the addition of Al₂O₃ nanoparticles into the water. Also, the effect of the inlet temperature of the nanofluid on the performance of the radiator is studied. Results show that with an increase in inlet temperature, the outlet temperature and the pressure drop along the radiator increase. It has also been observed that an increase in nanoparticle concentration results in an increase in the heat transfer rate within the radiator.
Keywords: heat transfer, nanofluid, car radiator, CFD simulation
Entry Inhibitors Are Less Effective at Preventing Cell-Associated HIV-2 Infection than HIV-1
Authors: A. R. Diniz, P. Borrego, I. Bártolo, N. Taveira
Abstract:
Cell-to-cell transmission plays a critical role in the spread of HIV-1 infection in vitro and in vivo. Inhibition of HIV-1 cell-associated infection by antiretroviral drugs and neutralizing antibodies (NAbs) is more difficult compared to cell-free infection. Limited data exist on cell-associated infection by HIV-2 and its inhibition. In this work, we determined the ability of entry inhibitors to inhibit HIV-1 and HIV-2 cell-to-cell fusion as a proxy for cell-associated infection. We developed a method in which HeLa-CD4 cells are first transfected with a Tat-expressing plasmid (pcDNA3.1+/Tat101) and infected with recombinant vaccinia viruses expressing either the HIV-1 (vPE16: from isolate HTLV-IIIB, clone BH8, X4 tropism) or HIV-2 (vSC50: from HIV-2SBL/ISY, R5 and X4 tropism) envelope glycoproteins (M.O.I. = 1 PFU/cell). These cells are added to TZM-bl cells. When cell-to-cell fusion (syncytia) occurs, the Tat protein diffuses to the TZM-bl cells, activating the expression of a reporter gene (luciferase). We tested several entry inhibitors, including the fusion inhibitors T1249, T20 and P3, the CCR5 antagonists MVC and TAK-779, the CXCR4 antagonist AMD3100, and several HIV-2 neutralizing antibodies (NAbs). All compounds inhibited HIV-1 and HIV-2 cell fusion, albeit to different levels. The maximum percentage of HIV-2 inhibition (MPI) was higher for fusion inhibitors (T1249, 99.8%; P3, 95%; T20, 90%) followed by co-receptor antagonists (MVC, 63%; TAK-779, 55%; AMD3100, 45%). NAbs from HIV-2 infected patients did not prevent cell fusion up to the tested concentration of 4 μg/ml. As for HIV-1, the MPI reached 100% with TAK-779 and T1249. For the other antivirals, the MPIs were: P3, 79%; T20, 75%; AMD3100, 61%; MVC, 65%. These results are consistent with published data. Maraviroc had the lowest IC50 for both HIV-2 and HIV-1 (IC50 HIV-2 = 0.06 μM; HIV-1 = 0.0076 μM). The highest IC50 values were observed with T20 for HIV-2 (3.86 μM) and with TAK-779 for HIV-1 (12.64 μM). Overall, our results show that entry inhibitors in clinical use are less effective at preventing Env-mediated cell-to-cell fusion in HIV-2 than in HIV-1, which suggests that cell-associated HIV-2 infection will be more difficult to inhibit compared to HIV-1. The method described here will be useful to screen for new HIV entry inhibitors.
Keywords: cell-to-cell fusion, entry inhibitors, HIV, NAbs, vaccinia virus
Comparative Electrochemical Studies of Enzyme-Based and Enzyme-Less Graphene Oxide-Based Nanocomposite as Glucose Biosensor
Authors: Chetna Tyagi, G. B. V. S. Lakshmi, Ambuj Tripathi, D. K. Avasthi
Abstract:
Graphene oxide provides a good host matrix for preparing nanocomposites due to the different functional groups attached to its edges and planes. Being biocompatible, it is used in therapeutic applications. As enzyme-based biosensors require complicated enzyme purification procedures, high fabrication costs, and special storage conditions, we need enzyme-less biosensors that work even in harsh environments such as high temperature, varying pH, etc. In this work, we have prepared both enzyme-based and enzyme-less graphene oxide-based biosensors for glucose detection, using glucose oxidase as the enzyme and gold nanoparticles, respectively. These samples were characterized using X-ray diffraction, UV-visible spectroscopy, scanning electron microscopy, and transmission electron microscopy to confirm the successful synthesis of the working electrodes. Electrochemical measurements were performed for both working electrodes using a 3-electrode electrochemical cell. Cyclic voltammetry curves showed homogeneous electron transfer on the electrodes in the scan range between -0.2 V and 0.6 V. The sensing measurements were performed using differential pulse voltammetry for glucose concentrations varying from 0.01 mM to 20 mM, and sensing toward glucose was improved in the presence of gold nanoparticles. Gold nanoparticles in the graphene oxide nanocomposite played an important role in sensing glucose in the absence of the enzyme, glucose oxidase, as evident from these measurements. The selectivity was tested by measuring the current response of the working electrode toward glucose in the presence of other common interfering agents such as cholesterol, ascorbic acid, citric acid, and urea. The enzyme-less working electrode also showed storage stability for up to 15 weeks, making it a suitable glucose biosensor.
Keywords: electrochemical, enzyme-less, glucose, gold nanoparticles, graphene oxide, nanocomposite
Micro-Milling Process Development of Advanced Materials
Authors: M. A. Hafiz, P. T. Matevenga
Abstract:
Micro-level machining of metals is a developing field which has been shown to be a promising approach to produce features on parts in the range of a few to a few hundred microns with acceptable machining quality. It is known that the mechanics (i.e., the material removal mechanism) of micro-machining and conventional machining differ significantly due to the scaling effects associated with tool geometry, tool material, and workpiece material characteristics. Shape memory alloys (SMAs) are metal alloys which display two exceptional properties, pseudoelasticity and the shape memory effect (SME). Nickel-titanium (NiTi) alloys are one such unique class of metal alloys. NiTi alloys are known to be difficult-to-cut materials, specifically with conventional machining techniques, due to their specific properties. Their high ductility, high degree of strain hardening, and unusual stress–strain behaviour are the main properties responsible for their poor machinability in terms of tool wear and workpiece quality. The motivation of this research work was to address the challenges and issues of micro-machining combined with those of machining NiTi alloy, which can affect the desired performance level of the machining outputs. To explore the significance of a range of cutting conditions on surface roughness and tool wear, machining tests were conducted on NiTi. The influence of different cutting conditions and cutting tools on surface and sub-surface deformation in the workpiece was investigated. A design-of-experiments strategy (L9 array) was applied to determine the key process variables. The dominant cutting parameters were determined by analysis of variance. These findings showed that feed rate was the dominant factor for surface roughness, whereas depth of cut was the dominant factor as far as tool wear was concerned. The lowest surface roughness was achieved at a feed rate equal to the cutting edge radius, whereas the lowest flank wear was observed at the lowest depth of cut. Repeated machining trials have yet to be carried out in order to observe tool life, sub-surface deformation, and strain-induced hardening, which are also expected to be among the critical issues in micro-machining of NiTi. Machining performance using different cutting fluids and strategies has yet to be studied.
Keywords: nickel titanium, micro-machining, surface roughness, machinability
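A minimal sketch of the L9 design-of-experiments analysis mentioned above, using a main-effect (mean response) comparison to see which factor dominates surface roughness. The factor set, levels, and roughness values are hypothetical placeholders, not the measured data.

```python
# Main-effect analysis for a three-factor, three-level L9 orthogonal array.
import numpy as np

# Standard L9 orthogonal array (columns: feed rate, depth of cut, cutting speed;
# entries are level indices 0-2); factor choice here is an assumption.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

ra = np.array([0.21, 0.25, 0.30, 0.34, 0.38, 0.35, 0.52, 0.55, 0.60])  # hypothetical Ra (um)

factors = ["feed rate", "depth of cut", "cutting speed"]
for j, name in enumerate(factors):
    level_means = [ra[L9[:, j] == lv].mean() for lv in range(3)]
    delta = max(level_means) - min(level_means)   # larger delta => more dominant factor
    print(f"{name:14s} level means = {np.round(level_means, 3)}  delta = {delta:.3f}")
```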
GynApp: A Mobile Application for the Organization and Control of Gynecological Studies
Authors: Betzabet García-Mendoza, Rocío Abascal-Mena
Abstract:
Breast and cervical cancer are among the leading causes of death of women in Mexico. The mortality rate for these diseases is alarming; even though there have been many campaigns to make people aware of the importance of undergoing gynecological studies for timely prevention and detection, these have not been enough. This paper presents a mobile application for organizing and controlling gynecological studies in order to help and encourage women to take care of their bodies and health. The process of analyzing and designing the mobile application is presented, along with all the steps carried out by following a user-centered design methodology.
Keywords: breast cancer, cervical cancer, gynecological mobile application, paper prototyping, storyboard, women health
Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data
Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez
Abstract:
Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumors in women). It is the tumor with the highest incidence in the CLM region, accounting for 26.1% of all tumors, excluding non-melanoma skin cancer (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good source of information to estimate cancer incidence; however, the data are usually available with a lag, which makes their use difficult for health managers. By contrast, mortality and survival statistics have less delay. In order to serve resource planning and respond to this problem, a method is presented to estimate incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM in the period 1991-2013, and to compare the data obtained from the model with current incidence data. Sources: Annual number of women by single year of age (National Statistics Institute); annual number of deaths from all causes and from breast cancer (Mortality Registry CLM); breast cancer relative survival probability (EUROCARE, Spanish registries data). Methods: A Weibull parametric survival model is obtained from the EUROCARE data. From the survival model, the population, and the mortality data, a Mortality and Incidence Analysis MODel (MIAMOD) regression model is obtained to estimate the incidence of cancer by age (1991-2013). Results: The resulting model is I(x,t) = Logit[const + age1·x + age2·x² + coh1·(t − x) + coh2·(t − x)²], where I(x,t) is the incidence at age x in period (year) t, and the parameter estimates are: const (constant term in the model) = -7.03, age1 = 3.31, age2 = -1.10, coh1 = 0.61, and coh2 = -0.12. It is estimated that 662 cases of breast cancer were diagnosed in CLM in 1991 (81.51 per 100,000 women). An estimated 1,152 cases (112.41 per 100,000 women) were diagnosed in 2013, representing an increase of 40.7% in the gross incidence rate (1.9% per year). The average annual increases in incidence by age were: 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years), and 1.24% (65-74 years). Cancer registries in Spain that send data to the IARC reported for 2003-2007 an average annual incidence rate of 98.6 cases per 100,000 women; our model obtains an incidence of 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer in the period 1991-2013 is observed. The increase was seen in all age groups considered, although it seems more pronounced in young women (25-44 years). With this method a good estimation of the incidence can be obtained.
Keywords: breast cancer, incidence, cancer registries, castilla-la mancha
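A sketch of how a fitted MIAMOD-type model of the form reported above can be evaluated. Here "Logit[...]" is read as the inverse-logit (expit) link, and the age and cohort covariates are assumed to be rescaled, since the abstract does not give their scaling; the printed number is therefore purely illustrative, not a reproduction of the study's rates.

```python
# Evaluate the reported incidence model at one (scaled) age/cohort point.
import math

const, age1, age2, coh1, coh2 = -7.03, 3.31, -1.10, 0.61, -0.12

def incidence(x_scaled: float, cohort_scaled: float) -> float:
    """Inverse-logit of the linear predictor in the (assumed rescaled) age and
    cohort terms; returns a probability-scale incidence."""
    eta = (const + age1 * x_scaled + age2 * x_scaled**2
           + coh1 * cohort_scaled + coh2 * cohort_scaled**2)
    return 1.0 / (1.0 + math.exp(-eta))

# Example: rate per 100,000 women at one illustrative age/cohort point
print(round(incidence(1.0, 0.5) * 100_000, 1))
```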
The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z
Authors: Catarina Cruz, Ana Breda
Abstract:
Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error-correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being considered those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error-correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption of the existence of such a code M. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinality of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings
Discrimination during a Resume Audit: The Impact of Job Context in Hiring
Authors: Alexandra Roy
Abstract:
Building on the literature on cognitive matching and social categorization and using the correspondence testing method, we test the interaction effect of person characteristics (gender combined with physical attractiveness) and job context (client contact, industry status, coworker contact). As expected, while the findings show a strong impact of gender combined with beauty on hiring chances, job context characteristics also have a significant overall effect on this hiring outcome. Moreover, the rate of positive responses varies according to some of the recruiter’s characteristics. Results are robust to various sensitivity checks. Implications of the results, limitations of the study, and directions for future research are discussed.
Keywords: correspondence testing, discrimination, hiring, physical attractiveness
Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study
Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu
Abstract:
Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. The LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because the physiological indicators differ between individuals, the parameters in the LPM should be personalized in order to obtain convincing results that reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in the LPM of the blood circulatory system, which is of great significance for the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system that is applicable to most persons was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as personalized objectives of the individual LPM. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted with respect to the collected data and waveforms to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, and the objective function during optimization was the root mean square error between the collected waveforms and data and the simulated waveforms and data. Each parameter in the LPM was optimized 500 times. Results: In this study, the sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between the collected and simulated data. The average relative root mean square errors of all optimization objectives of the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight error demonstrates the good performance of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of the LPM for the blood circulatory system. The LPM with individual parameters can output the individual physiological indicators after optimization, which are applicable for the numerical simulation of patient-specific hemodynamics.
Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm
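A minimal sketch of the optimization loop described above: simulated annealing adjusts the sensitive parameters so that the root mean square error (RMSE) between simulated and measured waveforms is minimized. A toy two-parameter exponential waveform stands in for the full lumped parameter model, and all values below are placeholders.

```python
# Simulated annealing on a toy waveform-fitting problem (RMSE objective).
import math
import random

random.seed(1)
t = [i * 0.02 for i in range(50)]
true_params = (2.0, 1.5)
target = [true_params[0] * math.exp(-true_params[1] * x) for x in t]  # "measured" waveform

def simulate(params):
    a, b = params
    return [a * math.exp(-b * x) for x in t]

def rmse(params):
    sim = simulate(params)
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(sim, target)) / len(t))

current = [1.0, 0.5]                      # initial guess
current_err = rmse(current)
best, best_err = list(current), current_err
temp = 1.0
for step in range(500):                   # fixed number of iterations (500 here)
    cand = [p + random.gauss(0, 0.1) for p in current]
    err = rmse(cand)
    # always accept improvements; accept worse candidates with a temperature-dependent probability
    if err < current_err or random.random() < math.exp(-(err - current_err) / temp):
        current, current_err = cand, err
        if err < best_err:
            best, best_err = list(cand), err
    temp *= 0.99                          # cooling schedule

print("fitted parameters:", [round(p, 3) for p in best], " RMSE:", round(best_err, 4))
```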
The Evaluation of the Impact of Tobacco Heating System and Conventional Cigarette Smoking on Self-Reported Oral Symptoms (Dry Mouth, Halitosis, Burning Sensation, Taste Changes) and Salivary Flow Rate: A Cross-Sectional Study
Authors: Ella Sever, Irena Glažar, Ema Saltović
Abstract:
Conventional cigarette smoking is associated with an increased risk of oral diseases and oral symptoms such as dry mouth, bad breath, burning sensation, and changes in taste sensation. The harmful effects of conventional cigarette smoking on oral health have been extensively studied previously. However, there is a severe lack of studies investigating the effects of the Tobacco Heating System (THS) on oral structures. As a preventive measure, a new alternative, the THS, has been developed; according to the manufacturer, it has fewer potentially harmful and harmful constituents and, consequently, a lower risk of developing tobacco-related diseases. The aim is to analyze the effects of conventional cigarettes and THS on salivary flow rate (SFR) and self-reported oral symptoms. The stratified cross-sectional study included 90 subjects divided into three groups: THS smokers, conventional cigarette smokers, and nonsmokers. The subjects completed questionnaires on smoking habits and symptoms (dry mouth, bad breath, burning sensation, and changes in taste sensation). An SFR test was performed on each subject. Lifetime exposure to smoking was calculated using the Brinkman index (BI). Participants were 20-55 years old (median 31), and 66.67% were female. The study included three groups of equal size (n = 20), and no statistically significant differences were found between the groups in terms of age (p = 0.632), sex (p = 1.0), and lifetime exposure to smoking (the BI) (p = 0.129). Participants in the smoking groups had an average of 10 (2-30) years of smoking experience in the conventional cigarette group and 6 (1-20) years of smoking experience in the THS group. Daily consumption of cigarettes/heets was the same for both smoker groups (12 (2-20) cigarettes/heets per day). Self-reported symptoms were present in 40% of participants in the smoker groups. There were significant differences in the presence of halitosis (p = 0.025) and taste sensation (p = 0.013). There were no statistical differences in the presence of dry mouth (p = 0.416) and burning sensation (p = 0.7). The SFR differed between groups (p < 0.001) and was significantly lower in the THS and conventional cigarette smoker groups than in the nonsmoker group. There were no significant differences between THS smokers and conventional cigarette smokers. The results of the study show that THS products have an effect similar to conventional cigarettes on oral cavity structures, especially in terms of SFR, self-reported halitosis, and changes in taste.
Keywords: oral health, tobacco products, halitosis, cigarette smoking
The Culex pipiens Niche: Assessment with Climatic and Physiographic Variables via a Geographic Information System
Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, João Casaca
Abstract:
Using a geographic information system (GIS), the relations between a georeferenced data set of Culex pipiens s.l. mosquitoes collected in mainland Portugal over seven years (2006-2012) and meteorological and physiographic parameters such as air relative humidity, air temperature (minimum, maximum, and mean daily temperatures), daily total rainfall, altitude, land use/land cover, and proximity to water bodies are evaluated. The focus is on the female mosquitoes; the characterization of their habitat is the key to planning targeted, non-aggressive prophylactic countermeasures that avoid environmental degradation. The GIS allows for the spatial determination of the zones where mean mosquito captures have been above average; using the meteorological values at these coordinates, the limits of each parameter are identified and computed. The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the thresholds obtained for each parameter. The intersection of the maps obtained for each month shows the evolution of the area favorable to the species through the mosquito season, which is from May to October at these latitudes. In parallel, mean and above-average captures were related to the physiographic parameters. Three levels of risk could be identified for each parameter, using above-average captures as an index. The results were applied to the monthly meteorological suitability maps. The Culex pipiens critical niche is delimited, reflecting the critical areas and the level of risk for transmission of the pathogens for which they are competent vectors (West Nile virus, iridoviruses, reoviruses, and parvoviruses).
Keywords: Culex pipiens, ecological niche, risk assessment, risk management
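A minimal sketch of the raster thresholding and map-intersection step described above: each monthly raster of a meteorological parameter is segmented with the limits derived from above-average capture sites, and the favorable area is the cell-wise intersection of the resulting masks. The small arrays and thresholds are placeholders, not the Portuguese data.

```python
# Threshold two parameter rasters and intersect the masks cell-wise.
import numpy as np

rng = np.random.default_rng(3)
temp_mean = rng.uniform(10, 30, size=(4, 5))      # interpolated mean temperature raster (placeholder)
rel_hum   = rng.uniform(30, 90, size=(4, 5))      # interpolated relative humidity raster (placeholder)

# limits identified at above-average capture coordinates (hypothetical values)
temp_ok = (temp_mean >= 16) & (temp_mean <= 28)
hum_ok  = (rel_hum  >= 50) & (rel_hum  <= 85)

favourable = temp_ok & hum_ok                     # cells meeting every parameter limit
print("favourable cells this month:")
print(favourable.astype(int))
```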
Horizontal Cooperative Game Theory in Hotel Revenue Management
Authors: Ririh Rahma Ratinghayu, Jayu Pramudya, Nur Aini Masruroh, Shi-Woei Lin
Abstract:
This research studies pricing strategy in a cooperative setting of a hotel duopoly selling a perishable product under a fixed capacity constraint, from the perspective of managers. In hotel revenue management, the competitor’s average room rate and occupancy rate should be taken into consideration by the manager when determining a pricing strategy to generate optimum revenue. This information is not provided by business intelligence nor available on the competitor’s website. Thus, Information Sharing (IS) among players might result in improved performance of the pricing strategy. IS is widely adopted in the logistics industry, but IS within the hospitality industry has not been well studied. This research treats IS as one of the cooperative game schemes, besides the Mutual Price Setting (MPS) scheme. In the off-peak season, hotel managers arrange pricing strategies that offer promotion packages and various kinds of discounts of up to 60% of the full price to attract customers. A competitor selling a homogeneous product will react in the same way, which then triggers a price war. A price war, which generates lower revenue, may be avoided by creating collaboration in pricing strategy to optimize the payoff for both players. In the MPS cooperative game, players collaborate to set a room rate applied by both players. A cooperative game may avoid the unfavorable payoffs caused by a price war. Research on horizontal cooperative games in logistics shows better performance and payoffs for the players; however, horizontal cooperative games in hotel revenue management have not been demonstrated. This paper aims to develop hotel revenue management models under duopoly cooperative schemes (IS and MPS), which are compared to models under a non-cooperative scheme as well. Each scheme has five models: a Capacity Allocation Model, a Demand Model, a Revenue Model, an Optimal Price Model, and an Equilibrium Price Model. The Capacity Allocation Model and Demand Model employ the hotel’s own and the competitor’s full and discounted prices as predictors under a non-linear relation. The optimal price is obtained by assuming a revenue maximization motive. The equilibrium price is observed by letting the hotel’s and the competitor’s optimal prices interact through the reaction equations, and the equilibrium is analyzed using a game theory approach. The same sequence applies to all three schemes; the MPS scheme, in contrast, aims to optimize the total payoff of both players. The case study in which the theoretical models are applied observes two hotels offering a homogeneous product in Indonesia during one year. The Capacity Allocation, Demand, and Revenue Models are built using multiple regression and statistically tested for validation. The case study data confirm that price enters the demand model in a non-linear manner. The IS models represent the actual demand and revenue data better than the non-IS models. Furthermore, IS enables hotels to earn significantly higher revenue. Thus, duopoly hotel players in general might have reasonable incentives to share information horizontally. During the off-peak season, the MPS models are able to predict the optimal equal price for both hotels. However, a Nash equilibrium may not always exist, depending on the actual payoffs of adhering to or betraying the mutual agreement. To optimize performance, a horizontal cooperative game may be chosen over a non-cooperative game. Mathematical models can be used to detect collusion among business players, and empirical testing can be used as policy input for the market regulator in preventing unethical business practices that potentially harm social welfare.
Keywords: horizontal cooperative game theory, hotel revenue management, information sharing, mutual price setting
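An illustrative sketch of the equilibrium logic described above: each hotel's revenue-maximizing price is a best response to the competitor's price, and iterating the two reaction functions converges to the Nash equilibrium. A simple linear demand with assumed coefficients is used purely for illustration; the study's demand models are non-linear and estimated from data, and the capacity constraint is omitted here.

```python
# Duopoly pricing: best-response (reaction) functions iterated to a fixed point.
def demand(own_price, rival_price, a=200.0, b=1.5, c=0.8):
    # rooms sold fall in own price and rise in the rival's price (substitute products)
    return max(a - b * own_price + c * rival_price, 0.0)

def best_response(rival_price, a=200.0, b=1.5, c=0.8):
    # maximize own_price * demand  =>  own_price = (a + c * rival_price) / (2 * b)
    return (a + c * rival_price) / (2.0 * b)

p1, p2 = 100.0, 100.0
for _ in range(100):                       # iterate reaction functions to a fixed point
    p1, p2 = best_response(p2), best_response(p1)

print(f"Nash equilibrium prices: {p1:.1f}, {p2:.1f}")
print(f"Equilibrium revenues: {p1 * demand(p1, p2):.0f}, {p2 * demand(p2, p1):.0f}")
```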
Technical and Economic Feasibility Analysis of a Solar Water Pumping System - Case Study in Iran
Abstract:
A technical analysis of using solar energy and electricity for water pumping in Khuzestan province, Iran, is presented. For this purpose, climatic conditions such as weather data, air clearness, and sunshine hours are analyzed. The nature of the groundwater in the region was examined in terms of depth, static and dynamic head, and water pumping rate. Three configurations for the solar water pumping system were studied in this work: AC solar water pumping with a storage battery, AC solar water pumping with a storage tank, and DC direct solar water pumping.
Keywords: technical and economic feasibility, solar energy, photovoltaic systems, solar water pumping system
Heat Transfer and Trajectory Models for a Cloud of Spray over a Marine Vessel
Authors: S. R. Dehghani, G. F. Naterer, Y. S. Muzychka
Abstract:
Wave-impact sea spray creates many droplets, which form a spray cloud traveling over marine objects such as marine vessels and offshore structures. In cold climates such as Arctic regions, sea spray icing, which is ice accretion on cold substrates, is strongly dependent on the wave-impact sea spray. The rate of cooling of droplets affects the icing process, which can yield dry or wet ice accretion. The trajectories of droplets determine the potential locations of ice accretion. Combining the trajectory and heat transfer models for droplets can predict the risk of ice accretion reasonably well. The majority of droplet cooling is due to droplet evaporation. In this study, a combined trajectory and heat transfer model evaluates the evolution of a spray cloud from generation to impingement. The model uses known geometry and initial information from previous case studies. The 3D model is solved numerically using a standard numerical scheme. Droplets are generated in various sizes from 7 mm to 0.07 mm, which is a suggested range for sea spray icing. The initial temperature of the droplets is taken to be the sea water temperature. Wind velocities are assumed to be the same as those of the field observations. Evaluations are conducted using several representative heading angles and wind velocities. The characteristic size-velocity dependence is used to establish a relation between the initial sizes and velocities of the droplets. Time intervals are chosen appropriately to maintain a stable and fast numerical solution. A statistical process is conducted to evaluate the probability of expected occurrences. Medium-sized droplets can reach the greatest heights, whereas very small and very large droplets are limited to lower heights. Results show that higher initial velocities create the most expanded spray cloud. Wind velocities affect the extent of the spray cloud. The rate of droplet cooling at the start of spray formation is higher than during the rest of the process because of higher relative velocities and also higher temperature differences. The amount of water delivery and the overall temperature for some sample surfaces over a marine vessel are calculated. Comparison of the results with field observations shows that the model works accurately. This model is suggested as a primary model for ice accretion on marine vessels.
Keywords: evaporation, sea spray, marine icing, numerical solution, trajectory
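A minimal sketch of the trajectory part of the combined model: a single droplet launched into a uniform wind is integrated with explicit Euler steps under gravity and quadratic air drag. The droplet size, launch velocity, wind speed, and constant drag coefficient are assumptions chosen within the ranges quoted above, and the heat transfer (evaporative cooling) part is omitted.

```python
# Single-droplet trajectory with gravity and quadratic air drag (explicit Euler).
import math

rho_air, rho_w, g = 1.25, 1000.0, 9.81       # kg/m^3, kg/m^3, m/s^2
d = 0.5e-3                                   # droplet diameter 0.5 mm (within 0.07-7 mm)
m = rho_w * math.pi * d**3 / 6.0             # droplet mass
A = math.pi * d**2 / 4.0                     # frontal area
Cd = 0.5                                     # assumed constant drag coefficient
wind_x = 15.0                                # horizontal wind speed, m/s (assumed)

x, y = 0.0, 0.0                              # position (m)
vx, vy = 5.0, 12.0                           # launch velocity (m/s), assumed
dt, t = 1e-3, 0.0

while y >= 0.0:                              # integrate until the droplet falls back down
    rvx, rvy = wind_x - vx, -vy              # air velocity relative to the droplet
    rel = math.hypot(rvx, rvy)
    drag = 0.5 * rho_air * Cd * A * rel      # drag magnitude per unit relative velocity
    ax = drag * rvx / m
    ay = drag * rvy / m - g
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt
    t += dt

print(f"flight time {t:.2f} s, horizontal travel {x:.1f} m")
```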
Evaluating the Validity of the Combined Bedside Test in Diagnosing Juvenile Myasthenia Gravis (2012-2024)
Authors: Pechpailin Kortnoi, Tanitnun Paprad
Abstract:
Background: Myasthenia gravis (MG) is an autoimmune disorder characterized by impaired neuromuscular transmission due to antibodies against nicotinic receptors, leading to muscle weakness, ptosis, and respiratory issues. The incidence of MG has risen globally, emphasizing the need for effective diagnostics. Objective: This study evaluates the validity of a combined bedside test (the ice pack test and fatigability test) for diagnosing juvenile myasthenia gravis (JMG) in pediatric patients with ptosis. Methods: This cross-sectional study, conducted from January 2012 to May 2024 at King Chulalongkorn Memorial Hospital, Thailand, included pediatric patients (1 month to 18 years) with ptosis undergoing ice pack and fatigability tests. Data included demographics, clinical findings, and test results. Diagnostic efficacy was assessed using sensitivity, specificity, accuracy, PPV, NPV, Fagan Nomogram, Kappa Statistics, and McNemar’s Chi-Square. Results: Of 43 identified patients, 32 were included, with 47% male and a mean age of 7 years. The combined bedside test had high sensitivity (92.8%) and accuracy (87.5%) but moderate specificity (50%). It significantly outperformed the ice pack test (P = 0.0005), which showed low sensitivity (42.8%) and accuracy (43.8%). The fatigability test had 82% sensitivity and 92% PPV. Confirmatory tests (AChR-Ab, MuSK-Ab, neostigmine, repetitive nerve stimulation) supported most diagnoses. Conclusions: The combined bedside test, with high sensitivity (92.8%) and accuracy (87.5%), is an effective screening tool for juvenile myasthenia gravis, outperforming the ice pack test. Integrating it into clinical practice may improve diagnosis and enable timely treatment. The fatigability test (82% sensitivity) is also useful as an adjunct screening tool.
Keywords: myasthenia gravis, the fatigability test, the ice pack test, the combined bedside test
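A small sketch of how the diagnostic metrics quoted above are computed from a 2x2 contingency table. The counts below are chosen to be consistent with the reported percentages for the combined bedside test; they are not taken directly from the paper's table.

```python
# Diagnostic metrics from a 2x2 table (counts consistent with the reported figures).
tp, fn = 26, 2     # JMG patients: combined test positive / negative (assumed counts)
fp, tn = 2, 2      # non-JMG patients: combined test positive / negative (assumed counts)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + fn + fp + tn)
ppv         = tp / (tp + fp)
npv         = tn / (tn + fn)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"accuracy {accuracy:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
```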
Reaction Kinetics of Biodiesel Production from Refined Cottonseed Oil Using Calcium Oxide
Authors: Ude N. Callistus, Amulu F. Ndidi, Onukwuli D. Okechukwu, Amulu E. Patrick
Abstract:
A power law approximation was used in this study to evaluate the reaction order of the calcium oxide (CaO)-catalyzed transesterification of refined cottonseed oil with methanol. The kinetics study was carried out at temperatures of 45, 55, and 65 °C. The kinetic parameters, a reaction order of 2.02 and a rate constant of 2.8 hr⁻¹ g⁻¹cat obtained at a temperature of 65 °C, best fitted the kinetic model. The activation energy, Ea, obtained was 127.744 kJ/mol. The results indicate that the transesterification reaction of refined cottonseed oil using a calcium oxide catalyst is approximately a second-order reaction.
Keywords: refined cottonseed oil, transesterification, CaO, heterogeneous catalysts, kinetic model
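A sketch of the Arrhenius relation behind the reported activation energy. Using the reported Ea and the rate constant at 65 °C, the pre-exponential factor is backed out and k is extrapolated to the other two study temperatures; the extrapolated values are illustrative, not the measured ones.

```python
# Arrhenius relation k = A * exp(-Ea / (R * T)) using the reported Ea and k(65 C).
import math

R = 8.314                      # J/(mol K)
Ea = 127_744.0                 # J/mol (127.744 kJ/mol, as reported)
k_65 = 2.8                     # hr^-1 g_cat^-1 at 65 C (as reported)
T_65 = 65 + 273.15

A = k_65 * math.exp(Ea / (R * T_65))          # back out the pre-exponential factor

for T_c in (45, 55, 65):
    T = T_c + 273.15
    k = A * math.exp(-Ea / (R * T))
    print(f"T = {T_c} C  ->  k ~ {k:.2f} hr^-1 g_cat^-1")
```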
Numerical Simulation of the Remaining Life of Ramshir Bridge over the Karoon River
Authors: M. Jalali Azizpour, V.Tavvaf, E. Akhlaghi, H. Mohammadi Majd, A. Shirani, S. M. Moravvej, M. Kazemi, A. R. Aboudi Asl, A. Jaderi
Abstract:
The static and corrosion behavior of a bridge used for pipelines in the south of the country has been evaluated. The bridge was constructed more than 40 years ago over the Karoon River. It is located in Khuzestan province, at a distance of 15 km east of the suburbs of Ahwaz. In order to determine the mechanical properties, experimental tools such as thickness measurements and static simulations based on the actual load were used. In addition, metallurgical studies were used to determine the corrosion rate of the pipes in the river and in the river bed. The aim of this project is to determine the remaining life of the bridge using mechanical and metallurgical studies.
Keywords: FEM, stress, corrosion, bridge
Financial Feasibility of Clean Development Mechanism (CDM) Projects in India
Authors: Renuka H. Deshmukh, Snehal Nifadkar, Anil P. Dongre
Abstract:
The research study aims to analyze the financial performance of the companies associated with CDM projects implemented in India from 2001 to 2014 by calculating net profit with and without CDM revenue. Further, the study also highlights the year-wise and sector-wise lending to CDM projects in India as well as in the state of Maharashtra. The study further aims to examine the year-wise trend of Certified Emission Reductions (CERs) issued by the CDM projects implemented in Maharashtra from 2001 to 2014. The study also analyses the responses of selected corporates with respect to the challenges in implementing CDM projects and obtaining finance from commercial banks.
Keywords: adaptation costs, internal rate of return, mitigation, vulnerability, CER
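A small illustrative sketch of the with/without-CDM-revenue comparison described above. All figures (core revenue, costs, CERs issued, CER price) are hypothetical placeholders, not data from the study.

```python
# Net profit of a company computed with and without CER (CDM) revenue.
revenue_core = 950.0        # operating revenue, in millions (hypothetical)
total_costs  = 900.0        # total costs, in millions (hypothetical)
cers_issued  = 120_000      # CERs issued during the year (hypothetical)
cer_price    = 3.0          # price received per CER, in currency units (hypothetical)

cdm_revenue = cers_issued * cer_price / 1e6          # convert to millions
profit_without_cdm = revenue_core - total_costs
profit_with_cdm = profit_without_cdm + cdm_revenue

print(f"net profit without CDM revenue: {profit_without_cdm:.2f} million")
print(f"net profit with CDM revenue:    {profit_with_cdm:.2f} million "
      f"({cdm_revenue / profit_without_cdm:+.1%})")
```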