Search results for: prediction interval
2296 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome of physiological and biochemical abnormalities induced by severe infection that carries high mortality and morbidity; the severity of the patient's condition must therefore be assessed quickly. After admission to an intensive care unit (ICU), the large volume of information collected from a patient must be synthesized into a value that represents the severity of the condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, machine learning techniques applied to the data of a population that shares a common characteristic could lead to customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop a mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (stochastic gradient boosting) variable importance methodologies were used to select the set of variables that make up the score; each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities: if the patient falls in the higher-mortality group, a one is assigned to that variable, otherwise a zero. These binary variables were used in a logistic regression (LR) model, and its coefficients were rounded to the nearest integer. The resulting integers are the point values of the score, obtained by multiplying each by its binary variable and summing.
The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated on the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS), and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; the number of events (deaths) also increases steadily from the decile with the lowest probabilities to the decile with the highest. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
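The scoring procedure described in this abstract (dichotomize each selected variable at its cut-off, fit a logistic regression on the binary indicators, round the coefficients to integer point values, and sum) can be sketched as follows. This is an illustration only, not the authors' code; the cut-offs and coefficients below are hypothetical.

```python
# Hypothetical sketch of an integer point-score built from dichotomized
# variables and rounded logistic-regression coefficients.

def dichotomize(value, cutoff):
    """1 if the value falls in the (hypothetical) higher-mortality group, else 0."""
    return 1 if value >= cutoff else 0

def score_from_coefficients(binary_vars, coefficients):
    """Round each LR coefficient to the nearest integer and sum the points."""
    points = [round(c) for c in coefficients]
    return sum(p * b for p, b in zip(points, binary_vars))

# Hypothetical patient: three dichotomized variables (e.g. age, lactate, BUN).
patient = [dichotomize(72, 65), dichotomize(1.1, 2.0), dichotomize(35, 30)]
coeffs = [1.8, 0.9, 2.2]   # hypothetical fitted LR coefficients
total = score_from_coefficients(patient, coeffs)   # 2*1 + 1*0 + 2*1 = 4
```

The integer total would then be fed into a second logistic regression as the single predictor to obtain a mortality probability, as the abstract describes.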
Procedia PDF Downloads 224
2295 Nitrification and Denitrification Kinetic Parameters of a Mature Sanitary Landfill Leachate
Authors: Tânia F. C. V. Silva, Eloísa S. S. Vieira, João Pinto da Costa, Rui A. R. Boaventura, Vitor J. P. Vilar
Abstract:
Sanitary landfill leachates are characterized as a complex mixture of diverse organic and inorganic contaminants, which are usually removed by combining different treatment processes. Due to its simplicity, reliability, and high cost-effectiveness, and given the high nitrogen content (mostly in the ammonium form) inherent in this type of effluent, the activated sludge biological process is almost always applied in leachate treatment plants (LTPs). The purpose of this work is to assess the effect of the main nitrification and denitrification variables on the biological removal of nitrogen from mature leachates. The leachate samples were collected after an aerated lagoon at an LTP near Porto, presenting high amounts of dissolved organic carbon (1.0-1.3 g DOC/L) and ammonium nitrogen (1.1-1.7 g NH4+-N/L). The experiments were carried out in a 1-L lab-scale batch reactor, equipped with a pH, temperature, and dissolved oxygen (DO) control system, in order to determine the reaction kinetic constants under constant conditions. The nitrification reaction rate was evaluated while varying the (i) operating temperature (15, 20, 25 and 30ºC), (ii) DO concentration interval (0.5-1.0, 1.0-2.0 and 2.0-4.0 mg/L) and (iii) solution pH (not controlled, 7.5-8.5 and 6.5-7.5). At the beginning of most assays, it was verified that ammonium stripping occurred simultaneously with nitrification, reaching up to 37% removal of total dissolved nitrogen. The denitrification kinetic constants and the methanol consumptions were calculated for different values of (i) volatile suspended solids (VSS) content (25, 50 and 100 mL of centrifuged sludge in 1 L solution), (ii) pH interval (6.5-7.0, 7.5-8.0 and 8.5-9.0) and (iii) temperature (15, 20, 25 and 30ºC), using previously nitrified effluent. The maximum nitrification rate obtained was 38±2 mg NH4+-N/h/g VSS (25ºC, 0.5-1.0 mg O2/L, pH not controlled), consuming 4.4±0.3 mg CaCO3/mg NH4+-N.
The highest denitrification rate achieved was 19±1 mg (NO2--N+NO3--N)/h/g VSS (30ºC, 50 mL of sludge and pH between 7.5 and 8.0), with a C/N consumption ratio of 1.1±0.1 mg CH3OH/mg (NO2--N+NO3--N) and an overall alkalinity production of 3.7±0.3 mg CaCO3/mg (NO2--N+NO3--N). The denitrification process proved sensitive to all the studied parameters, while the nitrification reaction did not suffer significant change when the DO content was varied.
Keywords: mature sanitary landfill leachate, nitrogen removal, nitrification and denitrification parameters, lab-scale activated sludge biological reactor
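A rate reported in units of mg NH4+-N/h/g VSS is typically obtained as the slope of ammonium concentration versus time, normalized by the biomass concentration. The sketch below illustrates that calculation with hypothetical data (the time series, VSS value, and resulting rate are ours, not the study's):

```python
# Hedged sketch: zero-order nitrification rate from the least-squares slope
# of ammonium measurements over time, normalized by biomass (VSS).

def linear_slope(t, y):
    """Ordinary least-squares slope of y against t."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    num = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

hours = [0, 1, 2, 3, 4]
nh4 = [1500, 1350, 1200, 1050, 900]   # mg NH4+-N/L, hypothetical assay data
vss = 4.0                             # g VSS/L, hypothetical biomass content

rate = -linear_slope(hours, nh4) / vss   # 150 / 4 = 37.5 mg NH4+-N/h/g VSS
```

With these invented numbers the result happens to land near the reported maximum of 38±2 mg NH4+-N/h/g VSS, which is only meant to show the units working out.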
Procedia PDF Downloads 277
2294 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding glucose concentration references are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
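The repeated random train/test splitting described in the methodology can be sketched as follows. This is an assumed workflow, not the paper's code: the "model" is a hypothetical stand-in (a least-squares line on one synthetic NIR feature), and the data are invented, with glucose references in the stated 20 mg/dl increments.

```python
# Hedged sketch: ten repeated random 70/30 splits, averaging the correlation
# coefficient R between predicted and reference glucose values.
import random

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def fit_line(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

random.seed(0)
x = [0.5 * i + random.gauss(0, 0.1) for i in range(40)]  # synthetic NIR feature
y = [20 * i for i in range(40)]           # glucose reference, 20 mg/dl steps

rs = []
for _ in range(10):                       # ten repeated random splits
    idx = list(range(40))
    random.shuffle(idx)
    train, test = idx[:28], idx[28:]      # 70/30 split
    slope, icpt = fit_line([x[i] for i in train], [y[i] for i in train])
    pred = [slope * x[i] + icpt for i in test]
    rs.append(pearson_r(pred, [y[i] for i in test]))
mean_r = sum(rs) / len(rs)                # near 1 for this clean synthetic data
```

The averaging over repeated splits is what the abstract refers to as increasing generalization ability: a single lucky split can overstate R, while the mean over ten splits is more stable.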
Procedia PDF Downloads 95
2293 Optimization of a High-Growth Investment Portfolio for the South African Market Using Predictive Analytics
Authors: Mia Françoise
Abstract:
This report aims to develop a strategy that helps short-term investors benefit from the current economic climate in South Africa by utilizing technical analysis techniques and predictive analytics. As part of this research, value investing and technical analysis principles will be combined to maximize returns for South African investors while controlling volatility. As an emerging market, South Africa offers many opportunities for high growth in sectors where developed countries cannot grow at the same rate. Investing in South African companies with significant growth potential can be extremely rewarding. Although the risk involved is greater in countries with less developed markets and infrastructure, there is also more room for growth in such countries. According to recent research, the offshore market is expected to outperform the local market over the long term; however, short-term investments in the local market will likely be more profitable, as the Johannesburg Stock Exchange is predicted to outperform the S&P 500 over the short term. Instabilities in the economy contribute to increased market volatility, which can benefit investors if appropriately exploited. Price prediction and portfolio optimization comprise the two primary components of this methodology. As part of this process, statistics and other predictive modeling techniques will be used to predict the future performance of stocks listed on the Johannesburg Stock Exchange. Following the predictive data analysis, Modern Portfolio Theory, based on Markowitz's mean-variance theorem, will be applied to optimize the allocation of assets within an investment portfolio. By combining different assets within an investment portfolio, this optimization method produces a portfolio with an optimal ratio of expected risk to expected return.
This methodology aims to provide short-term investors with a stock portfolio that offers the best risk-to-return profile for stocks listed on the JSE by combining price prediction and portfolio optimization.
Keywords: financial stocks, optimized asset allocation, prediction modelling, South Africa
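The Markowitz mean-variance step described above amounts to computing, for candidate weight vectors w, the expected return w·mu and the variance w'Σw, and preferring weights with a better return-to-risk ratio. A minimal two-asset sketch, with entirely hypothetical return and covariance estimates (not JSE data):

```python
# Hedged sketch of mean-variance portfolio selection over a coarse
# long-only grid of two-asset weights. All inputs are hypothetical.
mu = [0.12, 0.08]          # hypothetical annual expected returns
cov = [[0.04, 0.01],
       [0.01, 0.02]]       # hypothetical covariance matrix

def portfolio(w):
    """Expected return w.mu and variance w' cov w for weights w."""
    ret = sum(wi * mi for wi, mi in zip(w, mu))
    var = sum(w[i] * cov[i][j] * w[j] for i in range(2) for j in range(2))
    return ret, var

candidates = [(i / 10, 1 - i / 10) for i in range(11)]
best = max(((w,) + portfolio(w) for w in candidates),
           key=lambda t: t[1] / t[2] ** 0.5)   # maximize return / volatility
w_best, ret_best, var_best = best
```

In practice the optimization would be solved continuously (e.g. by quadratic programming) over many assets rather than by grid search; the grid keeps the sketch self-contained.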
Procedia PDF Downloads 99
2292 Rupture Termination of the 1950 C. E. Earthquake and Recurrent Interval of Great Earthquake in North Eastern Himalaya, India
Authors: Rao Singh Priyanka, Jayangondaperumal R.
Abstract:
The Himalayan active fault has the potential to generate great earthquakes in the future, posing an existential threat to humans in the Himalayan and adjacent regions. Quantitative evaluation of accumulated and released interseismic strain is crucial to assess the magnitude and spatio-temporal variability of future great earthquakes along the Himalayan arc. To mitigate the destruction and hazards associated with such earthquakes, it is important to understand their recurrence cycle. The eastern Himalayan and Indo-Burman plate boundary systems offer oblique convergence across two orthogonal plate boundaries, resulting in a zone of distributed deformation both within and away from the plate boundary and in clockwise rotation of fault-bounded blocks. This seismically active region has a poorly documented historical archive of past large earthquakes. Paleoseismological studies confirm the surface rupture evidence of great continental earthquakes (Mw ≥ 8) along the Himalayan Frontal Thrust (HFT), which, along with geodetic studies, collectively provides the crucial information needed to assess seismic potential. These investigations reveal the rupture of three-quarters of the HFT during great events since medieval time, but with debatable opinions on the timing of events due to unclear evidence, neglect of transverse segment boundaries, and a lack of detailed studies. Recent paleoseismological investigations in the eastern Himalaya and Mishmi ranges confirm the primary surface ruptures of the 1950 C.E. great earthquake (M > 8). However, a seismic gap exists between the 1714 C.E. and 1950 C.E. Assam earthquakes that has not slipped since the 1697 C.E. event. Unlike the latest large blind 2015 Gorkha earthquake (Mw 7.8), the 1950 C.E. event was not triggered by the large 1947 C.E. event that occurred near the western edge of the great upper Assam event.
Moreover, the western segment of the eastern Himalaya has not witnessed any surface-breaking earthquake along the HFT over the past 300 yr. The frontal fault excavations reveal that during the 1950 earthquake, a ~3.1-m-high scarp along the HFT was formed by a co-seismic slip of 5.5 ± 0.7 m at Pasighat in the eastern Himalaya, while a 10-m-high scarp at Kamlang Nagar along the Mishmi Thrust in the Eastern Himalayan Syntaxis is the outcome of a dip-slip displacement of 24.6 ± 4.6 m along a 25 ± 5°E dipping fault. This event ruptured along the two orthogonal fault systems in an oblique thrust fault mechanism. Approximately 130 km west of the Pasighat site, the Himebasti village has witnessed two earthquakes, the historical 1697 Sadiya earthquake and the 1950 event, with a cumulative dip-slip displacement of 15.32 ± 4.69 m. At the Niglok site, Arunachal Pradesh, a cumulative slip of ~12.82 m during at least three events since pre 19585 B.P. has produced a ~6.2-m-high scarp, while the youngest scarp, ~2.4 m high, was produced during the 1697 C.E. event. The site preserves two deformational events along the eastern HFT, suggesting successive surface ruptures at an interval of ~850 years, while successive surface-rupturing earthquakes are lacking in the Mishmi Range, preventing an estimate of the recurrence cycle there.
Keywords: paleoseismology, surface rupture, recurrence interval, Eastern Himalaya
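The scarp-height and dip-slip figures quoted above are related by simple fault geometry: on a thrust dipping at angle δ, a dip-slip displacement s produces a vertical throw of s·sin δ. A small consistency check (our illustration, not a calculation from the paper) shows the Mishmi numbers are internally coherent:

```python
# Illustrative geometry check: vertical throw = dip slip x sin(dip).
import math

def vertical_throw(dip_slip_m, dip_deg):
    """Vertical component of a dip-slip displacement on a fault dipping dip_deg."""
    return dip_slip_m * math.sin(math.radians(dip_deg))

# Reported Mishmi Thrust values: 24.6 m of dip slip on a ~25 deg dipping fault.
throw = vertical_throw(24.6, 25)   # ~10.4 m, consistent with the ~10-m-high scarp
```

The same relation explains why the gentler-slipping Pasighat rupture (5.5 m of slip) left only a ~3.1-m-high scarp.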
Procedia PDF Downloads 84
2291 A Semantic and Concise Structure to Represent Human Actions
Authors: Tobias Strübing, Fatemeh Ziaeetabar
Abstract:
Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which considers touching and non-touching relations between the manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g., top, bottom) and dynamic spatial relations (e.g., moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows and a massive set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used for manipulation actions, which involve at most two hands. Here, we would like to extend this approach to a whole-body action descriptor and build a conjoint activity representation structure. For this purpose, we perform a statistical analysis that summarizes the current eSEC while preserving its features, and introduce a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve this, we computed the importance of each matrix row statistically, to see whether a particular row can be removed while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the distinctness of the predefined manipulation actions.
Therefore, by performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a substantial impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform for integration with body-limb descriptors and markedly increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis
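The row-reduction test described above can be sketched as a set-distinctness check: a row may be dropped only if, without it, every pair of action descriptors is still different. The mini "descriptors" below are hypothetical three-row matrices of symbolic relations, not real eSEC matrices:

```python
# Illustrative sketch of the row-removal criterion: drop a row only if all
# action matrices remain pairwise distinct without it.

def distinguishable_without_row(matrices, row):
    """True if removing the given row keeps every descriptor unique."""
    reduced = [tuple(r for i, r in enumerate(m) if i != row) for m in matrices]
    return len(set(reduced)) == len(matrices)

# Three toy "actions"; "T"/"N" = touching/non-touching, "Ab"/"Be" = above/below,
# "MA"/"GC" = moving apart / getting closer (hypothetical label set).
acts = [
    (("T", "N"), ("Ab", "Be"), ("MA", "GC")),
    (("T", "T"), ("Ab", "Be"), ("MA", "GC")),
    (("T", "N"), ("Ab", "Ab"), ("MA", "GC")),
]
ok_drop_2 = distinguishable_without_row(acts, 2)  # row 2 is identical everywhere
ok_drop_0 = distinguishable_without_row(acts, 0)  # row 0 separates acts 0 and 1
```

Here row 2 carries no discriminative information and can be dropped, while dropping row 0 would make two actions collapse into one descriptor.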
Procedia PDF Downloads 126
2290 Stress Concentration and Strength Prediction of Carbon/Epoxy Composites
Authors: Emre Ozaslan, Bulent Acar, Mehmet Ali Guler
Abstract:
Unidirectional composites are very popular structural materials used in the aerospace, marine, energy, and automotive industries thanks to their superior material properties. However, the mechanical behavior of composite materials is more complicated than that of isotropic materials because of their anisotropic nature. A stress concentration in the structure, such as a hole, complicates the problem further. Therefore, an enormous number of tests is required to understand the mechanical behavior and strength of composites that contain stress concentrations. Accurate finite element analysis and analytical models make it possible to understand the mechanical behavior and predict the strength of composites without an enormous number of tests, which cost considerable time and money. In this study, unidirectional carbon/epoxy composite specimens with a central circular hole were investigated in terms of stress concentration factor and strength prediction. Composite specimens with different specimen width (W) to hole diameter (D) ratios were tested to investigate the effect of hole size on stress concentration and strength. Specimens with the same width-to-hole-diameter ratio but varied sizes were also tested to investigate the size effect. Finite element analysis was performed to determine the stress concentration factor for all specimen configurations. For the quasi-isotropic laminate, the stress concentration factor increased approximately 15% as the W/D ratio decreased from 6 to 3. The point stress criterion (PSC), the inherent flaw method, and progressive failure analysis were compared in terms of predicting the strength of the specimens. All methods predicted the strength of the specimens with at most 8% error. PSC was better than the other methods for high values of W/D, whereas the inherent flaw method was more successful for low values of W/D. Increasing the W/D ratio by 4 times raises the failure strength of the composite specimen by 62.4%.
For constant W/D ratio specimens, all the strength prediction methods were more successful for smaller specimens than for larger ones. Increasing the specimen width and hole diameter together by 2 times reduces the specimen failure strength by 13.2%.
Keywords: failure, strength, stress concentration, unidirectional composites
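As background for the point stress criterion compared above: in its classical Whitney-Nuismer form for an infinite isotropic plate with a circular hole, failure is predicted when the stress at a characteristic distance d0 ahead of the hole edge reaches the unnotched strength, giving a notched-to-unnotched strength ratio of 2 / (2 + ξ² + 3ξ⁴) with ξ = R / (R + d0). The sketch below uses that textbook form with hypothetical R and d0 values; the study's orthotropic, finite-width implementation would differ:

```python
# Hedged sketch of the point stress criterion (Whitney-Nuismer, infinite
# isotropic plate with a circular hole). R and d0 below are hypothetical.

def psc_strength_ratio(radius_mm, d0_mm):
    """Notched / unnotched strength ratio predicted by the PSC."""
    xi = radius_mm / (radius_mm + d0_mm)
    return 2.0 / (2.0 + xi ** 2 + 3.0 * xi ** 4)

ratio = psc_strength_ratio(3.0, 1.0)   # hypothetical R = 3 mm, d0 = 1 mm
```

Because ξ grows with hole radius at fixed d0, the predicted ratio falls as the hole gets larger, which is the hole-size effect the specimens with varied W/D were designed to probe.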
Procedia PDF Downloads 156
2289 Challenging the Standard 24 Equal Quarter Tones Theory in Arab Music: A Case Study of Tetrachords Bayyātī and Ḥijāz
Authors: Nabil Shair
Abstract:
Arab music maqām (the Arab modal framework) is founded, among other main characteristics, on microtonal intervals. Notwithstanding the importance and multifaceted nature of intonation in Arab music, there is a paucity of studies examining this subject with scientific and quantitative approaches. The present-day theory of the Arab tone system is largely based on the pioneering treatise of Mīkhā’īl Mashāqah (1840), which proposes a theoretical division of the octave into 24 equal quarter tones. This kind of equal-tempered division is incompatible with the performance practice of Arab music, as many professional Arab musicians conceptualize additional pitches beyond the standard 24 notes per octave. In this paper, we challenge the standard theory of well-tempered quarter tones by implementing a quantitative analysis of the performed intonation of two prominent tetrachords in Arab music, namely bayyātī and ḥijāz. This analysis was conducted with the help of advanced computer programs, such as Sonic Visualiser and Tony, with which we obtained precise frequency data (Hz) for each tone every 0.01 second. The value (in cents) of all three intervals of each tetrachord was measured and compared to the theoretical intervals. As a result, a specific distribution of deviations from the equal-tempered division of the octave was detected, in particular a diminished first interval of bayyātī and a diminished second interval of ḥijāz. These intonations entail a considerable amount of flexibility, influenced mainly by surrounding tones, the direction and function of the measured tone, ornaments, text, the personal style of the performer, and interaction with the audience. This paper seeks to contribute to the existing literature on intonation in Arab music, as it is a vital part of the performance practice of this musical tradition.
In addition, the insights offered by this paper and its novel methodology might also contribute to the broadening of the existing pedagogic methods used to teach Arab music.
Keywords: Arab music, intonation, performance practice, music theory, oral music, octave division, tetrachords, music of the Middle East, music history, musical intervals
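The interval measurement described above reduces to converting two measured frequencies into an interval size in cents, via 1200·log2(f2/f1). A minimal sketch with hypothetical frequencies (a step near the neutral "three-quarter-tone" of roughly 150 cents, noticeably smaller than the tempered whole tone of 200 cents):

```python
# Sketch of the cents computation used to compare performed intervals
# against the theoretical equal-tempered quarter-tone grid.
import math

def cents(f1_hz, f2_hz):
    """Interval size in cents between two frequencies."""
    return 1200.0 * math.log2(f2_hz / f1_hz)

step = cents(293.66, 320.24)   # hypothetical measured pair, roughly 150 cents
```

A frequency track sampled every 0.01 s, as in the study, would yield a distribution of such cent values per interval rather than a single number, which is what makes the deviation analysis possible.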
Procedia PDF Downloads 55
2288 Prognosis of Patients with COVID-19 and Hematologic Malignancies
Authors: Elizabeth Behrens, Anne Timmermann, Alexander Yerkan, Joshua Thomas, Deborah Katz, Agne Paner, Melissa Larson, Shivi Jain, Seo-Hyun Kim, Celalettin Ustun, Ankur Varma, Parameswaran Venugopal, Jamile Shammo
Abstract:
Coronavirus Disease 2019 (COVID-19) causes persistent concern for poor outcomes in vulnerable populations. Patients with hematologic malignancies (HM) have been found to have higher COVID-19 case fatality rates than those without malignancy. While cytopenias are common in patients with HM, especially in those undergoing chemotherapy, hemoglobin (Hgb) and platelet count have not yet, to our best knowledge, been studied as potential prognostic indicators for patients with HM and COVID-19. The goal of this study is to identify factors that may increase the risk of mortality in patients with HM and COVID-19. In this single-center, retrospective, observational study, 65 patients with HM and laboratory-confirmed COVID-19 were identified between March 2020 and January 2021. Information on demographics, laboratory data on the day of COVID-19 diagnosis, and prognosis was extracted from the electronic medical record (EMR), chart reviewed, and analyzed using the statistical software SAS version 9.4. Chi-square testing was used for categorical variable analyses. Risk factors associated with mortality were established by logistic regression models. Non-Hodgkin lymphoma (37%), chronic lymphocytic leukemia (20%), and plasma cell dyscrasia (15%) were the most common HM. A higher Hgb level upon COVID-19 diagnosis was related to decreased mortality, odds ratio = 0.704 (95% confidence interval [CI]: 0.511-0.969; P = .0263). Platelet count on the day of COVID-19 diagnosis was lower in patients who ultimately died (mean 127 ± 72 K/uL, n = 10) than in patients who survived (mean 197 ± 92 K/uL, n = 55) (P = .0258). Female sex was related to decreased mortality, odds ratio = 0.143 (95% confidence interval [CI]: 0.026-0.785; P = .0353). There was no mortality difference between patients who were on treatment for HM on the day of COVID-19 diagnosis and those who were not (P = 1.000).
Lower Hgb and male sex are independent risk factors associated with increased mortality of HM patients with COVID-19. Clinicians should be especially attentive to patients with HM and COVID-19 who present with cytopenias. Larger multi-center studies are urgently needed to further investigate the impact of anemia, thrombocytopenia, and demographics on outcomes of patients with hematologic malignancies diagnosed with COVID-19.
Keywords: anemia, COVID-19, hematologic malignancy, prognosis
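An odds ratio with a 95% confidence interval, as reported above, follows directly from a logistic regression coefficient: OR = exp(β), CI = exp(β ± 1.96·SE). The β and SE below are hypothetical values back-derived to reproduce an OR near the reported 0.704 per unit of hemoglobin; they are not the study's estimates:

```python
# Hedged sketch: odds ratio and 95% CI from a logistic-regression
# coefficient. beta and se are hypothetical, back-derived illustrations.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """(OR, lower 95% bound, upper 95% bound) for a single LR coefficient."""
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))

or_, lo, hi = odds_ratio_ci(-0.351, 0.163)   # hypothetical beta and SE
```

Because the coefficient is negative, each additional unit of Hgb multiplies the odds of death by a factor below 1, which is the protective direction the abstract describes.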
Procedia PDF Downloads 150
2287 Predicting Stack Overflow Accepted Answers Using Features and Models with Varying Degrees of Complexity
Authors: Osayande Pascal Omondiagbe, Sherlock A. Licorish
Abstract:
Stack Overflow is a popular community question-and-answer portal used by practitioners to solve technology-related challenges during software development. Previous studies have shown that this forum is becoming a substitute for official programming language documentation. While tools have aided developers by presenting interfaces to explore Stack Overflow, developers often face challenges searching through many possible answers to their questions, and this extends development time. To this end, researchers have provided ways of predicting acceptable Stack Overflow answers using various modeling techniques. However, less attention has been dedicated to examining the performance and quality of typically used modeling methods, especially in relation to model and feature complexity. Such insights could be of practical significance to the many practitioners who use Stack Overflow. This study examines the performance and quality of various modeling methods used for predicting acceptable answers on Stack Overflow, drawn from 2014, 2015, and 2016. Our findings reveal significant differences in model performance and quality given the type of features and the complexity of the models used. Researchers examining classifier performance and quality and feature complexity may leverage these findings in selecting suitable techniques when developing prediction models.
Keywords: feature selection, modeling and prediction, neural network, random forest, stack overflow
Procedia PDF Downloads 132
2286 Exceptional Cost and Time Optimization with Successful Leak Repair and Restoration of Oil Production: West Kuwait Case Study
Authors: Nasser Al-Azmi, Al-Sabea Salem, Abu-Eida Abdullah, Milan Patra, Mohamed Elyas, Daniel Freile, Larisa Tagarieva
Abstract:
Well intervention was performed along with production logging tools (PLT) to detect sources of water and to check well integrity for two West Kuwait oil wells that had started to produce 100% water. For the first well, PLT was run to locate the source of water and check the perforations: no production was observed from the bottom two perforation intervals, and an intake of water was observed at the topmost perforation. A decision was then taken to extend the PLT survey from tag depth to the Y-tool. For the second well, the aim was to detect the source of water and determine whether there was a leak in the 7'' liner in front of the upper zones. Data could not be recorded in flowing conditions due to casing deformation at almost 8300 ft. For the first well, the interpretation of the PLT and well integrity data showed a hole in the 9 5/8'' casing from 8468 ft to 8494 ft producing the majority of the water, 2478 bbl/d; the upper perforation from 10812 ft to 10854 ft was taking 534 stb/d. For the second well, there was a hole in the 7'' liner from 8303 ft MD to 8324 ft MD producing 8334.0 stb/d of water, with an intake zone from 10322.9-10380.8 ft MD taking the whole fluid. To restore oil production, a W/O rig was mobilized to prevent dump flooding, and during the W/O, the leaking interval was confirmed for both wells. The leakage was cement squeezed and tested at 900-psi positive pressure and 500-psi drawdown pressure. The cement squeeze job was successful. After the W/O, the wells kept producing for cleaning, and eventually the water cut reduced to 0%. Regular PLT and well integrity logs are required to study well performance and well integrity issues; proper cement behind the casing is essential to well longevity and integrity; and the presence of the Y-tool is essential for monitoring well parameters and the ESP, facilitating well intervention tasks. Cost and time optimization in oil and gas, especially during rig operations, is crucial.
The PLT data quality and the accuracy of the interpretations contributed greatly to identifying the leakage interval accurately and, in turn, saved considerable time and reduced the repair cost by almost 35 to 45%. The added value here was related chiefly to the cost reduction and to effective, quick, and proper decision-making based on the economic environment.
Keywords: leak, water shut-off, cement, water leak
Procedia PDF Downloads 118
2285 Intra-miR-ExploreR, a Novel Bioinformatics Platform for Integrated Discovery of MiRNA:mRNA Gene Regulatory Networks
Authors: Surajit Bhattacharya, Daniel Veltri, Atit A. Patel, Daniel N. Cox
Abstract:
miRNAs have emerged as key post-transcriptional regulators of gene expression; however, identification of biologically relevant target genes for this epigenetic regulatory mechanism remains a significant challenge. To address this knowledge gap, we have developed a novel tool in R, Intra-miR-ExploreR, that facilitates integrated discovery of miRNA targets by incorporating target databases and novel target prediction algorithms, using statistical methods including Pearson and distance correlation on microarray data, to arrive at high-confidence intragenic miRNA target predictions. We have explored the efficacy of this tool using Drosophila melanogaster as a model organism for bioinformatics analyses and functional validation. A number of putative targets were obtained and validated using qRT-PCR analysis. Additional features of the tool include downloadable text files containing GO analysis from DAVID and PubMed links to literature related to gene sets. Moreover, we are constructing interaction maps of intragenic miRNAs, using both microarray and RNA-seq data, focusing on neural tissues to uncover regulatory codes via which these molecules regulate gene expression to direct cellular development.
Keywords: miRNA, miRNA:mRNA target prediction, statistical methods, miRNA:mRNA interaction network
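The two association measures named above can be sketched as follows (a Python illustration of the general statistics; the tool itself is in R, and the expression vectors below are hypothetical). Pearson correlation captures signed linear association, while distance correlation also detects nonlinear dependence and is zero only under independence:

```python
# Hedged sketch: Pearson and (biased sample) distance correlation between a
# hypothetical miRNA expression vector and a candidate target's expression.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def dist_corr(x, y):
    n = len(x)
    def centered(v):
        # Double-centered pairwise distance matrix.
        d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
        rm = [sum(row) / n for row in d]
        gm = sum(rm) / n
        return [[d[i][j] - rm[i] - rm[j] + gm for j in range(n)] for i in range(n)]
    A, B = centered(x), centered(y)
    dcov = (sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n**2) ** 0.5
    dvx = (sum(a * a for row in A for a in row) / n**2) ** 0.5
    dvy = (sum(b * b for row in B for b in row) / n**2) ** 0.5
    return dcov / (dvx * dvy) ** 0.5

mirna = [1.0, 2.0, 3.0, 4.0, 5.0]    # hypothetical miRNA expression
target = [5.1, 3.9, 3.1, 2.2, 0.8]   # hypothetical repressed target
r = pearson(mirna, target)            # strongly negative (repression-like)
dc = dist_corr(mirna, target)         # near 1 for this monotone relation
```

A strongly negative Pearson r together with a high distance correlation is the kind of signature one would expect for a miRNA repressing a direct target.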
Procedia PDF Downloads 513
2284 Recurrence of Papillary Thyroid Cancer with an Interval of 40 Years. Report of an Autopsy Case
Authors: Satoshi Furukawa, Satomu Morita, Katsuji Nishi, Masahito Hitosugi
Abstract:
A 75-year-old woman had undergone thyroidectomy forty years previously. Enlarged masses were seen at autopsy just above and below the left clavicle. We established the diagnosis of papillary thyroid cancer (PTC) and lung metastasis by histological examination. The prognosis of PTC is excellent; the 10-year survival rate ranges between 85 and 99%. Lung metastases may be found in 10% of patients with PTC. We report an unusual case of recurrence of PTC with metastasis to the lung.
Keywords: papillary thyroid cancer, lung metastasis, autopsy, histopathological findings
Procedia PDF Downloads 341
2283 A Study on Prediction Model for Thermally Grown Oxide Layer in Thermal Barrier Coating
Authors: Yongseok Kim, Jeong-Min Lee, Hyunwoo Song, Junghan Yun, Jungin Byun, Jae-Mean Koo, Chang-Sung Seok
Abstract:
Thermal barrier coating (TBC) is applied to gas turbine components to protect them from extremely high temperature conditions. Since the metallic substrate cannot endure such severe gas turbine conditions, delamination of the TBC can cause failure of the system. Thus, the delamination life of TBC is one of the most important issues in designing components operating at high temperature. Thermal stress caused by the thermally grown oxide (TGO) layer is known as one of the major failure mechanisms of TBC. Thermal stress from TGO mainly occurs at the interface between the TGO layer and the ceramic top-coat layer, and it is strongly influenced by the thickness and shape of the TGO layer. In this study, isothermal oxidation is conducted on coin-type TBC specimens prepared by the air plasma spray (APS) method. After isothermal oxidation at various temperature and time conditions, the thickness and shape (rumpling shape) of the TGO are investigated, and the test data are processed by numerical analysis. Finally, the test data are arranged into a mathematical prediction model with two variables (temperature and exposure time) which can predict the thickness and rumpling shape of the TGO.
Keywords: thermal barrier coating, thermally grown oxide, thermal stress, isothermal oxidation, numerical analysis
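The fitted model itself is not reproduced in the abstract. As an illustration only, oxide-growth models of this kind are often written as a power law in exposure time with an Arrhenius temperature term; the sketch below uses purely hypothetical constants A, Q, and n, not the values fitted in the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def tgo_thickness(temp_k, hours, A=2.0e3, Q=120e3, n=0.5):
    """Hypothetical TGO growth law: thickness = A * exp(-Q / (R*T)) * t**n.

    A (pre-exponential), Q (activation energy, J/mol), and n (growth exponent)
    are illustrative placeholders, not the constants fitted in the study.
    """
    return A * math.exp(-Q / (R * temp_k)) * hours ** n

# Thickness grows with both exposure time and temperature.
print(tgo_thickness(1373.0, 50.0))
print(tgo_thickness(1373.0, 100.0))
print(tgo_thickness(1423.0, 100.0))
```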
Procedia PDF Downloads 342
2282 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images
Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang
Abstract:
Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass estimation, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they carry no height information. Hence, exploring the integration of LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. 
The validation results show that for 2018 the mean absolute error (MAE) of the RFR model is 2.93 m and that of the CNN model 1.71 m, while for 2021 the MAE of the RFR model is 3.35 m and that of the CNN model 3.78 m. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network
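The MAE figures quoted above compare each predicted CHM pixel with the corresponding LiDAR-derived value. A minimal sketch of that accuracy metric on hypothetical canopy heights (the numbers below are illustrative, not the study's data):

```python
def mean_absolute_error(predicted, reference):
    """MAE between predicted canopy heights and the LiDAR-derived reference (metres)."""
    if len(predicted) != len(reference):
        raise ValueError("prediction and reference grids must align pixel for pixel")
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical 10 m pixels: Sentinel-2 predictions vs. airborne LiDAR truth.
predicted_chm = [10.0, 12.0, 14.0]
lidar_chm = [11.0, 11.0, 16.0]
print(mean_absolute_error(predicted_chm, lidar_chm))  # (1 + 1 + 2) / 3 ≈ 1.33 m
```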
Procedia PDF Downloads 95
2281 Soft Computing Approach for Diagnosis of Lassa Fever
Authors: Roseline Oghogho Osaseri, Osaseri E. I.
Abstract:
Lassa fever is an epidemic hemorrhagic fever caused by the Lassa virus, an extremely virulent arenavirus. This highly fatal disorder kills 10% to 50% of its victims, but those who survive its early stages usually recover and acquire immunity to secondary attacks. One of the major challenges in giving proper treatment is the lack of fast and accurate diagnosis, owing to the multiplicity of symptoms associated with the disease, which can resemble other clinical conditions and make early diagnosis difficult. This paper proposes an Adaptive Neuro-Fuzzy Inference System (ANFIS) for the prediction of Lassa fever. In the design of the diagnostic system, four main attributes were considered as input parameters and one output parameter for the system. The input parameters are Temperature on Admission (TA), White Blood Count (WBC), Proteinuria (P), and Abdominal Pain (AP). Sixty-one percent of the datasets were used in training the system, and the remainder in testing. Experimental results from this study gave a reliable and accurate prediction of Lassa fever when compared with clinically confirmed cases. We propose this Lassa fever diagnostic system to aid surgeons and medical healthcare practitioners in health care facilities without ready access to Polymerase Chain Reaction (PCR) diagnosis to predict possible Lassa fever infection.
Keywords: anfis, lassa fever, medical diagnosis, soft computing
Procedia PDF Downloads 271
2280 Power Grid Line Ampacity Forecasting Based on a Long-Short-Term Memory Neural Network
Authors: Xiang-Yao Zheng, Jen-Cheng Wang, Joe-Air Jiang
Abstract:
Improving line ampacity while using existing power grids is an important issue that electricity dispatchers now face. Using the information provided by the dynamic thermal rating (DTR) of transmission lines, an overhead power grid can operate safely. However, dispatchers usually lack real-time DTR information. Thus, this study proposes a method based on the long-short-term memory (LSTM) neural network model. The LSTM-based method predicts the DTR of lines using weather data provided by the Central Weather Bureau (CWB) of Taiwan. The possible thermal bottlenecks at different locations along the line and the margin of line ampacity can be determined in real time by the proposed LSTM-based prediction method. A case study targeting the 345 kV power grid of TaiPower in Taiwan is used to examine the performance of the proposed method. The simulation results show that the proposed method is useful in providing information for future smart grid applications.
Keywords: electricity dispatch, line ampacity prediction, dynamic thermal rating, long-short-term memory neural network, smart grid
Procedia PDF Downloads 284
2279 Analyzing the Performance of Machine Learning Models to Predict Alzheimer's Disease and its Stages Addressing Missing Value Problem
Authors: Carlos Theran, Yohn Parra Bautista, Victor Adankai, Richard Alo, Jimwi Liu, Clement G. Yedjou
Abstract:
Alzheimer's disease (AD) is a neurodegenerative disorder primarily characterized by deteriorating cognitive functions. AD has gained relevant attention in the last decade. An estimated 24 million people worldwide suffered from this disease by 2011; in 2016 an estimated 40 million were diagnosed with AD, and by 2050 the number affected is expected to reach 131 million. Therefore, detecting and confirming AD at its different stages is a priority for medical practice, to provide adequate and accurate treatments. Recently, Machine Learning (ML) models have been used to study AD's stages while handling missing values in a multiclass setting, focusing on the delineation of Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), and cognitively normal (CN) subjects. But, to the best of our knowledge, robust performance information for these models and an analysis of the missing data have not been presented in the literature. In this paper, we study the performance of five different machine learning models for multiclass prediction of AD's stages in terms of accuracy, precision, and F1-score. An analysis of three imputation methods to handle the missing-value problem is also presented. A framework that integrates ML models for multiclass prediction of AD's stages is proposed, achieving an average accuracy of 84%.
Keywords: alzheimer's disease, missing value, machine learning, performance evaluation
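The abstract does not name the three imputation methods that were compared. As one common baseline, mean imputation fills each missing entry of a feature with the mean of that feature's observed values; a minimal sketch, with `None` standing in for a missing measurement:

```python
def impute_mean(column):
    """Replace missing entries (None) in one feature column with the mean of observed values."""
    observed = [v for v in column if v is not None]
    if not observed:
        raise ValueError("cannot impute a column with no observed values")
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

# Hypothetical biomarker column with one missing measurement.
print(impute_mean([1.0, None, 3.0]))  # [1.0, 2.0, 3.0]
```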
Procedia PDF Downloads 255
2278 The Increasing of Perception of Consumers' Awareness about Sustainability Brands during Pandemic: A Multi Mediation Model
Authors: Silvia Platania, Martina Morando, Giuseppe Santisi
Abstract:
Introduction: In the last thirty years there has been constant talk of sustainable consumption and a "transition" of consumer lifestyles towards greater awareness of consumer choices (United Nations, 1992). The 2019 coronavirus (COVID-19) epidemic that has hit the world population since 2020 has had significant consequences in all areas of people's lives; individuals have been forced to change their behaviors, to redefine their own goals, priorities, practices, and lifestyles, and to rebuild themselves in the new situation dictated by the pandemic. Method (Participants and procedure): The data were collected through an online survey, using convenience sampling from the general population. The participants were 669 Italian consumers (female = 514, 76.8%; male = 155, 23.2%) who choose sustainability brands, aged between 18 and 65 years (Mage = 35.45; standard deviation, SD = 9.51). (Measures): The following measures were used: the Muncy-Vitell Consumer Ethics Scale; Attitude Toward Business Scale; Perceived Consumer Effectiveness Scale; Consumers' Perception of Sustainable Brand Attitudes. Results: Preliminary analyses were conducted to test our model. Pearson's bivariate correlations show that all variables of our model correlate significantly and positively, PCE with CPSBA (r = .56, p < .001). Furthermore, a CFA, following Harman's single-factor test, was used to diagnose the extent to which common-method variance was a problem. A comparison between the hypothesised model and a model with one factor (with all items loading on a unique factor) revealed that the former provided a better fit to the data on all the CFA fit measures [χ² [6, n = 669] = 7.228, p = 0.024, χ²/df = 1.20, RMSEA = 0.07 (CI = 0.051-0.067), CFI = 0.95, GFI = 0.95, SRMR = 0.04, AIC = 66.501; BIC = 132.150]. Next, a multiple mediation analysis was conducted to test our hypotheses. 
The results show a direct effect of PCE on ethical consumption behavior (β = .38) and on ATB (β = .23); furthermore, there is a direct effect on the CPSBA outcome (β = .34). In addition, there are mediating effects through ATB (C.I. = .022-.119, 95% confidence interval) and through CES (C.I. = .136-.328, 95% confidence interval). Conclusion: The spread of the COVID-19 pandemic has affected consumer consumption styles and has led to an increase in online shopping and purchases of sustainable products. Several theoretical and practical considerations emerge from the results of the study.
Keywords: decision making, sustainability, pandemic, multiple mediation model
Procedia PDF Downloads 110
2277 "Real and Symbolic in Poetics of Multiplied Screens and Images"
Authors: Kristina Horvat Blazinovic
Abstract:
In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist by using various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition – i.e., "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated - something else is repeated through them as well, even if, in some cases, the very idea of repetition is repeated. This paper examines serial images and single-channel or multi-channel artwork in the field of video/film art and video installations, which in a way implies the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacan registers of reality, i.e., the Imaginary - Symbolic – Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette, and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication - repetition, but not in the sense of "copying" and "repetition" of reality or the original, but of repeated repetitions of the simulacrum. Referential works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience) - repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images results in a new traumatic effect or cancels it. 
Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space and time and experienced temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as "quality of quantity." Video works intended to be displayed as a loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations of the occurrence and effects of time and space intervals as places and moments "between": the points of connection and separation, of continuity and stopping, by reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The scale of opportunities that can be explored in interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.
Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time
Procedia PDF Downloads 174
2276 Deformation Severity Prediction in Sewer Pipelines
Authors: Khalid Kaddoura, Ahmed Assad, Tarek Zayed
Abstract:
Sewer pipelines are prone to deterioration over time, and their deterioration does not follow a fixed downward pattern; this is due to the defects that propagate through their service life. Sewer pipeline defects are categorized into distinct groups, the two main ones being structural and operational defects. By definition, structural defects influence the structural integrity of the sewer pipeline, such as deformation, cracks, fractures, and holes, whereas operational defects are those that affect the flow of the sewer medium in the pipeline, such as roots, debris, attached deposits, and infiltration. The process by which each defect emerges follows a cause-and-effect relationship. Deformation, the change of the sewer pipeline geometry, is one influencing defect found in many sewer pipelines due to many surrounding factors; it can lead to collapse if the deformation percentage exceeds 15%. Therefore, it is essential to predict the deformation percentage before confronting such a situation. Accordingly, this study predicts the percentage of the deformation defect in sewer pipelines using multiple regression analysis. Several factors expected to influence the severity of the deformation defect are considered in establishing the model. Besides, the study constructs a time-based curve to understand how the defect evolves over time. The study is thus expected to be an asset for decision-makers, providing informative conclusions about deformation defect severity; as a result, inspections will be minimized and so will the budgets.
Keywords: deformation, prediction, regression analysis, sewer pipelines
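The regression-plus-time-curve idea above can be sketched with a single-predictor ordinary least-squares fit; the study's actual model is multiple regression over several factors, and the pipe ages and deformation percentages below are hypothetical:

```python
def fit_linear(ages, deformations):
    """Ordinary least squares for deformation% = a + b * age (single-factor sketch)."""
    n = len(ages)
    mx, my = sum(ages) / n, sum(deformations) / n
    b = sum((x - mx) * (y - my) for x, y in zip(ages, deformations)) / \
        sum((x - mx) ** 2 for x in ages)
    a = my - b * mx
    return a, b

def age_at_threshold(a, b, threshold=15.0):
    """Pipe age at which the fitted line reaches the 15% collapse-risk threshold."""
    return (threshold - a) / b

# Hypothetical inspection records: (age in years, deformation %).
a, b = fit_linear([0.0, 10.0, 20.0, 30.0], [1.0, 4.0, 7.0, 10.0])
print(a, b)                     # intercept 1.0, slope 0.3 %/year for this data
print(age_at_threshold(a, b))   # ≈ 46.7 years until the 15% threshold
```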
Procedia PDF Downloads 189
2275 Early Prediction of Cognitive Impairment in Adults Aged 20 Years and Older using Machine Learning and Biomarkers of Heavy Metal Exposure
Authors: Ali Nabavi, Farimah Safari, Mohammad Kashkooli, Sara Sadat Nabavizadeh, Hossein Molavi Vardanjani
Abstract:
Cognitive impairment presents a significant and increasing health concern as populations age. Environmental risk factors such as heavy metal exposure are suspected contributors, but their specific roles remain incompletely understood. Machine learning offers a promising approach to integrate multi-factorial data and improve the prediction of cognitive outcomes. This study aimed to develop and validate machine learning models to predict early risk of cognitive impairment by incorporating demographic, clinical, and biomarker data, including measures of heavy metal exposure. A retrospective analysis was conducted using 2011-2014 National Health and Nutrition Examination Survey (NHANES) data. The dataset included participants aged 20 years and older who underwent cognitive testing. Variables encompassed demographic information, medical history, lifestyle factors, and biomarkers such as blood and urine levels of lead, cadmium, manganese, and other metals. Machine learning algorithms were trained on 90% of the data and evaluated on the remaining 10%, with performance assessed through metrics such as accuracy, area under the curve (AUC), and sensitivity. The analysis included 2,933 participants. The stacking ensemble model demonstrated the highest predictive performance, achieving an AUC of 0.778 and a sensitivity of 0.879 on the test dataset. Key predictors included age, gender, hypertension, education level, urinary cadmium, and blood manganese levels. The findings indicate that machine learning can effectively predict the risk of cognitive impairment using a comprehensive set of clinical and environmental exposure data. Incorporating biomarkers of heavy metal exposure improved prediction accuracy and highlighted the role of environmental factors in cognitive decline. Further prospective studies are recommended to validate the models and assess their utility over time.
Keywords: cognitive impairment, heavy metal exposure, predictive models, aging
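The sensitivity of 0.879 reported above is the fraction of truly impaired participants that the model flags. A minimal sketch of the metric on hypothetical test-set labels (not the study's data):

```python
def sensitivity(y_true, y_pred):
    """Sensitivity (recall on the positive class): TP / (TP + FN).

    Labels: 1 = cognitively impaired, 0 = not impaired.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp + fn == 0:
        raise ValueError("no positive cases in y_true")
    return tp / (tp + fn)

# Hypothetical ground-truth labels vs. model predictions.
print(sensitivity([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))  # 2 of 3 impaired cases caught
```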
Procedia PDF Downloads 4
2274 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model
Authors: Aminah Muchdar, Nuraeni, Eddy
Abstract:
The aims of the research are: (1) to verify the CropSyst crop model against experimental field data for soybean and (2) to predict planting time and potential yield of soybean using the CropSyst model. The research is divided into several stages: (1) a calibration stage, conducted in the field from June until September 2015, and (2) a model application stage, in which the data obtained from field calibration are entered into the CropSyst model. The data required by the model are climate data, soil data, and crop genetic data. The agreement between the field results and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, showing that the CropSyst model performs well. The calculated RRMSE is 1.922%, showing that the prediction error of the simulation relative to the field results is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybean, mainly on rain-fed land, is at the end of the rainy season; in the study above, the first planting time (June 2, 2015) gave the highest production, because at that time there was still some rain. Tanggamus varieties are more resistant to delayed planting because their percentage decrease in yield per decade is lower than the average of all varieties.
Keywords: soybean, CropSyst, calibration, efficiency index, RRMSE
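The abstract does not spell out the EF and RRMSE formulas. Assuming the common Nash-Sutcliffe-style efficiency index and an RMSE expressed relative to the observed mean (both standard in crop-model evaluation), a sketch on hypothetical yield data:

```python
import math

def efficiency_index(observed, simulated):
    """Nash-Sutcliffe-style efficiency: 1.0 means the simulation matches observations exactly."""
    mo = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mo) ** 2 for o in observed)
    return 1.0 - sse / sst

def rrmse(observed, simulated):
    """Relative RMSE, expressed as a percentage of the observed mean."""
    mo = sum(observed) / len(observed)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))
    return 100.0 * rmse / mo

# Hypothetical observed vs. simulated soybean yields (t/ha).
obs, sim = [2.0, 4.0, 6.0], [3.0, 4.0, 5.0]
print(efficiency_index(obs, sim))  # 0.75
print(rrmse(obs, sim))             # ≈ 20.4 %
```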
Procedia PDF Downloads 182
2273 Thermal and Starvation Effects on Lubricated Elliptical Contacts at High Rolling/Sliding Speeds
Authors: Vinod Kumar, Surjit Angra
Abstract:
The objective of this theoretical study is to develop simple design formulas for the prediction of minimum film thickness and maximum mean film temperature rise in lightly loaded high-speed rolling/sliding lubricated elliptical contacts incorporating the starvation effect. Herein, the reported numerical analysis focuses on thermoelastohydrodynamically lubricated rolling/sliding elliptical contacts, considering the Newtonian rheology of the lubricant for a wide range of operating parameters, namely load characterized by Hertzian pressure (PH = 0.01 GPa to 0.10 GPa), rolling speed (>10 m/s), slip parameter (S varies up to 1.0), and ellipticity ratio (k = 1 to 5). Starvation is simulated by systematically reducing the inlet supply. This analysis reveals that the influences of load, rolling speed, and level of starvation on the minimum film thickness are significant. However, the maximum mean film temperature rise is strongly influenced by slip in addition to load, rolling speed, and level of starvation. In the presence of starvation, a reduction in minimum film thickness and an increase in maximum mean film temperature are observed. Based on the results of this study, empirical relations are developed for the prediction of dimensionless minimum film thickness and dimensionless maximum mean film temperature rise at the contacts in terms of the various operating parameters.
Keywords: starvation, lubrication, elliptical contact, traction, minimum film thickness
Procedia PDF Downloads 392
2272 An Experimental Study on Heat and Flow Characteristics of Water Flow in Microtube
Authors: Zeynep Küçükakça, Nezaket Parlak, Mesut Gür, Tahsin Engin, Hasan Küçük
Abstract:
In the current research, single-phase fluid flow and heat transfer characteristics are experimentally investigated. The experiments cover the transition zone, with Reynolds numbers ranging from 100 to 4800, using fused silica and stainless steel microtubes with diameters of 103-180 µm. The applicability of the Logarithmic Mean Temperature Difference (LMTD) method is examined, and an experimental method is developed to calculate the heat transfer coefficient. Heat is supplied by a water jacket surrounding the microtubes, and heat transfer coefficients are obtained by the LMTD method. The results are compared with data obtained from correlations available in the literature. The experimental results indicate that the Nusselt numbers of microtube flows do not accord with conventional results when the Reynolds number is lower than 1000; beyond that, the Nusselt number approaches the conventional theoretical prediction. Moreover, scaling effects at the micro scale, such as axial conduction, viscous heating, and entrance effects, are discussed. Regarding fluid characteristics, the friction factor is well predicted by conventional theory, and the conventional friction prediction is valid for water flow through microtubes with a relative surface roughness less than about 4%.
Keywords: microtube, laminar flow, friction factor, heat transfer, LMTD method
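The LMTD evaluation described above can be sketched as follows. The heat rate, surface area, and terminal temperature differences below are hypothetical values for illustration, not the paper's measurements:

```python
import math

def lmtd(dt_in, dt_out):
    """Log-mean temperature difference between the two ends of the test section (K)."""
    if dt_in == dt_out:
        return dt_in  # the log-mean limit when both ends are equal
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

def heat_transfer_coefficient(q_watts, area_m2, dt_in, dt_out):
    """h = Q / (A * LMTD), in W/(m^2*K)."""
    return q_watts / (area_m2 * lmtd(dt_in, dt_out))

# Hypothetical microtube run: 5 W exchanged over 1e-4 m^2 of inner surface,
# terminal temperature differences of 20 K and 10 K.
print(lmtd(20.0, 10.0))                                  # ≈ 14.43 K
print(heat_transfer_coefficient(5.0, 1e-4, 20.0, 10.0))  # ≈ 3466 W/(m^2*K)
```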
Procedia PDF Downloads 460
2271 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks
Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian
Abstract:
Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including recently developed artificial intelligence methods. In recent years, optimization algorithms, such as ant colony algorithms, genetic algorithms, and imperialist competitive algorithms, have been used to minimize artificial network errors. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils, with parameters such as pile diameter, pile buried length, eccentricity of load, and undrained shear resistance of the soil, were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks over back-propagation artificial neural networks as well as the Broms and Hansen methods.
Keywords: artificial neural network, clayey soil, imperialist competition algorithm, lateral bearing capacity, short pile
Procedia PDF Downloads 153
2270 Hypocalcaemia Inducing Heart Failure: A Rare Presentation
Authors: A. Kherraf, M. Bouziane, L. Azzouzi, R. Habbal
Abstract:
Introduction: Hypocalcaemia is a rare cause of heart failure. We report the clinical case of a young patient with reversible dilated cardiomyopathy secondary to hypocalcaemia in the context of hypoparathyroidism. Clinical case: We report the clinical case of a 23-year-old patient with a history of thyroidectomy for papillary thyroid carcinoma 3 years previously, who presented to the emergency room with progressive-onset dyspnea and edema of the lower limbs. Clinical examination showed hypotension at 90/70 mmHg, tachycardia at 102 bpm, and edema of the lower limbs. The ECG showed a regular sinus rhythm with a corrected QT interval prolonged to 520 ms. The chest x-ray showed cardiomegaly. Echocardiography revealed dilated cardiomyopathy with biventricular dysfunction and a left ventricular ejection fraction of 45%, as well as moderate mitral insufficiency due to restriction of the posterior mitral leaflet, moderate tricuspid insufficiency, and a dilated inferior vena cava with a pulmonary arterial pressure estimated at 46 mmHg. Blood tests revealed severe hypocalcemia at 38 mg/L with normal albumin and thyroxine levels, as well as hyperphosphatemia and increased TSH. The patient received calcium and vitamin D supplementation and was treated with beta blockers, ACE inhibitors, and diuretics, with good progress and progressive normalization of cardiac function. Discussion: The cardiovascular manifestations of hypocalcaemia usually appear with profoundly low serum calcium levels. These can include hypotension, arrhythmias, ventricular fibrillation, a prolonged QT interval, or even heart failure. Heart failure is a rare and serious complication of hypocalcemia, but it is most often characterized by complete normalization of myocardial function after treatment. The etiology of the hypocalcaemia in this case was probably related to accidental parathyroid removal during thyroidectomy, which is why careful monitoring of calcium levels is recommended after surgery. 
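The case quotes a corrected QT of 520 ms at 102 bpm but does not state which correction was applied; Bazett's formula is the most common choice and is sketched below. The 399 ms measured QT in the example is back-calculated for illustration, not a value taken from the report.

```python
import math

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Bazett's correction: QTc = QT / sqrt(RR), with the RR interval in seconds."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

# At 60 bpm, RR = 1 s, so QTc equals the measured QT.
print(qtc_bazett(400.0, 60.0))  # 400.0 ms
# A measured QT of ~399 ms at 102 bpm corrects to roughly 520 ms.
print(qtc_bazett(399.0, 102.0))
```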
Conclusion: Hypocalcemic heart failure is a rare but reversible form of heart disease. Systematic monitoring of serum calcium should be performed in all patients after thyroid surgery to avoid complications related to hypoparathyroidism.
Keywords: hypocalcemia, heart failure, thyroid surgery, hypoparathyroidism
Procedia PDF Downloads 143
2269 Discovering New Organic Materials through Computational Methods
Authors: Lucas Viani, Benedetta Mennucci, Soo Young Park, Johannes Gierschner
Abstract:
Organic semiconductors have attracted the attention of the scientific community in the past decades due to their unique physicochemical properties, allowing new designs and alternative device fabrication methods. To date, organic electronic devices have been largely based on conjugated polymers, mainly due to their easy processability. In recent years, due to moderate ET and CT efficiencies and the ill-defined nature of polymeric systems, the focus has been shifting to small conjugated molecules with a well-defined chemical structure, easier control of intermolecular packing, and enhanced CT and ET properties. This has led to the synthesis of new small molecules, followed by the growth of their crystalline structure and ultimately by device preparation. This workflow is commonly followed without clear knowledge of the ET and CT properties of the macroscopic systems, which may lead to financial and time losses, since not all materials will deliver the properties and efficiencies demanded by current standards. In this work, we present a theoretical workflow designed to predict the key ET properties of these new materials prior to synthesis, thus speeding up the discovery of promising materials. It is based on quantum-mechanical, hybrid, and classical methodologies, starting from a single-molecule structure and finishing with the prediction of its packing structure and of properties of interest such as static and averaged excitonic couplings and exciton diffusion length.
Keywords: organic semiconductor, organic crystals, energy transport, excitonic couplings
Procedia PDF Downloads 253
2268 Optically Active Material Based on Bi₂O₃@Yb³⁺, Nd³⁺ with High Intensity of Upconversion Luminescence in Red and Green Region
Authors: D. Artamonov, A. Tsibulnikova, I. Samusev, V. Bryukhanov, A. Kozhevnikov
Abstract:
The synthesis and luminescent properties of the Yb₂O₃, Nd₂O₃@Bi₂O₃ complex with upconversion generation are discussed in this work. The obtained samples were measured in the visible region of the spectrum under excitation with a wavelength of 980 nm. The studies showed that the obtained complexes have a high degree of stability and intense luminescence in the wavelength range of 400-750 nm. Consideration of the time dependence of the intensity of the upconversion luminescence allowed us to conclude that the enhancement of the intensity occurs in the time interval from 5 to 30 min, followed by the appearance of a stationary mode.
Keywords: lasers, luminescence, upconversion photonics, rare earth metals
Procedia PDF Downloads 85
2267 Farmers Perception in Pesticide Usage in Curry Leaf (Murraya koenigii (L.))
Authors: Swarupa Shashi Senivarapu Vemuri
Abstract:
Curry leaf (Murraya koenigii (L.)) exported from India has had insecticide residues above maximum residue limits, which are hazardous to consumer health and caused rejection of the commodity at the point of entry in Europe and the Middle East, resulting in a check on the export of curry leaf. Hence, to study current pesticide usage patterns in major curry leaf growing areas, a survey on pesticide use was carried out in curry leaf growing areas of Guntur district of Andhra Pradesh during 2014-15 by interviewing curry leaf farmers with a questionnaire to assess their knowledge and practices on crop cultivation and their general awareness of pesticide recommendations and use. Education levels of the farmers were low: 13.96 per cent had only high school education, and 13.96% were illiterate. 18.60% of the farmers cultivated curry leaf on less than 1 acre of land, 32.56% on 2-5 acres, 20.93% on 5-10 acres, and 27.91% on more than 10 acres. The majority of curry leaf farmers (93.03%) used pesticide mixtures rather than applying a single pesticide at a time, basically to save time, labour, and money and to combat two or more pests with a single spray. About 53.48% of farmers applied pesticides at 2-day intervals, followed by 34.89% at 4-day intervals, and about 11.63% sprayed at weekly intervals. Only 27.91% of farmers thought that the quantity of pesticides used on their farm was adequate; 90.69% perceived pesticides as helpful in getting good returns. 83.72% of farmers felt that crop change is the only way to control the sucking pests that damage the whole crop. About 4.65% of the curry leaf farmers opined that integrated pest management practices are an alternative to pesticides, and only 11.63% of farmers considered natural control an alternative to pesticides. About 65.12% of farmers believed that a high pesticide dose gives higher yields. 
However, in general, curry leaf farmers preferred to contact pesticide dealers (100%) and were not interested in contacting either an agricultural officer or a scientist. Farmers were aware of the endosulfan ban (93.04%); in contrast, only 65.12 per cent of farmers knew about the ban of monocrotophos on vegetables. Very few farmers knew about pesticide residues and decontamination by washing. Extension educational interventions are necessary to produce fresh curry leaf free from pesticide residues.
Keywords: curry leaf, decontamination, endosulfan, leaf roller, psyllids, tetranychid mite
Procedia PDF Downloads 335