Search results for: shape prediction
3306 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early-prediction classifier model for acute delirium, in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal-medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days; the associated mortality rate was about 2.22% over the past three years. We therefore aimed to build a delirium prediction classifier through big-data analysis and machine learning to detect delirium early. Method: This is a retrospective study that extracted delirium-related characteristic factors of intensive care unit patients from an artificial-intelligence big-data database for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment < 4 points, an ICU stay of less than 24 hours, or no CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within 30 days of hospitalization, was treated as an event, and the cumulative data from ICU admission to the prediction time point were extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS score, APACHE-II score, number of indwelling invasive catheters, physical restraint, and use of sedative and hypnotic drugs.
After feature cleaning, processing, and KNN interpolation to fill missing values, a total of 54,595 case events were available for machine learning. Events from May 1 to November 30, 2022, were used as model training data, of which 80% formed the training set and 20% the internal validation set; events from December 1 to December 31, 2022, were used as the external validation set. Model inference and performance evaluation were then performed, and the model was retrained with adjusted parameters. Results: Four machine learning models were analyzed and compared: XGBoost, Random Forest, Logistic Regression, and Decision Tree. The average accuracy of internal validation was highest for Random Forest (AUC = 0.86); in external validation, Random Forest and XGBoost shared the highest AUC of 0.86; and the average cross-validation accuracy was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so no objective and continuous monitoring data are available to help clinical staff identify and predict the occurrence of delirium more accurately. It is hoped that predictive models built through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, in combination with PADIS delirium care measures, provide individualized non-pharmacological interventions to maintain patient safety and improve the quality of care.
Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
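The KNN interpolation step used to supplement missing feature values can be sketched as follows; the toy records, feature choice, and k value are illustrative assumptions, not the study's actual configuration.

```python
import math

def knn_impute(rows, target_idx, k=3):
    """Fill the missing value (None) at column target_idx of each row
    using the mean of that column over the k nearest complete rows.
    Distance is Euclidean over the other, non-missing columns."""
    complete = [r for r in rows if None not in r]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2
                             for i, (x, y) in enumerate(zip(a, b))
                             if i != target_idx and x is not None))

    filled = []
    for r in rows:
        if r[target_idx] is None:
            neighbours = sorted(complete, key=lambda c: dist(r, c))[:k]
            value = sum(n[target_idx] for n in neighbours) / len(neighbours)
            r = r[:target_idx] + [value] + r[target_idx + 1:]
        filled.append(r)
    return filled

# Toy records: [age, ICU stay hours, RASS score]
records = [
    [65, 48, -1],
    [70, 52, -2],
    [40, 10, 0],
    [68, 50, None],   # RASS missing; nearest rows are the first two
]
imputed = knn_impute(records, target_idx=2, k=2)
```

The imputed RASS value for the last record is the mean of its two nearest neighbours' scores.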
Procedia PDF Downloads 76
3305 Prediction of Super-Response to Cardiac Resynchronisation Therapy
Authors: Vadim A. Kuznetsov, Anna M. Soldatova, Tatyana N. Enina, Elena A. Gorbatenko, Dmitrii V. Krinochkin
Abstract:
The aim of the study was to evaluate potential parameters associated with super-response to CRT. Methods: 60 CRT patients (mean age 54.3 ± 9.8 years; 80% men) with congestive heart failure (CHF) of NYHA functional class II-IV and left ventricular ejection fraction < 35% were enrolled. At baseline, 1 month, 3 months, and every 6 months after implantation, clinical, electrocardiographic, and echocardiographic parameters and NT-proBNP level were evaluated. According to the best decrease of left ventricular end-systolic volume (LVESV) (mean follow-up period 33.7 ± 15.1 months), patients were classified as super-responders (SR) (n=28; reduction in LVESV ≥ 30%) and non-SR (n=32; reduction in LVESV < 30%). Results: At baseline, the groups differed in age (58.1 ± 5.8 years in SR vs 50.8 ± 11.4 years in non-SR; p=0.003), gender (female gender 32.1% vs 9.4%, respectively; p=0.028), and width of the QRS complex (157.6 ± 40.6 ms in SR vs 137.6 ± 33.9 ms in non-SR; p=0.044). The percentage of LBBB was equal between groups (75% in SR vs 59.4% in non-SR; p=0.274). All parameters of mechanical dyssynchrony were higher in SR, but only the difference in left ventricular pre-ejection period (LVPEP) was statistically significant (153.0 ± 35.9 ms vs. 129.3 ± 28.7 ms; p=0.032). NT-proBNP level was lower in SR (1581 ± 1369 pg/ml vs 3024 ± 2431 pg/ml; p=0.006). The survival rates were 100% in SR and 90.6% in non-SR (log-rank test P=0.002). Multiple logistic regression analysis showed that LVPEP (HR 1.024; 95% CI 1.004-1.044; P=0.017), baseline NT-proBNP level (HR 0.628; 95% CI 0.414-0.953; P=0.029), and age at baseline (HR 1.094; 95% CI 1.009-1.168; P=0.030) were independent predictors of CRT super-response. ROC curve analysis demonstrated a sensitivity of 71.9% and a specificity of 82.1% (AUC=0.827; p < 0.001) for this model in the prediction of super-response to CRT. Conclusion: Super-response to CRT is associated with better long-term survival. Presence of LBBB was not associated with super-response.
LVPEP, NT-proBNP level, and age at baseline can be used as independent predictors of CRT super-response.
Keywords: cardiac resynchronisation therapy, super-response, congestive heart failure, left bundle branch block
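The sensitivity/specificity pair reported from the ROC analysis is computed at a chosen score threshold; a minimal sketch of that computation on toy model scores and labels (not the study's data):

```python
def sens_spec(scores, labels, threshold):
    """Classify score >= threshold as a predicted super-responder (1).
    Returns (sensitivity, specificity) from the confusion counts."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Toy model scores and true super-response labels
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
sensitivity, specificity = sens_spec(scores, labels, threshold=0.5)
```

Sweeping the threshold over all scores and plotting sensitivity against (1 - specificity) yields the ROC curve whose area is the reported AUC.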
Procedia PDF Downloads 399
3304 Microencapsulation of Phenobarbital by Ethyl Cellulose Matrix
Authors: S. Bouameur, S. Chirani
Abstract:
The aim of this study was to evaluate the potential use of ethyl cellulose in the preparation of microspheres as a drug delivery system for sustained release of phenobarbital. The microspheres were prepared by the solvent evaporation technique using ethyl cellulose as the polymer matrix at a ratio of 1:2, dichloromethane as the solvent, and 1% polyvinyl alcohol as the processing medium to solidify the microspheres. Size, shape, drug loading capacity, and entrapment efficiency were studied.
Keywords: phenobarbital, microspheres, ethyl cellulose, polyvinyl alcohol
Procedia PDF Downloads 361
3303 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study analyzes climate change in Albania and its potential effects on cereal yields. Initially, monthly temperature and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when modeling cereal yield behavior, especially when significant changes in weather conditions are observed. In the second part of the study, linear and nonlinear models explaining cereal yield are therefore constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to cereal yield and each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so there is no multicollinearity problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
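The regression analysis described above rests on ordinary least squares; a minimal single-predictor sketch (closed-form OLS on toy temperature/yield data with an assumed negative trend, whereas the study's actual model uses several predictors):

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x (closed-form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Toy data: average temperature (°C) vs cereal yield (t/ha)
temp = [12.0, 12.5, 13.0, 13.5, 14.0]
yield_ = [4.0, 3.8, 3.6, 3.4, 3.2]
a, b = ols_fit(temp, yield_)
```

A negative slope b corresponds to the reported finding that rising average temperature reduces cereal yield.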
Procedia PDF Downloads 92
3302 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. 
Acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition transformation, while thermoplastic polyurethane (TPU) is prestressed by the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and to validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs were generated through topology optimization, and samples were printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
Procedia PDF Downloads 209
3301 Transport Medium That Prevents the Conversion of Helicobacter Pylori to the Coccoid Form
Authors: Eldar Mammadov, Konul Mammadova, Aytaj Ilyaszada
Abstract:
Background: Many studies have shown that H. pylori transforms into a coccoid form that cannot be cultured and has poor metabolic activity. In this study, we succeeded in preserving the spiral shape of H. pylori for a long time by preparing a biphasic transport medium with a solid bottom (Mueller-Hinton agar with 7% HRBC (horse red blood cells), 5 ml) and a liquid top part (BH (brain heart) broth + HS (horse serum) + 7% HRBC + antibiotics (vancomycin 5 mg, trimethoprim lactate 25 mg, polymyxin B 1250 I.U.)) in cell culture flasks with filter caps. For comparison, we also used a BH broth medium with 7% HRBC used for the transport of H. pylori. Methods: Seven rapid-urease-test-positive biopsy specimens were inoculated into both the biphasic medium and the BH broth medium with 7% HRBC, placed in CO2 GasPak packages, and sent to the laboratory. Both media were then kept at 37°C for 1 day. After microscopic, PCR, and urease test diagnosis, they were transferred to Columbia agar with 7% HRBC. Cultures were incubated at 37°C for 5-7 days and examined for colony characteristics and bacterial morphology. E-test antimicrobial susceptibility testing was performed. Results: There were 3 growths from the biphasic transport medium passed to Columbia agar with 7% HRBC and only 1 growth from the BH broth medium with 7% HRBC. After the first 3 days in the BH broth medium with 7% HRBC, H. pylori passed into the coccoid form and its biochemical activity weakened, while its spiral shape did not change for 2-3 weeks in the biphasic transport medium. Conclusions: Using the biphasic transport medium we have prepared, we can culture the bacterium by preventing H. pylori from converting to the coccoid form. In our opinion, this may enable wider use of the culture method for diagnosis of H. pylori, antibiotic susceptibility studies, and molecular genetic analysis.
Keywords: clinical trial, H. pylori, coccoid form, transport medium
Procedia PDF Downloads 73
3300 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis
Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic
Abstract:
What people say on social media has become a rich source of information for understanding social behavior. Specifically, the growing use of Twitter for political communication has created opportunities to learn the opinions of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country, which has led to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. In contrast, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendencies. To this end, a new strategy, named Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative, or neutral) and classifying them according to the political party they support. From these individual political tendencies, the global political prediction for each political party is calculated. Two different strategies for the sentiment analysis are proposed: one based on Positive and Negative words Matching (PNM) and a second based on a Neural Networks Strategy (NNS). The complete TAS strategy has been run in a big-data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy at analyzing tweet sentiment. In addition, this research shows the viability of the TAS strategy for obtaining the global trend in a political context made up of multiple parties, with an error lower than 23%.
Keywords: political tendency, prediction, sentiment analysis, Twitter
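The PNM strategy amounts to counting lexicon hits per tweet; a minimal sketch, with an illustrative (not the paper's) English lexicon:

```python
POSITIVE = {"great", "support", "win", "good", "best"}
NEGATIVE = {"bad", "corrupt", "fail", "worst", "lose"}

def pnm_sentiment(tweet):
    """Positive and Negative words Matching: count lexicon hits and
    return 'positive', 'negative' or 'neutral'."""
    words = tweet.lower().split()
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

labels = [pnm_sentiment(t) for t in [
    "great win for the party",
    "corrupt and bad policies",
    "the debate is tonight",
]]
```

Aggregating these per-tweet labels over each user, and then over each party's supporters, gives the global prediction the paper describes.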
Procedia PDF Downloads 238
3299 Predicting High-Risk Endometrioid Endometrial Carcinomas Using Protein Markers
Authors: Yuexin Liu, Gordon B. Mills, Russell R. Broaddus, John N. Weinstein
Abstract:
The lethality of endometrioid endometrial cancer (EEC) is primarily attributable to high-stage disease. However, no biomarkers are available that predict EEC patient stage at the time of diagnosis. We aim to develop a predictive scheme to help in this regard. Using reverse-phase protein array expression profiles for 210 EEC cases from The Cancer Genome Atlas (TCGA), we constructed a Protein Scoring of EEC Staging (PSES) scheme for surgical stage prediction. We validated and evaluated its diagnostic potential in an independent cohort of 184 EEC cases obtained at MD Anderson Cancer Center (MDACC) using receiver operating characteristic (ROC) curve analyses. Kaplan-Meier survival analysis was used to examine the association of the PSES score with patient outcome, and Ingenuity pathway analysis was used to identify relevant signaling pathways. Two-sided statistical tests were used. PSES robustly distinguished high- from low-stage tumors in the TCGA cohort (area under the ROC curve [AUC] = 0.74; 95% confidence interval [CI], 0.68 to 0.82) and in the validation cohort (AUC = 0.67; 95% CI, 0.58 to 0.76). Even among grade 1 or 2 tumors, PSES was significantly higher in high- than in low-stage tumors in both the TCGA (P = 0.005) and MDACC (P = 0.006) cohorts. Patients with a positive PSES score had significantly shorter progression-free survival than those with a negative PSES in the TCGA (hazard ratio [HR], 2.033; 95% CI, 1.031 to 3.809; P = 0.04) and validation (HR, 3.306; 95% CI, 1.836 to 9.436; P = 0.0007) cohorts. The ErbB signaling pathway was most significantly enriched in the PSES proteins and downregulated in high-stage tumors. PSES may provide clinically useful prediction of high-risk tumors and offer new insights into tumor biology in EEC.
Keywords: endometrial carcinoma, protein, protein scoring of EEC staging (PSES), stage
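A protein scoring scheme of this kind reduces to a weighted sum of marker expression values whose sign flags a predicted high-stage tumor; the protein names and weights below are purely illustrative, not the published PSES coefficients:

```python
def protein_score(expression, weights):
    """Weighted sum of protein expression values; a positive score
    flags a predicted high-stage (high-risk) tumor."""
    return sum(weights[p] * v for p, v in expression.items())

# Illustrative markers and weights (NOT the actual PSES panel)
weights = {"EGFR": 0.8, "ERBB2": 0.6, "PTEN": -0.7}
sample = {"EGFR": 1.2, "ERBB2": 0.5, "PTEN": 1.0}

score = protein_score(sample, weights)
high_risk = score > 0
```

The sign convention mirrors the abstract: patients with a positive score fall into the shorter progression-free-survival group.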
Procedia PDF Downloads 220
3298 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion
Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan
Abstract:
In this paper, a review of different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion is presented, leading to an experimental study to validate a selected prediction model. Most of these mathematical models depend on the mechanical behavior, chemical behavior, electrochemical behavior, or geometric aspects of the RC members during the corrosion process. The experimental program is designed to verify the accuracy of a mathematical model selected through a rigorous literature study. The program covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c), 0.4, 0.5, and 0.6, and two cover depths, 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of corrosion (accelerated or natural), and the amount of porous area that rust products can occupy before exerting expansive pressure on the surrounding concrete. The experimental results showed that the selected model agreed with the experimental output to within ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results showed that the mass loss is proportional to the negative half-cell potential readings obtained.
Additionally, a statistical analysis was carried out to determine the most influential factor affecting the time to corrosion of the reinforcement due to chloride diffusion. The factors considered were w/c, bar diameter, and cover depth. The analysis was performed using Minitab statistical software, which showed that cover depth has a more significant effect on the time to crack the concrete from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to predetermine the durability of RC structures vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion
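The factor analysis described above comes down to comparing mean responses across factor levels; a minimal main-effects sketch on toy time-to-crack data (the run values are invented for illustration, and the study's actual analysis was done in Minitab):

```python
def main_effect(data, factor):
    """Difference between the mean response at the 'high' and 'low'
    levels of a two-level factor. `data` is a list of (levels, response)."""
    high = [y for levels, y in data if levels[factor] == "high"]
    low = [y for levels, y in data if levels[factor] == "low"]
    return sum(high) / len(high) - sum(low) / len(low)

# (w/c, cover depth) levels -> time to crack (days); toy values only
runs = [
    ({"wc": "low", "cover": "low"}, 20),
    ({"wc": "high", "cover": "low"}, 16),
    ({"wc": "low", "cover": "high"}, 42),
    ({"wc": "high", "cover": "high"}, 38),
]
cover_effect = main_effect(runs, "cover")
wc_effect = main_effect(runs, "wc")
```

In this toy design the cover-depth effect dominates the w/c effect, mirroring the study's conclusion that cover depth is the most influential factor.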
Procedia PDF Downloads 218
3297 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments
Authors: Tahani Aljohani, Jialin Yu, Alexandra. I. Cristea
Abstract:
The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking learners directly is potentially disruptive and often ignored by them. Especially on booming Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach that uses linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. We tackle the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, treating sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN), and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore different word-level encoding functions. We implemented these methods on our MOOC dataset, on which they performed best, with a public sentiment analysis dataset further used to cross-examine the models' results.
Keywords: deep learning, data mining, gender prediction, MOOCs
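A single LSTM cell step, written out in plain scalar Python to show the gating that the sequence models above rely on; the weights and the token values are arbitrary illustrative numbers, not trained parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step: input (i), forget (f) and output (o) gates
    plus a candidate value (g) update the cell state c and hidden state h."""
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev)
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev)
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev)
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev)
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

# Arbitrary small weights for illustration
weights = {"wi": 0.5, "ui": 0.1, "wf": 0.5, "uf": 0.1,
           "wo": 0.5, "uo": 0.1, "wg": 0.5, "ug": 0.1}

h, c = 0.0, 0.0
for token_value in [1.0, -0.5, 0.3]:   # a toy embedded comment sequence
    h, c = lstm_step(token_value, h, c, weights)
```

The final hidden state h summarizes the sequence; in a gender classifier, it would feed a sigmoid output layer. Tree-structured variants such as SPINN apply similar cells along a parse tree instead of left to right.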
Procedia PDF Downloads 148
3296 A Computer-Aided System for Tooth Shade Matching
Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan
Abstract:
Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented by the dentist's visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth cause measurement defects with these devices, and the results acquired by devices with different measurement principles may be inconsistent. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided system, has developed rapidly, and advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging, a much cheaper approach than traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show their morphology and color texture. In recent decades, a method was recommended to compare the color of shade tabs captured with a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used together. Recent feature extraction techniques, however, are based on shape description and do not use color information.
Yet color is an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor used in combination with a shape descriptor does not need to contain spatial information, local histograms can be used. The local color histogram method remains reliable under photometric changes, geometric changes, and variations in image quality. Accordingly, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. The combination of these descriptors yields the state-of-the-art descriptor known as Color-SIFT, which is used in this study. Finally, the image feature vectors obtained from the quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching; in this study, SVMs are used as classifiers for color determination and shade matching. The experimental results of this method will be compared with other recent studies. It is concluded that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction
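One of the classifiers mentioned above, KNN, can be sketched on toy color histograms; the 3-bin histograms and shade labels are invented for illustration (the study's real features are quantized Color-SIFT vectors and its final classifier is an SVM):

```python
def knn_classify(query, examples, k=3):
    """Majority vote over the k nearest (histogram, shade_label) examples,
    using squared Euclidean distance between colour histograms."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(examples, key=lambda e: d2(query, e[0]))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# Toy 3-bin colour histograms for two hypothetical shade-guide tabs
examples = [
    ([0.8, 0.1, 0.1], "A1"),
    ([0.7, 0.2, 0.1], "A1"),
    ([0.2, 0.3, 0.5], "A3"),
    ([0.1, 0.4, 0.5], "A3"),
]
shade = knn_classify([0.75, 0.15, 0.10], examples, k=3)
```

The query histogram sits closest to the two "A1" tabs, so the 3-neighbour vote assigns it shade A1.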
Procedia PDF Downloads 444
3295 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will help food processors provide online quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the development and evaluation of the method over a moisture content range of 10 to 15 percent (w.b.) of the pasta. Prediction models based on partial least squares (PLS) regression were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, min-max normalization, and multiplicative scatter correction). Spectra of pasta samples were treated with these mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined by traditional methods (R2 = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775).
The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the calibration model developed.
Keywords: FT-NIR, pasta, moisture determination, food engineering
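The min-max normalization (MMN) pre-processing step that produced the best calibration model simply rescales each spectrum to the [0, 1] range; a minimal sketch on toy absorbance values:

```python
def min_max_normalize(spectrum):
    """Rescale a spectrum so its absorbance values span [0, 1];
    the MMN pre-processing applied before PLS calibration."""
    lo, hi = min(spectrum), max(spectrum)
    return [(v - lo) / (hi - lo) for v in spectrum]

raw = [0.42, 0.55, 0.61, 0.47, 0.39]   # toy NIR absorbances
normalized = min_max_normalize(raw)
```

Normalizing each spectrum this way removes offset and scale differences between samples before the PLS regression is fitted.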
Procedia PDF Downloads 258
3294 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption
Authors: M. François, L. Sigot, C. Vallières
Abstract:
Volatile organic compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-organic frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are attractive thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive adsorption sites. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were conducted in a 1 cm diameter glass column packed with a 2 cm MOF bed. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min to guarantee a suitable linear velocity. The acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). Breakthrough curves make it possible to understand the interactions between the MOF and the pollutant as well as the impact of HKUST-1 humidity on the adsorption process. Consequently, different MOF water content conditions were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material, without any pretreatment and containing 30% water, serves as a reference.
First, conclusions can be drawn by comparing the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. Saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined at C/Co = 10%, appears earlier for the rough material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the 10% breakthrough: the outlet concentration increases abruptly for the material with lower humidity, compared to a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what explains the shape of the breakthrough curves associated with the HKUST-1 pretreatments and which mechanisms take place in the adsorption process between the MOF, the pollutant, and water.
Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence
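The breakthrough time at C/Co = 10% discussed above can be read off a measured curve by linear interpolation between sampling points; a minimal sketch on toy breakthrough data (the time/ratio values are invented, not the study's measurements):

```python
def breakthrough_time(times, c_over_co, level=0.10):
    """Linearly interpolate the time at which C/Co first crosses `level`."""
    points = list(zip(times, c_over_co))
    for (t0, r0), (t1, r1) in zip(points, points[1:]):
        if r0 < level <= r1:
            return t0 + (level - r0) * (t1 - t0) / (r1 - r0)
    return None   # level never reached within the data

# Toy breakthrough data: time (h) vs outlet/inlet concentration ratio
times = [0.0, 0.5, 1.0, 1.5, 2.0]
ratio = [0.0, 0.02, 0.06, 0.20, 0.60]
t_b = breakthrough_time(times, ratio, level=0.10)
```

Applying the same function to curves measured at different water contents would reproduce the 0.75 h vs 1.4 h comparison reported above.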
Procedia PDF Downloads 238
3293 Role of Micro-Patterning on Stem Cell-Material Interaction Modulation and Cell Fate
Authors: Lay Poh Tan, Chor Yong Tay, Haiyang Yu
Abstract:
Micro-contact printing is a form of soft lithography that uses the relief patterns on a master polydimethylsiloxane (PDMS) stamp to form patterns of self-assembled monolayers (SAMs) of ink on the surface of a substrate through a conformal contact technique. Here, we adopt this method to print proteins of different dimensions on our biodegradable polymer substrates. We started by printing 20-500 μm scale lanes of fibronectin to engineer the shape of bone marrow-derived human mesenchymal stem cells (hMSCs). After 8 hours of culture, the hMSCs adopted elongated shapes, and upon analysis of the gene expression, genes commonly associated with myogenesis (GATA-4, MyoD1, cTnT and β-MHC) and neurogenesis (NeuroD, Nestin, GFAP, and MAP2) were up-regulated, whereas gene expression associated with osteogenesis (ALPL, RUNX2, and SPARC) was either down-modulated or remained at the nominal level. This is the first evidence that cellular morphology control via micropatterning could be used to modulate stem cell fate without external biochemical stimuli. We extended our studies to modulating the focal adhesions (FAs) instead of the macro shape of cells. Micro-contact printed islands of smaller dimensions were investigated. We successfully regulated the FAs into dense FAs and elongated FAs by micropatterning. Additionally, the combined effects of hard (40.4 kPa) and intermediate (10.6 kPa) PA gels and FA patterning on hMSC differentiation were studied. Results showed that FAs and matrix compliance play an important role in hMSC differentiation, that there is cross-talk between different physical stimuli, and that the significance of these stimuli can only be realized if they are combined at the optimum level.
Keywords: micro-contact printing, polymer substrate, cell-material interaction, stem cell differentiation
Procedia PDF Downloads 172
3292 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach
Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton
Abstract:
Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore, tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation will focus on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted by considering the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for the competition from neighbouring trees in different ways. The study also investigated how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, Pearson correlation coefficients were examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected for E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effect modelling approaches. Individual tree growth models were developed for thinned and unthinned stands separately. The developed models were validated using two approaches. In the first approach, models were validated using a subset of the data that was not used in model fitting. In the second approach, the model fitted to one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth. This prediction was improved when competition was included in the model.
The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices proved more accurate on validation than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number) or competitors within ~10 m (by distance) from the base of the subject tree are included in the model, which can reduce the time and cost of collecting information about the competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies in larger plots and with a larger number of plots than those used in the present study are recommended.
Keywords: competition, growth, model, thinning
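As an illustration of a spatial, distance-limited competition index of the kind discussed above, the sketch below computes a Hegyi-type index with a ~10 m neighbourhood cut-off. The index form and all tree values are assumptions for illustration; the abstract does not state which CI was selected.

```python
import numpy as np

def hegyi_index(subject_dbh, subject_xy, neighbours, radius=10.0):
    """Distance-weighted size-ratio competition index (Hegyi-type).

    neighbours: list of (dbh_cm, (x_m, y_m)) tuples; only trees within
    `radius` metres of the subject contribute, mirroring the ~10 m cut-off.
    """
    ci = 0.0
    sx, sy = subject_xy
    for dbh, (x, y) in neighbours:
        d = np.hypot(x - sx, y - sy)
        if 0.0 < d <= radius:
            ci += (dbh / subject_dbh) / d
    return ci

# Hypothetical subject tree (30 cm dbh, at the origin) with three neighbours
neigh = [(40.0, (3.0, 0.0)), (25.0, (0.0, 6.0)), (50.0, (12.0, 0.0))]
ci = hegyi_index(30.0, (0.0, 0.0), neigh)
print(round(ci, 3))  # the tree 12 m away falls outside the radius
```

The CI value would then enter the growth regression as an extra predictor alongside initial diameter and height.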
Procedia PDF Downloads 128
3291 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model and was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated by using an external database to avoid bias.
The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
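The calibration logic (regressing bottomhole temperature on the natural logarithm of a gas concentration, then scoring with RMSE and R) can be sketched with synthetic data. The coefficients, data, and the simple log-linear form below are placeholders, not the authors' ANN-derived geothermometers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (ln CO2 in mmol/mol, bottomhole T in deg C);
# not the authors' database, whose values the abstract does not give.
ln_co2 = rng.uniform(3.0, 6.5, 60)
bht_m = 120.0 + 35.0 * ln_co2 + rng.normal(0.0, 8.0, 60)  # "measured" T

# Calibrate a log-linear geothermometer T = a + b*ln(CO2) by least squares
A = np.column_stack([np.ones_like(ln_co2), ln_co2])
(a, b), *_ = np.linalg.lstsq(A, bht_m, rcond=None)

# Score the calibration the way the abstract describes: RMSE and R
bht_pred = a + b * ln_co2
rmse = np.sqrt(np.mean((bht_m - bht_pred) ** 2))
r = np.corrcoef(bht_m, bht_pred)[0, 1]
print(f"T = {a:.1f} + {b:.1f} ln(CO2), RMSE = {rmse:.1f} degC, R = {r:.3f}")
```

The same RMSE comparison, applied to an external database, is what ranks the new geothermometers against the sixteen published ones.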
Procedia PDF Downloads 352
3290 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method
Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park
Abstract:
3D shape models of existing structures are required for many purposes such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which is expensive and time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique to measure a 3D shape model quickly and accurately, and is used at construction sites and in cultural heritage management. However, there are many limits to processing a TLS point cloud, because the raw point cloud is a massive volume of data, and the capability of carrying out useful analyses is also limited with unstructured 3D points. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. This paper presents a clustering approach based on a fuzzy method for this objective. The Fuzzy C-Means (FCM) algorithm is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to a point cloud acquired with a Leica ScanStation C10/C5 at the test bed. The test bed was a bridge which connects the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea. It is about 32 m long and 2 m wide and is used as a pedestrian walkway between the two buildings. The 3D point cloud of the test bed was constructed from a TLS measurement. This data was divided by the segmentation algorithm into the individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are shown to be promising. Because the point cloud is segmented by member, the configuration of each member can be managed individually.
Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)
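The FCM update rules the paper relies on can be sketched in a few lines. This is a generic implementation run on a synthetic two-member cloud, not the authors' code, and it omits the similarity-driven cluster-merging step.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal FCM: returns (centers, membership matrix U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** -p
        U = inv / inv.sum(axis=1, keepdims=True)          # u_ik from distances
    return centers, U

# Two synthetic "members" of a point cloud (say, deck vs railing), xyz only
rng = np.random.default_rng(1)
deck = rng.normal([0.0, 0.0, 0.0], 0.3, (200, 3))
rail = rng.normal([5.0, 0.0, 1.0], 0.3, (200, 3))
centers, U = fuzzy_c_means(np.vstack([deck, rail]), c=2)
labels = U.argmax(axis=1)   # hard labels from the fuzzy memberships
```

On real bridge scans the clusters overlap far more, which is where the fuzzy memberships and the subsequent cluster merging earn their keep.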
Procedia PDF Downloads 234
3289 Real-Time Radar Tracking Based on Nonlinear Kalman Filter
Authors: Milca F. Coelho, K. Bousson, Kawser Ahmed
Abstract:
Accurately tracking an aerospace vehicle in a time-critical situation and in a highly nonlinear environment is one of the strongest interests within the aerospace community. The tracking is achieved by accurately estimating the state of a moving target, which is composed of a set of variables that can provide a complete status of the system at a given time. One of the main ingredients for good estimation performance is the use of efficient estimation algorithms. A well-known framework is the family of Kalman filtering methods, designed for prediction and estimation problems. The success of the Kalman Filter (KF) in engineering applications is mostly due to the Extended Kalman Filter (EKF), which is based on local linearization. Despite its popularity, the EKF presents several limitations. To address these limitations, and as a possible solution to tracking problems, this paper proposes the use of the Ensemble Kalman Filter (EnKF). Although the EnKF is extensively used in the context of weather forecasting and is recognized for producing accurate and computationally effective estimates on systems of very high dimension, it is almost unknown to the tracking community. The EnKF was initially proposed as an attempt to improve the error covariance calculation, which is difficult to implement in the classic Kalman Filter. Also, in the EnKF method the prediction and analysis error covariances have ensemble representations. These ensembles have sizes which limit the number of degrees of freedom, so that the filter error covariance calculations become much more practical for modest ensemble sizes. In this paper, a realistic simulation of radar tracking was performed, in which the EnKF was applied and compared with the Extended Kalman Filter.
The results suggest that the EnKF is a promising tool for tracking applications, offering advantages in terms of performance.
Keywords: Kalman filter, nonlinear state estimation, optimal tracking, stochastic environment
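A minimal sketch of the EnKF analysis step described above (the perturbed-observation variant, on a 1-D toy state rather than a radar scenario) could look like:

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic EnKF analysis step.

    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator;
    y: (n_obs,) measurement; R: (n_obs, n_obs) observation noise covariance.
    """
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)        # ensemble anomalies
    Pf = X.T @ X / (n - 1)                      # sample forecast covariance
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)             # Kalman gain
    # perturbed observations: one independent noise draw per member
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    return ensemble + (Y - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
# 1-D toy: true position 10.0, prior ensemble centred at 8.0
ens = rng.normal(8.0, 2.0, (500, 1))
H = np.array([[1.0]])
R = np.array([[0.5]])
ens_a = enkf_update(ens, H, np.array([10.0]), R, rng)
print(ens_a.mean())   # pulled toward the measurement, spread reduced
```

Because Pf is a sample covariance of the ensemble, no covariance propagation equation has to be derived or linearized, which is the practical advantage over the EKF noted above.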
Procedia PDF Downloads 147
3288 Effect of Dimensional Reinforcement Probability on Discrimination of Visual Compound Stimuli by Pigeons
Authors: O. V. Vyazovska
Abstract:
Behavioral efficiency is one of the main principles of success in nature. Accuracy of visual discrimination is determined by attention, learning experience, and memory. Under experimental conditions, pigeons' responses to visual stimuli presented on a monitor screen are behaviorally manifested by pecking or not pecking the stimulus, by the number of pecks, by reaction time, etc. The higher the probability of rewarding, the more likely pigeons are to respond to the stimulus. We trained 8 pigeons (Columba livia) on a stagewise go/no-go visual discrimination task. Sixteen visual stimuli were created from all possible combinations of four binary dimensions: brightness (dark/bright), size (large/small), line orientation (vertical/horizontal), and shape (circle/square). In the first stage, we presented S+ and 4 S- stimuli: the first differed in all 4 dimensional values from S+, the second shared the brightness dimension with S+, the third shared brightness and orientation with S+, and the fourth shared brightness, orientation and size. Then all 16 stimuli were added. Pigeons correctly rejected 6-8 of the 11 newly added S- stimuli at the beginning of the second stage. The results revealed that pigeons' behavior at the beginning of the second stage was controlled by the probabilities of rewarding for the 4 dimensions learned in the first stage. More or fewer mistakes in dimension discrimination at the beginning of the second stage depended on the number of S- stimuli sharing the dimension with S+ in the first stage. A significant inverse correlation was found between the number of S- stimuli sharing dimension values with S+ in the first stage and the dimensional learning rate at the beginning of the second stage. Pigeons were more confident in discriminating the shape and size dimensions; the mistakes they made at the beginning of the second stage were not associated with these dimensions.
Thus, these results help elucidate the principles of dimensional stimulus control during learning of compound multidimensional visual stimuli.
Keywords: visual go/no go discrimination, selective attention, dimensional stimulus control, pigeon
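The reported inverse correlation can be illustrated with a toy calculation; all numbers below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical counts of S- stimuli sharing each dimension's value with S+
# in stage 1 (brightness shared 3x, orientation 2x, size 1x, shape 0x), and
# hypothetical learning rates for those dimensions at the start of stage 2.
shared_with_splus = np.array([3, 2, 1, 0])          # brightness, orientation, size, shape
learning_rate = np.array([0.2, 0.4, 0.7, 0.9])      # illustrative values

r = np.corrcoef(shared_with_splus, learning_rate)[0, 1]
print(round(r, 3))  # strongly negative, i.e. an inverse correlation
```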
Procedia PDF Downloads 141
3287 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using an open-source graph-based EHR dataset, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch-Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually perform the node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
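The propagation rule a GCN layer applies during node classification can be sketched without the PyG dependency. The tiny patient-condition graph and the weights below are purely illustrative, not the Synthea schema or the authors' model.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(len(A))                      # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))          # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy heterogeneous graph: nodes 0-1 are patients, 2-3 are conditions;
# edges link patients to their recorded conditions.
A = np.array([[0, 0, 1, 0],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.eye(4)                   # one-hot node features
W = np.full((4, 2), 0.5)        # illustrative layer weights
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 2): one 2-d embedding per node after one propagation
```

Stacking such layers (in PyG, trained with the autoencoder objective mentioned above) yields the node embeddings used for the final condition classification.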
Procedia PDF Downloads 86
3286 Influencing Factors and Mechanism of Patient Engagement in Healthcare: A Survey in China
Authors: Qing Wu, Xuchun Ye, Kirsten Corazzini
Abstract:
Objective: It is increasingly recognized that patients' rational and meaningful engagement in healthcare could make important contributions to their health care and safety management. However, recent evidence indicated that patients' actual roles in healthcare did not match their desired roles, and many patients reported a less active role than desired, which suggests that patient engagement in healthcare may be influenced by various factors. This study aimed to analyze the factors influencing patient engagement and to explore the influencing mechanism, which is expected to contribute to strategy development for patient engagement in healthcare. Methods: The research framework was developed on the basis of a literature review and theoretical analysis. According to this framework, a cross-sectional survey was employed using a questionnaire on the behavior and willingness of patient engagement in healthcare, the Chinese version of the All Aspects of Health Literacy Scale, the Facilitation of Patient Involvement Scale, the Wake Forest Physician Trust Scale, and other scales related to influencing factors. A convenience sample of 580 patients was recruited from 8 general hospitals in Shanghai, Jiangsu Province, and Zhejiang Province. Results: The cross-sectional survey indicated that the mean score for patient engagement behavior was 4.146 ± 0.496, and the mean score for willingness was 4.387 ± 0.459. The level of patient engagement behavior was inferior to the willingness to be involved in healthcare (t = 14.928, P < 0.01). The influencing mechanism model of patient engagement in healthcare was constructed by path analysis. The path analysis revealed that patient attitude toward engagement, patients' perception of facilitation of engagement, and health literacy were direct predictors of patients' willingness to engage, with standardized path coefficients of 0.341, 0.199, and 0.291, respectively.
Patients' trust in their physician and the willingness to engage were direct predictors of patient engagement, with standardized path coefficients of 0.211 and 0.641, respectively. Patient attitude toward engagement, patients' perception of facilitation, and health literacy were indirect predictors of patient engagement, with standardized path coefficients of 0.219, 0.128, and 0.187, respectively. Conclusions: Patients' engagement behavior did not match their willingness to be involved in healthcare. The influencing mechanism model of patient engagement in healthcare was constructed. Patient attitude toward engagement, patients' perception of facilitation of engagement, and health literacy exerted an indirect positive influence on patient engagement through patients' willingness to engage. Patients' trust in their physician and the willingness to engage had a direct positive influence on patient engagement. Patient attitude toward engagement, patients' perception of physician facilitation of engagement, and health literacy were the factors influencing patients' willingness to engage. The results of this study provide valuable evidence for guiding the development of strategies to promote patients' rational and meaningful engagement in healthcare.
Keywords: healthcare, patient engagement, influencing factor, mechanism
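A quick check on the path model's arithmetic: in a path analysis, each indirect effect is the product of the path into willingness and the willingness-to-engagement path (0.641), which the reported figures do satisfy.

```python
# Direct paths onto willingness, as reported in the survey results
direct_on_willingness = {
    "attitude": 0.341,
    "perceived facilitation": 0.199,
    "health literacy": 0.291,
}
willingness_on_engagement = 0.641  # direct path: willingness -> engagement

# Indirect effect on engagement = (path onto willingness) x 0.641
indirect = {k: round(v * willingness_on_engagement, 3)
            for k, v in direct_on_willingness.items()}
print(indirect)
# {'attitude': 0.219, 'perceived facilitation': 0.128, 'health literacy': 0.187}
```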
Procedia PDF Downloads 156
3285 The Effect of Wet Cooling Pad Thickness and Geometric Configuration to Enhance Evaporative Cooler Saturation Efficiency: A Review
Authors: Biruk Abate
Abstract:
Evaporative cooling occurs when air with high temperature and reduced humidity passes over a wet porous surface; a higher degree of cooling is achieved for the storage of fruits and vegetables due to the greater rate of evaporation. The main objective of this review is to understand the effect of evaporative pad thickness and geometric configuration on the saturation efficiency of an evaporative cooler, and to state some related factors affecting the performance of the system. In this overview, the selection of pad thickness and geometric shape with suitable heat and mass transfer characteristics and water holding capacity was reviewed, as these parameters are important for the saturation efficiency of evaporative cooling. Increasing the cooling pad thickness increases the wet-bulb saturation effectiveness at a given face velocity. Increasing the ambient temperature, inlet air speed, and ambient air humidity decreases the wet-bulb effectiveness, which increases with increasing pad length. Increasing the ambient temperature and inlet air velocity decreases the humidity ratio, which increases with increasing ambient air humidity and pad length. Increasing the temperature-humidity index is possible with increasing ambient temperature, inlet air velocity, ambient air humidity, and pad length. Generally, all materials having a higher wetted surface area per unit volume give higher efficiency. For the same shape, materials with higher thickness increase the wetted surface area for better mixing of air and water to give higher efficiency, and this in turn helps to store fruits and vegetables.
Keywords: degree of cooling, heat and mass transfer, evaporative cooling, porous surface
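The saturation (wet-bulb) efficiency discussed throughout this review is the standard ratio below; the temperatures in the example are hypothetical.

```python
def saturation_efficiency(t_db_in, t_db_out, t_wb_in):
    """Wet-bulb saturation efficiency of an evaporative cooling pad.

    eta = (T_db,in - T_db,out) / (T_db,in - T_wb,in); eta = 1 means the
    outlet air is cooled all the way down to the inlet wet-bulb temperature.
    """
    return (t_db_in - t_db_out) / (t_db_in - t_wb_in)

# Hypothetical figures: 38 C ambient dry-bulb, 24 C wet-bulb, 27 C outlet
eta = saturation_efficiency(38.0, 27.0, 24.0)
print(round(eta, 3))  # 0.786
```

The trends summarized above (thicker pads raise eta; higher face velocity lowers it) act through the outlet dry-bulb temperature in this ratio.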
Procedia PDF Downloads 130
3284 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials
Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié
Abstract:
Mould growth in living environments has been widely reported for decades all throughout the world. A higher level of moisture in housing can lead to building degradation, chemical component emissions from construction materials, as well as enhanced mould growth within the envelope elements or on the internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, seen as an effective lever to reduce the environmental impact of the building sector. However, bio-based materials are also hygroscopic materials: when in contact with the humid air of a surrounding environment, their porous structures enable a better capture of water molecules, thus providing a more suitable background for mould growth. Many studies have been conducted to develop reliable models able to predict mould appearance, growth, and decay on many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth in experimental settings as well as in occupied buildings. The difficulty of considering the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth.
Several studies published in the literature, from the agri-food and automotive sectors, were selected and analysed, as the methodologies deployed there appeared promising.
Keywords: bio-based materials, mould growth, numerical prediction, reliability approach
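As one example of the deterministic models whose inputs a reliability approach would treat as uncertain, the VTT mould model's critical relative humidity threshold (after Hukka and Viitanen) is often quoted as the polynomial below; the coefficients are taken from the literature and should be checked against the original source.

```python
def critical_rh(temp_c):
    """Lower RH limit (%) for mould growth on sensitive materials,
    after the VTT (Hukka & Viitanen) model: a cubic in temperature up
    to 20 C, a constant 80 %RH above, no growth outside 0-50 C.
    """
    if temp_c < 0.0 or temp_c > 50.0:
        return float("inf")     # growth assumed impossible outside this range
    if temp_c <= 20.0:
        return -0.00267 * temp_c**3 + 0.160 * temp_c**2 - 3.13 * temp_c + 100.0
    return 80.0

print(round(critical_rh(10.0), 1))  # roughly 82 %RH at 10 C
print(critical_rh(25.0))            # 80 %RH above 20 C
```

In a reliability framing, the indoor T and RH histories (and even these coefficients) would become random variables rather than fixed inputs to such a threshold.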
Procedia PDF Downloads 46
3283 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each carry somewhat different representations of the above processes, can be combined to reduce the collective local biases in space, in time, and for different variables from the different models. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression; these weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. Multi-model superensemble training using similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested with the above-mentioned approaches. The comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall than the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
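The training-phase weight estimation described above (least-squares weights on model anomalies, recombined about the observed mean in the forecast phase) can be sketched with synthetic data; the five "models" below are random stand-ins, not the TIGGE GCMs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training period: observed rainfall plus 5 biased, noisy "models"
n_days, n_models = 120, 5
obs = rng.gamma(2.0, 4.0, n_days)                   # mm/day, non-negative
bias = rng.normal(0.0, 2.0, n_models)               # per-model systematic bias
fc = obs[:, None] + bias + rng.normal(0.0, 3.0, (n_days, n_models))

# Superensemble training: least-squares weights on anomalies about the means
fc_anom = fc - fc.mean(axis=0)
obs_anom = obs - obs.mean()
w, *_ = np.linalg.lstsq(fc_anom, obs_anom, rcond=None)

# Recombine: weighted anomalies added back to the observed climatology
sup = obs.mean() + fc_anom @ w
rmse_sup = np.sqrt(np.mean((sup - obs) ** 2))
rmse_mean = np.sqrt(np.mean((fc.mean(axis=1) - obs) ** 2))
print(rmse_sup < rmse_mean)   # superensemble beats the simple ensemble mean
```

The anomaly formulation removes each model's bias before weighting, which is why the superensemble outperforms the raw ensemble mean even in this toy setting.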
Procedia PDF Downloads 139
3282 Low Temperature PVP Capping Agent Synthesis of ZnO Nanoparticles by a Simple Chemical Precipitation Method and Their Properties
Authors: V. P. Muhamed Shajudheen, K. Viswanathan, K. Anitha Rani, A. Uma Maheswari, S. Saravana Kumar
Abstract:
We report a simple and low-cost chemical precipitation method adopted to prepare zinc oxide (ZnO) nanoparticles using polyvinyl pyrrolidone (PVP) as a capping agent. Differential Scanning Calorimetry (DSC) and Thermo-Gravimetric Analysis (TGA) were applied to the dried gel sample to record the phase transformation temperature of zinc hydroxide, Zn(OH)₂, to zinc oxide (ZnO), yielding an annealing temperature of 80 °C. The thermal, structural, morphological and optical properties were characterized by different techniques: DSC-TGA, X-Ray Diffraction (XRD), Fourier Transform Infrared spectroscopy (FTIR), micro-Raman spectroscopy, UV-Visible absorption spectroscopy (UV-Vis), Photoluminescence spectroscopy (PL) and Field Emission Scanning Electron Microscopy (FESEM). The X-ray diffraction results confirmed the wurtzite hexagonal structure of the ZnO nanoparticles. The two intense peaks at 160 and 432 cm⁻¹ in the Raman spectrum are mainly attributed to the first-order modes of the wurtzite ZnO nanoparticles. The energy band gap obtained from the UV-Vis absorption spectra shows a blue shift, which is attributed to an increase in carrier concentration (Burstein-Moss effect). Photoluminescence studies of the single-crystalline ZnO nanoparticles show a strong peak centered at 385 nm, corresponding to the near-band-edge emission in the ultraviolet range. Mixed grape-like, spherical, hexagonal and rock-like morphologies were observed in FESEM. The results showed that PVP is a suitable capping agent for the preparation of ZnO nanoparticles by a simple chemical precipitation method.
Keywords: ZnO nanoparticles, simple chemical precipitation route, mixed shape morphology, UV-visible absorption, photoluminescence, Fourier transform infrared spectroscopy
Procedia PDF Downloads 443
3281 Solubility and Dissolution Enhancement of Poorly Soluble Drugs Using Biosericin
Authors: Namdeo Jadhav, Nitin Salunkhe
Abstract:
Currently, sericin is treated as a waste of the sericulture industry, especially at the reeling stage. In view of its promising physicochemical properties, an attempt has been made to explore pharmaceutical applications of waste sericin in the fabrication of medicated solid dispersions. Solid dispersions (SDs) of poorly soluble drugs (lornoxicam, meloxicam and felodipine) were prepared by spray drying, solvent evaporation, ball milling and physical kneading at drug:sericin mass ratios of 1:0.5, 1:1, 1:1.5, 1:2, 1:2.5 and 1:3 w/w, and were investigated by solubility, ATR-FTIR, XRD and DSC, micromeritics and tabletability, surface morphology, and in-vitro dissolution. It was observed that sericin improves the solubility of the drugs 8- to 10-fold compared to the pure drugs. The presence of hydrogen bonding between the drugs and sericin was confirmed from the ATR-FTIR spectra. Among the methods, the spray-dried (1:2 w/w) SDs showed a fully amorphous state, representing molecularly distributed drug, as confirmed by the XRD and DSC studies. Spray-dried meloxicam SDs showed better compressibility and compactibility. Microphotographs of the spray-dried batches of lornoxicam (SDLX) and meloxicam (SDMX) SDs showed bowl-shaped particles, and bowl-shaped plus spherical particles, respectively, while the spray-dried felodipine SDs (SDFL) showed a spherical shape. The SDLX, SDMX and SDFL (1:2 w/w) displayed better dissolution performance than the SDs prepared by the other methods. Conclusively, the hydrophilic matrix of sericin can be used to deliver poorly water-soluble drugs, and its aerodynamic shape may show great potential for various drug delivery applications. If established as a pharmaceutical excipient, sericin holds the potential to revolutionise the economics of the pharmaceutical industry and of sericulture farming, especially in Asian countries.
Keywords: biosericin, poorly soluble drugs, solid dispersion, solubility and dissolution improvement
Procedia PDF Downloads 256
3280 A Study of Lapohan Traditional Pottery Making in Selakan Island, Semporna Sabah: An Initial Framework
Authors: Norhayati Ayob, Shamsu Mohamad
Abstract:
This paper aims to provide an initial background on the process of making traditional ceramic pottery, focusing on the materials used and the influence of cultural heritage. Ceramic pottery is one of the hallmarks of Sabah's heirloom culture, not only used for cooking and storage containers but also closely linked with folk culture and heritage. The Bajau Laut ethnic community of Semporna, better known as the Sea Gypsies, are mostly boat dwellers who work as fishermen along the coast. This community is famous for its own artistic traditional heirlooms, especially the traditional hand-made clay stove called the Lapohan. In the daily life of the Bajau Laut community, the Lapohan (clay stove) is used to prepare meals and as a food warmer while at sea. The Lapohan also conveys the symbolic meaning of natural objects, portraying the identity and values of the Bajau Laut community. While the basic process of making potterywares is much the same for people across the world, it is crucial to consider that different ethnic groups may have their own styles and choices of raw materials. Furthermore, it is still unknown why and how the Bajau Laut of Semporna started making their own pottery, a practice that has survived to this day by depending heavily on the raw materials available in Semporna. In addition, an emerging problem faced by pottery makers in Sabah is the absence of young successors to continue the heirloom legacy. Therefore, this research aims to explore traditional pottery making in Sabah by investigating the history of Lapohan pottery and proposing a classification of Lapohan based on the designs and motifs of traditional pottery identified throughout the study. It is postulated that different techniques and forms of making traditional pottery may produce different types of pottery in terms of surface decoration, shape, and size, portraying different cultures.
This study will be conducted on Selakan Island, Semporna, the only location where Lapohan making still takes place. The study follows the chronological process of making pottery and the taboos surrounding the preparation of the clay, forming, decoration techniques, motif application, and firing techniques. The relevant information will be gathered through field study, including observation, in-depth interviews, and video recording. In-depth interviews will be conducted with several potters, and the conversations and the pottery-making process will be recorded in order to understand the actual process of making Lapohan. The findings are expected to identify several types of Lapohan based on different designs and cultures; for example, a clay stove with a flat-shaped or round-shaped top will be labelled with a suitable name based on its culture. In conclusion, it is hoped that this study will contribute to the conservation of traditional pottery making in Sabah and to the preservation of its culture and heirlooms for future generations.
Keywords: Bajau Laut, culture, Lapohan, traditional pottery
Procedia PDF Downloads 188
3279 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model
Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu
Abstract:
The wide range of industrial applications involving boiling flows motivates the need to establish fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, a fractal model, a force-balance approach, and a mechanistic frequency model are used to predict the nucleation site density, bubble departure diameter, and bubble departure frequency, respectively. The wall heat flux partitioning closures were modified to account for bubbles sliding along the wall before lift-off, which commonly occurs in flow boiling. The simulation was performed with the two-fluid model, and the standard k-ω SST model was selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The predicted void fraction and Interfacial Area Concentration (IAC) are in good agreement with the experimental data; however, the predicted bubble velocity and Sauter Mean Diameter (SMD) are over-predicted. This over-prediction may be caused by considering only dispersed, spherical bubbles in the simulations. In future work, important bubble mechanisms, such as merging and shrinking while sliding on the heated wall, will be incorporated into the mechanistic model to extend its capability to a wider range of flow predictions.
Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model
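The wall heat flux partitioning idea described in this abstract can be sketched as a simple calculation. The sketch below uses the classic RPI-style split into convection, quenching, and evaporation components; the closure expressions and all numerical values are illustrative assumptions, not the improved closures or data of this study.

```python
import math

# Illustrative RPI-style wall heat flux partitioning: the total wall heat
# flux is split into single-phase convection, quenching, and evaporation.
# All closures and parameter values below are simplified assumptions.

def wall_heat_partition(h_conv, dT_wall, A_b, rho_l, cp_l, k_l,
                        f_dep, d_dep, n_sites, rho_v, h_fg):
    """Return (q_conv, q_quench, q_evap) in W/m^2."""
    # Single-phase convection on the wall fraction not influenced by bubbles
    q_conv = (1.0 - A_b) * h_conv * dT_wall
    # Quenching: transient conduction into liquid replacing departing bubbles
    q_quench = A_b * 2.0 * f_dep * math.sqrt(
        k_l * rho_l * cp_l / (math.pi * f_dep)) * dT_wall
    # Evaporation: latent heat carried away by departing bubbles
    q_evap = (math.pi / 6.0) * d_dep**3 * f_dep * n_sites * rho_v * h_fg
    return q_conv, q_quench, q_evap

# Hypothetical near-5-bar water conditions (assumed, for illustration only)
q_c, q_q, q_e = wall_heat_partition(
    h_conv=5000.0, dT_wall=8.0, A_b=0.3,
    rho_l=915.0, cp_l=4870.0, k_l=0.63,
    f_dep=50.0, d_dep=0.6e-3, n_sites=1.0e6,
    rho_v=2.7, h_fg=2.1e6)
print(q_c + q_q + q_e)  # total wall heat flux, W/m^2
```

The modification discussed in the abstract — accounting for bubble sliding before lift-off — would enter such a partition through an enlarged bubble influence area and an additional sliding heat-transfer term.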
Procedia PDF Downloads 318
3278 Numerical Analysis of a Strainer Using Porous Media Technique
Authors: Ji-Hoon Byeon, Kwon-Hee Lee
Abstract:
A strainer filter blocks the inflow of impurities while mixed fluid enters or exits the piping. The filter of a strainer has a perforated structure, so a pressure drop and a velocity change necessarily occur when the mixed fluid passes through it. It is possible to predict the pressure drop and velocity change of a strainer numerically by modelling every perforated plate in full. However, if the perforated plate exceeds a certain size, such an analysis becomes difficult to perform, and its accuracy cannot always be guaranteed. In this study, we predict the pressure drop and velocity change using the porous media technique, obtaining an equivalent resistance without modelling the actual perforation shape of the strainer. ANSYS CFX, a commercial software package, is used for the numerical analysis. The procedure is as follows. First, a unit pattern of the perforated plate is modelled, and the pressure drop is analysed over a range of velocities with symmetry conditions applied at the walls. Second, since the pressure drop is a quadratic function of the velocity, the viscous resistance and the inertial resistance of the perforated plate are obtained from the fitted pressure-velocity relationship. Third, the calculated resistance values are assigned to a flat plate implemented as a two-dimensional porous medium, and the accuracy is verified by comparing the pressure drop and velocity change. Fourth, using the resistance values obtained for the perforated plate, the pressure drop and velocity change in the full strainer model are analysed. Using the porous media technique, it is found that the pressure drop and velocity change can be predicted in a relatively short time without modelling the overall shape of the filter.
Acknowledgements: This work was supported by the Valve Center under the Regional Innovation Center (RIC) Program of the Ministry of Trade, Industry & Energy (MOTIE).
Keywords: strainer, porous media, CFD, numerical analysis
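The second step of the procedure above — extracting viscous and inertial resistance coefficients from the quadratic pressure-velocity relation of the unit pattern — can be sketched as a small least-squares fit. The sample (velocity, pressure drop) data, fluid properties, and plate thickness below are made-up placeholders, not values from this study.

```python
import numpy as np

# Sketch of extracting equivalent porous-media resistances from unit-pattern
# CFD results. The pressure drop across a perforated plate follows a
# quadratic law, dp = A*v + B*v^2, where A relates to viscous resistance
# and B to inertial resistance. The samples below are illustrative only.
v = np.array([0.5, 1.0, 2.0, 4.0, 6.0])            # inlet velocity, m/s
dp = np.array([30.0, 70.0, 180.0, 520.0, 1020.0])  # pressure drop, Pa

# Least-squares fit of dp = A*v + B*v^2 (no constant term)
M = np.column_stack([v, v**2])
(A, B), *_ = np.linalg.lstsq(M, dp, rcond=None)

mu = 1.0e-3    # dynamic viscosity, Pa*s (assumed water)
rho = 998.0    # density, kg/m^3 (assumed water)
L = 0.005      # perforated-plate thickness, m (assumed)

# Porous-media coefficients from dp/L = (mu/K)*v + C2*(rho/2)*v^2
inv_K = A / (mu * L)      # viscous resistance 1/K, 1/m^2
C2 = 2.0 * B / (rho * L)  # inertial loss coefficient, 1/m
print(inv_K, C2)
```

The resulting `inv_K` and `C2` are the two coefficients a porous-media zone in a CFD solver typically requires, so they can be assigned directly to the equivalent flat plate in step three.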
Procedia PDF Downloads 371
3277 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to improve SCS transparency, security, durability, and process integrity, since SCS data is not always visible, available, or trusted. The cost of operating BT in an SCS is a common problem for many organizations. These costs must be estimated, as they can affect existing cost-control strategies. To account for system and deployment costs, one hurdle must be overcome: in most cases, the costs of developing and running BT in an SCS are not yet clear. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total cost of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT offers an economic advantage. The first purpose of this research is to identify the main BT installation cost components in an SCS needed for a deeper cost analysis; we then identify and categorize the main groups of cost components in more detail so they can be used in the prediction process. The second objective is to determine the Supervised Learning technique best suited to predicting the costs of developing and running BT in an SCS for a particular case study. The last aim is to investigate how the running cost of BT contributes to the total cost of the SCS. 2. Work Performed: Supervised Learning, applied successfully in various fields, is a method of preparing the data, training a model on labelled examples, and using it to predict an outcome measurement for previously unseen input data. The following steps are conducted to pursue the objectives of this study. The first step is a literature review to identify the different cost components of BT installation in an SCS.
Based on the literature review, we then choose Supervised Learning methods suitable for BT installation cost prediction in an SCS. According to the literature, Supervised Learning algorithms that provide a powerful tool for classifying BT installation components and predicting BT installation cost include the Support Vector Regression (SVR) algorithm, the Back-Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will identify the model with the best predictive performance for finding the minimum BT installation cost in an SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of Supervised Learning algorithms. We will first select a case study in the field of BT-enabled SCS, and then use Supervised Learning algorithms to predict the BT installation cost in the SCS. We will continue by identifying the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
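The SVR-based prediction step outlined above can be sketched as follows. The feature columns (e.g. number of nodes, transaction volume, integration effort) and the synthetic training data are illustrative assumptions standing in for the case-study data this abstract proposes to collect.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Sketch of SVR-based BT installation cost prediction. The three feature
# columns are made-up cost drivers; the target is a synthetic "installation
# cost" with a known relationship plus noise, for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = (50 + 120 * X[:, 0] + 80 * X[:, 1] ** 2 + 30 * X[:, 2]
     + rng.normal(0, 5, 200))

# Scaling matters for RBF-kernel SVR, hence the pipeline
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X, y)

# Predict the installation cost for a new hypothetical deployment
new_project = np.array([[0.5, 0.7, 0.2]])
print(model.predict(new_project))
```

In the study itself, the same pattern would be repeated for the BP and ANN models, with the best predictive performance selected by comparing held-out errors rather than training fit.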
Procedia PDF Downloads 122