Search results for: accuracy measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6080

260 A 'Systematic Literature Review' of Specific Types of Inventory Faced by the Management of Firms

Authors: Rui Brito

Abstract:

This contribution is a literature review of inventory management, a relevant topic for firms because of its substantial use of capital, with implications for profitability in an increasingly competitive and globalized world. Firms seek small inventories in order to reduce holding costs, namely opportunity cost, warehousing and handling costs, deterioration, and obsolescence, but larger inventories are required for several reasons, such as customer service, ordering cost, transportation cost, payments to suppliers that reduce unit costs or take advantage of an expected price increase, and equipment setup cost. Management must therefore address a trade-off between small and large inventories. This literature review concerns three types of inventory (spare parts, safety stock, and vendor-managed) whose management usually lies beyond the scope of logistics. The methodology consisted of an online search of databases of scientific documents in English, namely Elsevier, Springer, Emerald, Wiley, and Taylor & Francis, excluding books unless edited, using search engines such as Google Scholar and B-on. The search was based on three keywords/strings (themes) which had to appear in the article title, indicating that the themes were highly relevant to the researchers. The search period was 2009 to 2018, with the aim of collecting between twenty and forty relevant studies for each keyword/string. Documents were sorted by relevance, and to prevent the exclusion of more recent articles, whose lower citation counts are partly due to their having had less time to be cited, the search period was divided into two sub-periods (2009-2015 and 2016-2018). The number of surveyed articles per theme varied from 40 to 200, and the number of citations of those articles varied more widely, from 3 to 216. Selected articles from the three themes were analyzed, and the seven most-cited articles of the first sub-period and the three most-cited of the second sub-period were read in full to prepare a synopsis of each article. Overall, the findings show that the majority of the articles propose models, mostly mathematical, although with different sub-types for each theme. Almost all articles suggest further studies, some to be undertaken by their own authors, which widens the diversity of the previous research. Identified research gaps concern the use of surveys to find out which models firms actually use, the reasons for not using the models with the best performance and accuracy, and the level of satisfaction with the outcomes of inventory management and its effect on the firm's overall performance. The review ends with the limitations and contributions of the study.

Keywords: inventory management, safety stock, spare parts inventory, vendor managed inventory

Procedia PDF Downloads 74
259 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning

Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih

Abstract:

Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and of the disease types treated by aromatic medicinal plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types treated by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. To alleviate these challenges, a few studies have been conducted in the area. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts together with their corresponding disease types. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their disease types using computer vision technology. Therefore, this research developed a model for the classification of aromatic medicinal plant parts and their disease types by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used plant parts, besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128 x 128 x 3. Five cutting-edge models were trained: a convolutional neural network, Inception V3, Residual Neural Network, Mobile Network (MobileNet), and Visual Geometry Group. These models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracies of 99.8% and 98.7%, respectively.
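To illustrate the training setup described above, the following is a minimal transfer-learning sketch, not the authors' code: a pre-trained Inception V3 backbone on 128 x 128 x 3 leaf images with an 80/20 train/validation split. The dataset path, class count, and hyperparameters are assumptions, and the task is shown as single-label classification for brevity (a multi-label variant would use sigmoid outputs with binary cross-entropy).

# Hypothetical sketch of the transfer-learning pipeline described in the abstract.
# Dataset path, class count, and hyperparameters are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (128, 128)          # RGB leaf images, 128 x 128 x 3 as in the abstract
NUM_CLASSES = 10               # assumed number of plant-part / disease-type labels

# 80/20 train/validation split from a folder of labelled images (assumed layout)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "aromatic_plants/", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "aromatic_plants/", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

# Pre-trained Inception V3 backbone, frozen; only the new classification head is trained
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)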

Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network

Procedia PDF Downloads 147
258 Prediction of Pile-Raft Responses Induced by Adjacent Braced Excavation in Layered Soil

Authors: Linlong Mu, Maosong Huang

Abstract:

In urban areas, the soil deformation induced by excavations often damages the surrounding structures, so displacement control becomes a critical criterion in foundation design in order to protect those structures. Evaluating the damage potential that an excavation poses to surrounding structures usually relies on the finite element method (FEM) because of the complexity of the excavation and the variety of the surrounding structures. Moreover, evaluating the influence of an excavation on surrounding structures is a three-dimensional problem, and it is now well recognized that the small-strain behaviour of the soil significantly influences the response of the excavation. Three-dimensional FEM that accounts for small-strain soil behaviour is a very complex method and is difficult for engineers to use. It is therefore important to develop a simplified method that allows engineers to predict the influence of excavations on surrounding structures. Based on large-scale finite element calculations with a small-strain soil model, coupled with inverse analysis, an empirical method is proposed to calculate the three-dimensional soil movement induced by a braced excavation. The empirical method captures the small-strain behaviour of the soil and is suitable for layered soil. The free-field soil movement is then applied to the pile to calculate its response in both the vertical and horizontal directions. The asymmetric solutions for problems in a layered elastic half-space are employed to solve the interactions between soil points. Both vertical and horizontal pile responses are solved with the finite difference method based on elastic theory. Interactions among the nodes along a single pile, pile-pile interactions, pile-soil-pile interactions, and soil-soil interactions are all taken into account to improve the accuracy of the method. For passive piles, shadow effects are also included. Finally, the restraints imposed by the raft on the piles and the soil are summarized as follows: (1) the sum of the internal forces between the elements of the raft and the elements of the foundation, including piles and soil surface elements, equals zero; (2) the deformations of the pile heads and of the soil surface elements are the same as the deformations of the corresponding elements of the raft. Validation is carried out by comparing the results of the proposed method with results from model tests, FEM, and the existing literature. The comparisons show that the results of the proposed method agree very well with those of the other methods. The method proposed herein is suitable for predicting the vertical and horizontal responses of a pile-raft foundation induced by braced excavation in layered soil when the deformation is small. However, more data are needed to verify the method before it can be used in practice.

Keywords: excavation, pile-raft foundation, passive piles, deformation control, soil movement

Procedia PDF Downloads 204
257 Cross-Sectional Associations between Deprivation Status and Physical Activity, Dietary Behaviours, Health-Related Variables, and Health-Related Quality of Life among Children Aged 9-11 Years

Authors: Maria Cardova

Abstract:

Aim and objectives: The purpose of this study was to explore to what extent deprivation status influences children's physical activity, dietary behaviour, and health outcomes such as weight status. Background: The United Kingdom's childhood obesity rates are currently ranked among the highest in Europe, and North West England has some of the highest rates of childhood obesity in the country. Data from the UK Millennium Cohort Study suggest a deprivation gradient to childhood obesity in England, with obesity rates highest in the most deprived areas. Traditionally, health has been framed as an individual matter, but the contemporary stance is that the health behaviours affecting obesity are influenced by a broad range of factors operating at multiple levels. According to the socio-ecological model of health behaviour, differences in obesity rates and health outcomes are likely explained by differences in lifestyle behaviours, including physical activity and dietary behaviours. However, higher rates of obesity among deprived children are not necessarily due to physical inactivity; in fact, the most socially disadvantaged children are often the most physically active. Poor diet, including high consumption of fast food and sugar-sweetened beverages and low consumption of fruit and vegetables, has been found to be most prevalent among adolescents living in poverty. Methods: This study adopted a quantitative approach, combining basic demographic data, anthropometry, and questionnaires. Children (N = 165, mean age = 10.04 years; 53.33% female; 46.66% male) completed survey packs during the school day, including the KIDSCREEN, Youth Activity Profile, Beverage and Snack Questionnaire, and Child Body Image Scale questionnaires, and had anthropometric measurements taken, including body mass index, waist circumference, weight, and height. Children's deprivation status was based on the English Indices of Multiple Deprivation score of the school they attended. Results: Children from more deprived areas had a higher weight status and waist circumference. Deprivation status also had an effect on some dimensions of the KIDSCREEN questionnaire: children from more deprived areas felt less socially accepted and felt bullied by their peers, had worse feelings about themselves, such as body image, and had more difficulty with school and learning. Children from more deprived areas reported higher rates of physical activity and differences in snack and fruit and vegetable intake compared with their more affluent peers. Conclusion: The results demonstrate that children living in the most deprived areas appear to be at greater risk of unfavourable health-related variables and behaviours and are exposed to home and neighbourhood environments that are less conducive to health-promoting behaviours compared with their peers from less deprived areas. These findings indicate that children living in highly deprived areas represent an important group for future interventions designed to promote health behaviours that would improve the quality of life of the child and other health variables such as weight status.

Keywords: children, dietary behaviour, health, obesity, physical activity, weight status

Procedia PDF Downloads 99
256 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They can collect, process, and store data on their own, and they can run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on the properties of the input data, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal has been received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed as features. The next stage is to use the selected features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano. At the edge, EEG-based emotion identification can be employed in applications that can rapidly expand its research and industrial use.
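The signal-processing chain described above (FFT per window, band-wise power, mean, and standard deviation as features, then KNN on arousal/valence labels) can be sketched as follows. This is an illustrative outline only; the sampling rate, band edges, window length, number of classes, and k are assumptions rather than values from the paper, and random data stand in for the cEEGrid recordings.

# Illustrative sketch of the described pipeline: FFT band features + KNN classifier.
# Sampling rate, band edges, window length and k are assumptions, not values from the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250  # assumed cEEGrid sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(window: np.ndarray) -> np.ndarray:
    """Power, mean and standard deviation of the FFT magnitude in each EEG band."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    feats = []
    for lo, hi in BANDS.values():
        mag = spectrum[(freqs >= lo) & (freqs < hi)]
        feats.extend([np.sum(mag ** 2), np.mean(mag), np.std(mag)])
    return np.array(feats)

# Toy training data standing in for labelled arousal/valence windows
rng = np.random.default_rng(0)
X = np.array([band_features(rng.standard_normal(FS * 2)) for _ in range(200)])
y = rng.integers(0, 4, size=200)  # e.g. 4 quadrants of the arousal/valence plane

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:3]))  # predicted emotional states for the first windows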

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 82
255 Decreasing the Oxidative Stress in Autistic Children: A Randomized Double-Blind Controlled Study With Palm Dates Fruit

Authors: Ammal Mokhtar Metwally, Amal Elsaied, Ghada A. Abdel-Latef, Ebtissam M. Salah El-Din, Hanaa R. M. Attia

Abstract:

The link between various diet therapies and autism is controversial, and the evidence is limited. Nutritional interventions aim to increase antioxidant levels, suggesting a positive effect on the improvement of autism severity. In this study, the effectiveness of a 90-day consumption of dates fruit (a non-pharmacological and risk-free option) in alleviating the severity of autism symptoms in individuals with ASD was investigated. The study also examined whether the baseline values of, or improvements in, some clinical and laboratory characteristics of the subjects affected their response to dates fruit intake with respect to the severity of ASD symptoms. Methodology: This study was a randomized controlled, double-blind, 3-month dates fruit intervention. 131 Egyptian children aged 3-12 years with confirmed ASD were enrolled in the study. Cases were randomized into one of three groups as follows: 1st regimen: Group I on 3 dates fruits/day (47 cases); 2nd regimen: Group II on 5 dates fruits/day (42 cases); and 3rd regimen: Group III, the non-dates group (42 cases). ASD severity was assessed using both the Diagnostic and Statistical Manual of Mental Disorders, 5th ed. (DSM-5) criteria and the Childhood Autism Rating Scale (CARS). The following measures were assessed before and after the regimens: blood levels of three oxidative markers, malondialdehyde (MDA), glutathione peroxidase (GPX1), and superoxide dismutase (SOD); nutritional and dietary assessment; and anthropometric measurements. Results: A significant reduction in the mean autism score was detected based on CARS scores for those on the dates regimens compared with those on the non-dates regimen (p < 0.01). Participants on 5 dates fruits/day for three months showed the greatest improvement in autism severity based on both CARS and DSM-5 compared with those in the 3 dates fruits/day and non-dates groups. Improvement in autism severity among responders to dates fruit intake was detected in 78.7% and 62.9% of cases based on CARS and DSM-5 diagnoses, respectively. Responders showed significant improvements in BMI z-score and in the ratio levels of both MDA/SOD and MDA/GPX. Conclusion: The positive results of this study suggest that palm dates fruit could be recommended for children with ASD as adjuvant therapy on a regular daily basis to achieve consistent improvement of autism symptoms. Objective: To investigate the effectiveness of a 90-day consumption of dates fruit in alleviating the severity of autism symptoms in individuals with ASD, and to explore the clinical and laboratory characteristics of the subjects that affected their response to dates fruit intake. Methodology: The study was a randomized controlled, double-blind, 3-month trial. 131 autistic Egyptian children aged 3-12 years were enrolled in one of three groups: the 1st on 3 dates fruits/day (47 cases), the 2nd (Group II) on 5 dates fruits/day (42 cases), and the 3rd (Group III) as the non-dates group (42 cases). Conclusion: The positive results of this study suggest that palm dates fruit (a non-pharmacological and risk-free option) could be recommended for children with ASD as adjuvant therapy on a regular daily basis to achieve consistent improvement of autism symptoms.

Keywords: autism spectrum disorders, palm dates fruits, CARS, DSM5, oxidative markers

Procedia PDF Downloads 61
254 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important developments in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risk arising from grade uncertainty in ore reserves can be evaluated and managed through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and, simultaneously, minimize the risk of deviation from the production targets under grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to incorporate grade uncertainty into the strategic mine schedule and to produce a more profitable and risk-aware production schedule. A combination of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method and a metaheuristic algorithm, the Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is used to update the Lagrange coefficients. In addition, a machine learning method, Random Forest, is applied to estimate the gold grade in the mineral deposit, and the Monte Carlo method is used as the simulation method, with 20 realizations. The results show that the proposed approach achieves considerable improvements over traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient methods. To demonstrate the applicability of the model, a case study of an open-pit gold mining operation is presented. The framework displays the capability to minimize risk and to improve the expected net present value and financial profitability of the LTPSOP, and it can control geological risk more effectively than the traditional procedure by considering grade uncertainty within the hybrid model framework.
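Of the components described above, only the Random Forest grade-estimation step lends itself to a compact sketch; the ALR-HHO scheduling layer is not reproduced here. In the sketch below, the drillhole data, the use of block coordinates as features, and the residual-perturbation trick used to mimic a set of 20 equally probable realizations are all illustrative assumptions, a crude stand-in for proper geostatistical conditional simulation.

# Illustrative sketch of the Random Forest grade-estimation step described above.
# Drillhole data, features (block coordinates) and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic drillhole samples: (x, y, z) coordinates with a gold grade in g/t
coords = rng.uniform(0, 500, size=(300, 3))
grade = 1.5 + 0.002 * coords[:, 2] + rng.normal(0, 0.3, size=300)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(coords, grade)

# Predict grades for the block model, then build equally probable realizations by
# perturbing the estimates with the residual spread (a crude stand-in for the
# geostatistical conditional simulation with 20 realizations used in the paper).
blocks = rng.uniform(0, 500, size=(1000, 3))
estimate = rf.predict(blocks)
residual_sd = np.std(grade - rf.predict(coords))
realizations = [estimate + rng.normal(0, residual_sd, size=estimate.size)
                for _ in range(20)]
print(len(realizations), realizations[0][:3])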

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 77
253 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time, so as to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic models to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
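In the spirit of HP-RNN, a minimal recurrent time-series regressor evaluated with MSE and R2 might look like the sketch below. The synthetic monthly series, window length, scaling, and network size are assumptions and do not reflect the authors' configuration or data.

# Illustrative sketch of an RNN time-series predictor in the spirit of HP-RNN.
# The synthetic monthly series, window length and network size are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(2)
months = np.arange(240)
series = (20000 + 150 * months + 2000 * np.sin(months / 12 * 2 * np.pi)
          + rng.normal(0, 500, size=months.size))      # synthetic sheltered counts

WINDOW = 12  # predict next month from the previous 12 months
X = np.array([series[i:i + WINDOW] for i in range(series.size - WINDOW)])
y = series[WINDOW:]
X = X[..., None] / 1e5  # shape (samples, timesteps, 1), crudely scaled
y_s = y / 1e5

split = int(0.8 * len(X))  # train on the earlier months, test on the later ones
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.SimpleRNN(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y_s[:split], epochs=50, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel() * 1e5
print("MSE:", mean_squared_error(y[split:], pred))
print("R2 :", r2_score(y[split:], pred))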

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 96
252 Fully Autonomous Vertical Farm to Increase Crop Production

Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek

Abstract:

New technologies in agriculture are opening new challenges and new opportunities. Among these, certainly, robotics, vision, and artificial intelligence are the ones that will make a significant leap, compared to traditional agricultural techniques, possible. In particular, the indoor farming sector will be the one that will benefit the most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities will be based both on hardware development such as automatic tools to perform different activities on soil and plants, as well as research to introduce an extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project of a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots in environments with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, (iv) integrate AI-based algorithms as decision support system to improve quality production. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach right from the design, development, and implementation phases. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strength characteristics highlighted by the models, (iii) implementation and experimental tests performed to test the real effectiveness of the systems created, evaluate any weaknesses so as to proceed with a targeted development. To these aims, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to the knowledge of detailed models developed based on the information collected, which allows for deepening the knowledge of these types of crops and guarantees the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow synergistic operation of all systems.

Keywords: automation, vertical farming, robot, artificial intelligence, vision, control

Procedia PDF Downloads 12
251 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in the science and technology of other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood, and a more reliable measure is needed to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM education scholars from three US universities (NSF award 1540888), using mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft were exported as an Excel file, containing 80 of both the 2D Wedge and Dash models (dash) and the 3D Stick and Ball models (BL). Complexity data were stored in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN's predictions are accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements analyzed with an ANN, educators and curriculum designers will be able to create targeted classroom resources to help improve students' visuospatial literacy and, therefore, science literacy.
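Outside RapidMiner, a gradient-boosted-trees classifier with the reported settings (140 trees, maximum depth of 7) could be approximated as in the sketch below. The feature layout (problem number, response time, 16 optode responses) and the synthetic data are assumptions for illustration only.

# Illustrative sketch of a gradient-boosted-trees classifier with the reported
# settings (140 trees, max depth 7). Feature layout and data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
# Assumed feature block: problem number, response time, and 16 fNIR optode responses
X = np.column_stack([
    rng.integers(1, 81, size=n),           # problem number (80 tasks per model type)
    rng.uniform(0.5, 10.0, size=n),        # response time in seconds
    rng.normal(0, 1, size=(n, 16)),        # hemodynamic responses, optodes 1-16
])
y = rng.integers(0, 2, size=n)             # successful vs. unsuccessful mental rotation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
gbt = GradientBoostingClassifier(n_estimators=140, max_depth=7).fit(X_tr, y_tr)
print("accuracy:", gbt.score(X_te, y_te))
print("top feature importances:", np.argsort(gbt.feature_importances_)[::-1][:3])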

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 93
250 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument

Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki

Abstract:

To the best knowledge of the authors, we experimentally demonstrate here, for the first time, a quantified correlation between the optical features of the ambient aerosol measured in real time and off-line measured toxicity data. Using these correlations, we present a novel methodology for real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science today. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, while in health terms it is one of the most harmful atmospheric constituents. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is predominantly based on the posterior analysis of filter-accumulated aerosol, with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols, so a comprehensive analysis of the existing data sets is limited in many cases. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the prevailing ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methods for air quality monitoring are pressing issues in air pollution research. During the last decades, many experimental studies have verified that there is a relation between the chemical composition and the absorption feature, quantified by the Absorption Angström Exponent (AAE), of carbonaceous particulate matter. Although the scientific community agrees that photoacoustic spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol accurately and reliably, multi-wavelength PAS instruments able to selectively characterise the wavelength dependence of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. We demonstrate the complete microphysical characterisation of wintertime urban ambient air, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a scanning mobility particle sizer with optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient air, we also present the results of eco-, cyto- and genotoxicity measurements of ambient aerosol based on the posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of the toxicities and of the AAE data deduced directly from the multi-wavelength absorption measurements.
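The AAE referred to above follows the usual power-law convention b_abs(λ) ∝ λ^(-AAE), so it can be deduced from a log-log fit over the absorption coefficients measured at the instrument's wavelengths. The sketch below illustrates this; the wavelength set and absorption values are assumed for the example, not measured data.

# Illustrative calculation of the Absorption Angström Exponent (AAE) from
# multi-wavelength absorption coefficients, assuming b_abs ~ lambda**(-AAE).
# The wavelengths and absorption values below are made up for the example.
import numpy as np

wavelengths_nm = np.array([266, 355, 532, 1064])   # assumed 4λ-PAS wavelengths
b_abs_Mm = np.array([42.0, 28.5, 16.3, 7.1])       # absorption coefficients, 1/Mm

# Linear fit in log-log space: ln(b_abs) = const - AAE * ln(lambda)
slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(b_abs_Mm), 1)
aae = -slope
print(f"AAE = {aae:.2f}")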

Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test

Procedia PDF Downloads 279
249 Magnetic Carriers of Organic Selenium (IV) Compounds: Physicochemical Properties and Possible Applications in Anticancer Therapy

Authors: E. Mosiniewicz-Szablewska, P. Suchocki, P. C. Morais

Abstract:

Despite significant progress in cancer treatment, there is a need to search for new therapeutic methods in order to minimize side effects. Chemotherapy, currently the main method of treating cancer, is non-selective and has a number of limitations. Toxicity to healthy cells is undoubtedly the biggest problem limiting the use of many anticancer drugs. The problem of how to kill the cancer without harming the patient may be solved by using organic selenium (IV) compounds. Organic selenium (IV) compounds are a new class of materials showing strong anticancer activity. They are the first organic compounds containing selenium at the +4 oxidation level, and they therefore eliminate multidrug resistance for all tumor cell lines tested so far. These materials are capable of selectively killing cancer cells without damaging healthy ones. They are obtained by incorporating selenous acid (H2SeO3) into the fatty acid molecules of sunflower oil and are therefore inexpensive to manufacture. Attaching these compounds to magnetic carriers enables their precise delivery directly to the tumor area and the simultaneous application of magnetic hyperthermia, creating a huge opportunity to eliminate the tumor effectively without side effects. Poly(lactic-co-glycolic acid) (PLGA) nanocapsules loaded with maghemite (γ-Fe2O3) nanoparticles and organic selenium (IV) compounds were successfully prepared by the nanoprecipitation method. The in vitro antitumor activity of the nanocapsules was evidenced using murine melanoma (B16-F10), oral squamous cell carcinoma (OSCC), and murine (4T1) and human (MCF-7) breast cancer lines. Further exposure of these cells to an alternating magnetic field increased the antitumor effect of the nanocapsules. Moreover, the nanocapsules exhibited an antitumor effect while not affecting normal cells. The magnetic properties of the nanocapsules were investigated by means of dc magnetization, ac susceptibility, and electron spin resonance (ESR) measurements. The nanocapsules presented typical superparamagnetic behavior around room temperature, manifested by the split between the zero-field-cooled/field-cooled (ZFC/FC) magnetization curves and the absence of hysteresis in the field-dependent magnetization curve above the blocking temperature. Moreover, the blocking temperature decreased with increasing applied magnetic field. The superparamagnetic character of the nanocapsules was also confirmed by the occurrence of a maximum in the temperature dependences of both the real χ′(T) and imaginary χ′′(T) components of the ac magnetic susceptibility, which shifted towards higher temperatures with increasing frequency. Additionally, upon decreasing the temperature, the ESR signal shifted to lower fields and gradually broadened, closely following the predictions for the ESR of superparamagnetic nanoparticles. The observed superparamagnetic properties of the nanocapsules enable their simple manipulation by means of a magnetic field gradient after introduction into the bloodstream, which is a necessary condition for their use as magnetic drug carriers. The observed anticancer and superparamagnetic properties show that magnetic nanocapsules loaded with organic selenium (IV) compounds should be considered as an effective material system for magnetic drug delivery and as a magnetohyperthermia inductor in antitumor therapy.

Keywords: cancer treatment, magnetic drug delivery system, nanomaterials, nanotechnology

Procedia PDF Downloads 180
248 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution

Authors: Peter G. Hollis, Kim G. Clarke

Abstract:

Accurate quantification of the overall volumetric oxygen transfer coefficient (KLa) is ubiquitous in bioprocesses: it is measured by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure of the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa, failing which a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems, where significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first order model (without incorporation of τ) and a second order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor, and it is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects with a 95% confidence interval. KP varied with changes in the system parameters, and the impact of solids loading was statistically significant at the 95% confidence level. Increased solids loading reduced KP consistently, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s-1 observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s-1 recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, exclusion of KP from the calculation was shown to under-predict KLa for all process conditions, with an error of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore of KP, has far-reaching impact on industrial bioprocesses, to ensure these systems are not transport limited during scale-up and operation. This study has shown the incorporation of τ to be essential to ensure KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it is essential that τ be determined individually for each set of process conditions.
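The two models referred to above can be illustrated with a short curve-fitting sketch: the first order model ignores the probe, while the second order model treats the probe as a first order element with constant KP = 1/τ in series with the gas-liquid transfer response. The synthetic trace and parameter values below are assumptions chosen only to mirror the magnitudes reported.

# Illustrative fit of KLa from a dissolved-oxygen step response, with and without
# the probe lag (probe constant KP = 1/tau). Data and parameter values are assumed.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, kla):
    # Normalised DO response ignoring the probe lag
    return 1.0 - np.exp(-kla * t)

def second_order(t, kla, kp):
    # Probe reading when a first-order probe (constant kp) follows the DO response
    return 1.0 - (kp * np.exp(-kla * t) - kla * np.exp(-kp * t)) / (kp - kla)

# Synthetic "measured" normalised DO trace: assumed KLa = 0.02 1/s, probe KP = 0.024 1/s
t = np.linspace(0, 400, 200)
rng = np.random.default_rng(4)
data = second_order(t, 0.02, 0.024) + rng.normal(0, 0.01, size=t.size)

(kla_1,), _ = curve_fit(first_order, t, data, p0=[0.01])
(kla_2, kp_2), _ = curve_fit(second_order, t, data, p0=[0.01, 0.05])
print(f"first-order (no tau):    KLa = {kla_1:.4f} 1/s")   # under-predicts KLa
print(f"second-order (with tau): KLa = {kla_2:.4f} 1/s, KP = {kp_2:.4f} 1/s")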

Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag

Procedia PDF Downloads 246
247 Tertiary Level Teachers' Beliefs about Codeswitching

Authors: Hoa Pham

Abstract:

Code switching, which can be described as the use of students’ first language in second language classrooms, has long been a controversial topic in the area of language teaching and second language acquisition. While this has been widely investigated across different contexts, little empirical research has been undertaken in Vietnam. The findings of this study contribute to our understanding of bilingual discourse and code switching practices in content and language integrated classrooms, which has significant implications for language teaching and learning in general and in particular for language pedagogy at tertiary level in Vietnam. This study examines the accounts the teachers articulated for their code switching practices in content-based Business English in Vietnam. Data were collected from five teachers through the use of stimulated recall interviews facilitated by the video data to garner the teachers' cognitive reflection, and allowed them to vocalise the motivations behind their code switching behaviour in particular contexts. The literature has recommended that when participants are provided with a large amount of stimuli or cues, they will experience an original situation again in their imagination with great accuracy. This technique can also provide a valuable "insider" perspective on the phenomenon under investigation which complements the researcher’s "outsider" observation. This can create a relaxed atmosphere during the interview process, which in turn promotes the collection of rich and diverse data. Also, participants can be empowered by this technique as they can raise their own concerns and discuss instances which they find important or interesting. The data generated through this study were analysed using a constant comparative approach. The study found that the teachers indicated their support for the use of code switching in their pedagogical practices. Particularly, as a pedagogical resource, the teachers saw code switching to the L1 playing a key role in facilitating the students' comprehension of both content knowledge and the target language. They believed the use of the L1 accommodates the students' current language competence and content knowledge. They also expressed positive opinions about the role that code switching plays in stimulating students' schematic language and content knowledge, encouraging retention and interest in learning and promoting a positive affective environment in the classroom. The teachers perceived that their use of code switching to the L1 helps them meet the students' language needs and prepares them for their study in subsequent courses and addresses functional needs so that students can cope with English language use outside the classroom. Several factors shaped the teachers' perceptions of their code switching practices, including their accumulated teaching experience, their previous experience as language learners, their theoretical understanding of language teaching and learning, and their knowledge of the teaching context. Code switching was a typical phenomenon in the observed classes and was supported by the teachers in certain contexts. This study reinforces the call in the literature to recognise this practice as a useful instructional resource.

Keywords: codeswitching, language teaching, teacher beliefs, tertiary level

Procedia PDF Downloads 412
246 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning

Authors: James Gallagher, Phillip Benachour

Abstract:

As location-aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to support learning approaches thrives. Building on the growing acceptance of ubiquitous computing and the steady progress, in both accuracy and battery usage, of pervasive devices, we present a case study of remote location viewing and how the application can be used to support mobile learning in situ using an existing scenario. Through the case study we introduce a new, innovative application, Mobipeek, based around a request/response protocol for viewing a remote location, and explore how it can be applied both as part of a teacher-led activity and in informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity, such as the use of mobile phone cameras outdoors as part of an interactive lesson. An example of a learning activity would be to collect photos in the wild of plants, vegetation, and foliage as part of a geography or environmental science lesson. Another example could be to take photos of architectural buildings and monuments as part of an architecture course. These images can be uploaded and then displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help to foster students' active participation while helping students to understand lessons in a more interesting and effective way. Mobipeek could augment the student learning experience by providing further interaction with peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction among more participants. Remote location viewing can be used to access images in a specific location, with the choice of location depending on the activity and lesson. For example, architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience is manifested in the different contextual and cultural influences as well as in the sharing of images from different locations. In addition to the implementation of Mobipeek, we analyse this application and a set of other possible solutions aimed at making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of "gamification" in an attempt to further the engagement derived from such a tool and encourage usage. We conclude by identifying limitations, both from a technical and a mobile learning perspective.
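The request/response protocol described for Mobipeek can be pictured with a small data-model sketch: a request carries a map point, a message, and time and distance constraints, and a device responds only if it lies within those bounds before the request expires. The field names, coordinates, and haversine-based bound check below are illustrative assumptions, not the actual Mobipeek implementation.

# Illustrative data model for a Mobipeek-style request/response exchange.
# Field names, coordinates and the haversine bound check are assumptions for the sketch.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

@dataclass
class ViewRequest:
    lat: float                 # selected point on the map
    lon: float
    message: str               # message attached by the requester
    max_distance_km: float     # distance constraint
    expires_at: datetime       # time constraint

@dataclass
class ViewResponse:
    image_path: str            # photo taken at the remote location
    message: str               # context provided by the responder

def can_respond(req: ViewRequest, device_lat: float, device_lon: float,
                now: datetime) -> bool:
    """A device may respond only while the request is live and it is within bounds."""
    in_time = now <= req.expires_at
    in_range = haversine_km(req.lat, req.lon, device_lat, device_lon) <= req.max_distance_km
    return in_time and in_range

req = ViewRequest(54.0104, -2.7877, "What does the quad look like right now?",
                  max_distance_km=1.0, expires_at=datetime.now() + timedelta(hours=1))
print(can_respond(req, 54.0110, -2.7860, datetime.now()))  # True: nearby and in time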

Keywords: context aware, location aware, mobile learning, remote viewing

Procedia PDF Downloads 267
245 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack

Authors: Varun Agarwal

Abstract:

Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and ameliorate breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages -region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classifications, probabilistic mapping of tumor localizations, and further processing for whole WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2 - an effective CNN classifier that uses residual connections to enhance feature representation, adding convolved outputs in the inception unit to the proceeding input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder -primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers- and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network enhanced with the CDAE stack yields state of the art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units synergized with the input denoising algorithm enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
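One element of the stack described above, a single convolutional denoising autoencoder built from convolutional, leaky rectified linear unit, and batch normalization layers, is sketched below. The tile size, filter counts, noise level, and training data are assumptions; the saliency detection, contrast normalization, spatial pyramid pooling, and Inception-ResNetV2 stages are not reproduced here.

# Illustrative sketch of one convolutional denoising autoencoder (CDAE) from the
# stack described above: convolution, leaky ReLU and batch normalization layers,
# trained to reconstruct clean tiles from noise-corrupted ones. Tile size, filter
# counts and noise level are assumptions.
import tensorflow as tf

TILE = 64  # assumed WSI tile edge length in pixels

inputs = tf.keras.Input(shape=(TILE, TILE, 3))
x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same")(inputs)
x = tf.keras.layers.LeakyReLU(0.1)(x)
x = tf.keras.layers.BatchNormalization()(x)          # encoder: 64x64x3 -> 32x32x32
x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same")(x)
x = tf.keras.layers.LeakyReLU(0.1)(x)
x = tf.keras.layers.BatchNormalization()(x)
outputs = tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)

cdae = tf.keras.Model(inputs, outputs)
cdae.compile(optimizer="adam", loss="mse")

# Denoising objective: corrupt the input tiles, reconstruct the clean ones
clean = tf.random.uniform((128, TILE, TILE, 3))       # stand-in for WSI tiles
noisy = tf.clip_by_value(clean + tf.random.normal(clean.shape, stddev=0.1), 0.0, 1.0)
cdae.fit(noisy, clean, epochs=5, batch_size=32, verbose=0)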

Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images

Procedia PDF Downloads 107
244 Tailoring Structural, Thermal and Luminescent Properties of Solid-State MIL-53(Al) MOF via Fe³⁺ Cation Exchange

Authors: T. Ul Rehman, S. Agnello, F. M. Gelardi, M. M. Calvino, G. Lazzara, G. Buscarino, M. Cannas

Abstract:

Metal-Organic Frameworks (MOFs) have emerged as promising candidates for detecting metal ions owing to their large surface area, customizable porosity, and diverse functionalities. In recent years, there has been a surge in research focused on MOFs with luminescent properties. These frameworks are constructed through coordinated bonding between metal ions and multi-dentate ligands, resulting in inherent fluorescent structures. Their luminescent behavior is influenced by factors like structural composition, surface morphology, pore volume, and interactions with target analytes, particularly metal ions. MOFs exhibit various sensing mechanisms, including photo-induced electron transfer (PET) and charge transfer processes such as ligand-to-metal (LMCT) and metal-to-ligand (MLCT) transitions. Among these, MIL-53(Al) stands out due to its flexibility, stability, and specific affinity towards certain metal ions, making it a promising platform for selective metal ion sensing. This study investigates the structural, thermal, and luminescent properties of MIL-53(Al) metal-organic framework (MOF) upon Fe3+ cation exchange. Two separate sets of samples were prepared to activate the MOF powder at different temperatures. The first set of samples, referred to as MIL-53(Al), activated (120°C), was prepared by activating the raw powder in a glass tube at 120°C for 12 hours and then sealing it. The second set of samples, referred to as MIL-53(Al), activated (300°C), was prepared by activating the MIL-53(Al) powder in a glass tube at 300°C for 70 hours. Additionally, 25 mg of MIL-53(Al) powder was dispersed in 5 mL of Fe3+ solution at various concentrations (0.1-100 mM) for the cation exchange experiment. The suspension was centrifuged for five minutes at 10,000 rpm to extract MIL-53(Al) powder. After three rounds of washing with ultrapure water, MIL-53(Al) powder was heated at 120°C for 12 hours. For PXRD and TGA analyses, a sample of the obtained MIL-53(Al) was used. We also activated the cation-exchanged samples for time-resolved photoluminescence (TRPL) measurements at two distinct temperatures (120 and 300°C) for comparative analysis. Powder X-ray diffraction patterns reveal amorphization in samples with higher Fe3+ concentrations, attributed to alterations in coordination environments and ion exchange dynamics. Thermal decomposition analysis shows reduced weight loss in Fe3+-exchanged MOFs, indicating enhanced stability due to stronger metal-ligand bonds and altered decomposition pathways. Raman spectroscopy demonstrates intensity decrease, shape disruption, and frequency shifts, indicative of structural perturbations induced by cation exchange. Photoluminescence spectra exhibit ligand-based emission (π-π* or n-π*) and ligand-to-metal charge transfer (LMCT), influenced by activation temperature and Fe3+ incorporation. Quenching of luminescence intensity and shorter lifetimes upon Fe3+ exchange result from structural distortions and Fe3+ binding to organic linkers. In a nutshell, this research underscores the complex interplay between composition, structure, and properties in MOFs, offering insights into their potential for diverse applications in catalysis, gas storage, and luminescent devices.

Keywords: Fe³⁺ cation exchange, luminescent metal-organic frameworks (LMOFs), MIL-53(Al), solid-state analysis

Procedia PDF Downloads 30
243 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast, predictive NOx model that combines physical parameters with empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperatures in both the burned and unburned zones are considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Advantages such as high accuracy and robustness across operating conditions, low computational time, and the smaller number of data points required for calibration establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
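
As an illustration of the ensemble machine-learning step described above, the following minimal sketch (Python with scikit-learn) fits a gradient-boosted regressor mapping a few in-cylinder combustion parameters to NOx. The feature set, the synthetic data, and the library choice are illustrative assumptions; the abstract does not specify the authors' tooling or feature list beyond the quantities named.

```python
# Minimal sketch: an ensemble regressor maps in-cylinder combustion parameters
# (burned/unburned zone temperatures, O2 consumed in the burned zone, trapped
# fuel mass) to NOx. Data and feature choices are placeholders, not the study's.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500  # steady-state operating points (placeholder)
X = np.column_stack([
    rng.uniform(1800, 2600, n),   # burned-zone temperature [K]
    rng.uniform(600, 1100, n),    # unburned-zone temperature [K]
    rng.uniform(0.05, 0.20, n),   # O2 fraction consumed in the burned zone
    rng.uniform(10, 80, n),       # trapped fuel mass [mg/cycle]
])
# Synthetic NOx with an Arrhenius-like temperature dependence (arbitrary units)
y = 1e6 * np.exp(-38000.0 / X[:, 0]) * X[:, 2] * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
print("R^2 on held-out points:", r2_score(y_te, model.predict(X_te)))
```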

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 92
242 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy for healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a pivotal role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, in turn, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combat cancer comprehensively. Continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. Radiotherapy, for its part, finds applications in non-cancerous conditions such as benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 45
241 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections

Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor

Abstract:

Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. In addition, the study aimed to determine the clarithromycin and quinolone resistance ratios of H. pylori strains isolated from gastric biopsy cultures, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, Hematoxylin and Eosin staining), rapid urease test (RUT) (CLOtest, Cimberly-Clark, USA), and two molecular tests: GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR-ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of the H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. The sensitivity and specificity rates of the two molecular methods were calculated against these two gold standards, as sketched in the example below. Results: A total of 266 patients aged 16-83 years, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were culture positive, and 157 were positive by both H and RUT. 179 patients were positive by both GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection. The sensitivity and specificity rates of the five methods were as follows: C, 80.9% and 84.4%; H+RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR-ACE Detection, 100% and 71.3%. A strong correlation was found between C and H+RUT, C and GenoType® HelicoDR, and C and Seeplex® H. pylori-ClaR-ACE Detection (r: 0.644 and p: 0.000, r: 0.757 and p: 0.000, r: 0.757 and p: 0.000, respectively). Of the 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and 6 cases with Seeplex® H. pylori-ClaR-ACE Detection. Conclusion: In our study, GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection were found to be the most sensitive diagnostic methods among all those investigated (C, H, and RUT).
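
The sensitivity and specificity figures reported above follow from standard 2x2 contingency-table arithmetic. The sketch below (Python) is illustrative; the counts are reconstructed from the reported totals (144 culture positives, 179 molecular positives, 266 patients) under the assumption that the molecular tests detected every culture-positive case, which reproduces the stated 100% sensitivity and 71.3% specificity against culture.

```python
# Minimal sketch of sensitivity/specificity against a gold standard (here culture).
# The counts are reconstructed from the reported totals for illustration only.
def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Molecular test vs. culture: 144 true positives, 35 false positives,
# 0 false negatives, 87 true negatives (266 patients in total).
tp, fp, fn, tn = 144, 35, 0, 87
sens, spec = sensitivity_specificity(tp, fp, fn, tn)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 100.0%, 71.3%
```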

Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection, antimicrobial resistance

Procedia PDF Downloads 137
240 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score

Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah

Abstract:

Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation and homicidal attempts. Assessment of burn injuries, as well as rating permanent disabilities and disfigurement following thermal injuries for the benefit of compensation claims, represents a challenging problem. This necessitates the development of reliable scoring systems to yield an expected likelihood of permanent disability or fatal outcome following burn injuries. The study was designed to identify the risk factors of mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt, from October 2014 to October 2015. Victims were examined after obtaining informed consent, and the data were collected in specially designed sheets including demographic data, burn details and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The results of the study show that the mean age of patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases. In addition, 8% of patients sustained burns over more than 60% of the total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died, either from wound sepsis, multi-organ failure or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27 and that of the FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, which was significantly higher than that of the BOBI score (0.883). A statistically significant association was revealed between both predictive models and the outcome. The study concluded that both scoring systems were beneficial in predicting mortality in acutely burnt patients. However, the FLAMES score could be applied with a higher level of accuracy.
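
A minimal sketch of the ROC comparison behind the reported AUCs is given below (Python, scikit-learn). The score values are simulated to mirror the setting (100 patients, 19 deaths) and are not the study data, so the resulting AUCs only roughly approximate 0.95 vs. 0.883.

```python
# Minimal sketch of comparing two prognostic scores by ROC AUC.
# Simulated scores, not the study's records.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
died = np.zeros(100, dtype=int)
died[:19] = 1  # 19 in-hospital deaths

# Simulated scores: higher values indicate higher predicted mortality risk
flames = rng.normal(loc=np.where(died, -1.0, -6.0), scale=1.8)
bobi = rng.normal(loc=np.where(died, 3.0, 1.0), scale=1.2)

print("FLAMES AUC:", round(roc_auc_score(died, flames), 3))
print("BOBI   AUC:", round(roc_auc_score(died, bobi), 3))
```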

Keywords: BOBI, burns, FLAMES, scoring systems, outcome

Procedia PDF Downloads 307
239 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery

Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats

Abstract:

Geoinformation technologies of space agromonitoring are a means of operative decision-making support in the tasks of managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum. Time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology has been created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) has been created, and crop spectral signatures are calculated with the preliminary removal of row spacing, cloud cover, and cloud shadows in order to construct time series of crop growth characteristics. The obtained data are used in tracking grain crop growth and in the timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecasting are created in the form of linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are evaluated with an accuracy of up to 95%. The developed technology was used for agricultural area monitoring in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results allow us to conclude that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). It also successfully separates the soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
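
A minimal sketch of the supervised crop-classification step is given below (Python, scikit-learn): per-field time-series features built from optical (e.g., NDVI) and radar (backscatter) observations feed a classifier trained on labelled samples. The synthetic phenology curves, the feature layout, and the random-forest choice are illustrative assumptions, not the platform's actual implementation.

```python
# Minimal sketch of supervised crop classification from combined
# optical (NDVI) and radar (VV backscatter) time-series features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
classes = ["winter wheat", "rapeseed", "barley", "soybean", "corn", "sunflower"]
n_per_class, n_dates = 60, 12  # e.g. monthly composites over the season (placeholder)

X, y = [], []
for k, _ in enumerate(classes):
    # Synthetic NDVI trajectories with class-specific phenology, plus VV backscatter
    t = np.linspace(0, 1, n_dates)
    ndvi = 0.2 + 0.6 * np.exp(-((t - 0.3 - 0.08 * k) ** 2) / 0.02)
    ndvi = ndvi + rng.normal(0, 0.05, (n_per_class, n_dates))
    vv = rng.normal(-12 - 0.5 * k, 1.0, (n_per_class, n_dates))  # dB
    X.append(np.hstack([ndvi, vv]))
    y += [k] * n_per_class

X, y = np.vstack(X), np.array(y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```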

Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform

Procedia PDF Downloads 417
238 Development of PCL/Chitosan Core-Shell Electrospun Structures

Authors: Hilal T. Sasmazel, Seda Surucu

Abstract:

Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing: (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) 3-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that can closely mimic the natural extracellular matrix (ECM) to create a tissue-specific, niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures integrate into the host with minimal scarring/pain and facilitate angiogenesis. This study is concerned with combining the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Amongst the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long been popular among researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycan of bone tissue. However, its low mechanical flexibility and limited biodegradability reveal the necessity of using this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis) and good mechanical properties. Nevertheless, there are also several disadvantages of PCL, such as its hydrophobic structure, limited bio-interaction and susceptibility to bacterial biodegradation. Therefore, it became crucial to use these two polymers together as a hybrid material in order to overcome the disadvantages of both and combine their advantages. The scaffolds here were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). Additionally, a gas permeability test, mechanical testing, thickness measurement, and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan and PCL/chitosan core-shell). Using the ImageJ software (USA), the average fiber diameter values were calculated from SEM photographs as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan and 0.412±0.339 µm for PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to the PCL/chitosan core-shell structures. TEM images proved that homogeneous and continuous bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability of the produced PCL/chitosan core-shell structure was 2315±3.4 g·m⁻²·day⁻¹. In the future, cell-material interactions of the developed PCL/chitosan core-shell structures will be investigated with the L929 ATCC CCL-1 mouse fibroblast cell line. A standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation and growth capacities of the developed materials.

Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold

Procedia PDF Downloads 460
237 The Effect of Technology on Skin Development and Progress

Authors: Haidy Weliam Megaly Gouda

Abstract:

Dermatology is often a neglected specialty in low-resource settings despite the high morbidity associated with skin disease. This becomes even more significant when associated with HIV infection, as dermatological conditions are more common and aggressive in HIV-positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population in a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on the available literature and data collected from clinics. The aim was to improve diagnostic accuracy and provide guidance for the treatment of skin disease in HIV-positive patients. A literature search within Embase, Medline and Google Scholar was performed and supplemented with data obtained from attending five antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent, or more extensive in the HIV population, or as having more adverse outcomes when they develop in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After the collection of data and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was included, and a booklet was created using a simple layout of a picture, a diagnostic description of the disease and treatment options. Clinical photographs were collected from local clinics (with the full consent of the patient) or from the book ‘Common Skin Diseases in Africa’ (permission granted if fully acknowledged and used in a not-for-profit capacity). The tool was evaluated by the local staff alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice.

Keywords: prevalence and pattern of skin diseases, impact on quality of life, interventions

Procedia PDF Downloads 25
236 An Approach to Determine the in Transit Vibration to Fresh Produce Using Long Range Radio (LORA) Wireless Transducers

Authors: Indika Fernando, Jiangang Fei, Roger Stanely, Hossein Enshaei

Abstract:

Ever-increasing consumer demand for quality fresh produce has placed manifold pressure on post-harvest supply chains in recent years. Mechanical injury to fresh produce is a critical factor in produce wastage, especially as supply chains physically extend over thousands of miles. Vibration damage in transit was identified as a specific area of focus; it results in the wastage of a significant portion of fresh produce, at times ranging from 10% to 40% in some countries. Several studies have concentrated on quantifying the impact of vibration on fresh produce, but collecting vibration data continuously has been a challenge due to limitations in device battery life and memory capacity. Therefore, study samples were limited to a stretch of the transit passage or a limited time of the journey. This may or may not give an accurate understanding of the vibration impacts encountered throughout the transit passage, which limits the accuracy of the results. Consequently, an approach that can extend the capacity for measuring vibration signals throughout the transit passage would contribute to accurately analyzing vibration damage along the post-harvest supply chain. A mechanism was developed to address this challenge, which is capable of measuring in-transit vibration continuously throughout the transit passage, subject to a minimum acceleration threshold (0.1 g). A system consisting of six tri-axial vibration transducers, installed at different locations inside the cargo (produce) pallets in the truck, transmits vibration signals through LORA (Long Range Radio) technology to a central device installed inside the container. The central device processes and records the vibration signals transmitted by the portable transducers, along with the GPS location. This method optimizes the power consumption of the portable transducers, maximizing their capability to measure vibration impacts over transit passages extending to days in the distribution process. The trial tests conducted using this approach reveal that it is a reliable method to measure and quantify in-transit vibrations along the supply chain. The GPS capability makes it possible to identify the locations in the supply chain where significant vibration impacts were encountered. This method contributes to determining the causes, susceptibility and intensity of vibration impact damage to fresh produce in the post-harvest supply chain. More broadly, the approach could be used to determine vibration impacts not only for fresh produce but also for other products in supply chains, whether in transit for a few hours or several days.
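
A minimal sketch of the node-side logic described above is given below (Python): sample the tri-axial accelerometer, keep only readings whose magnitude exceeds the 0.1 g threshold, and pass compact records to the LoRa radio. The read_accel() and lora_send() functions are hypothetical placeholders for the actual transducer and radio drivers, which the abstract does not specify.

```python
# Minimal sketch: threshold-gated acceleration logging on a pallet node.
# Hardware access functions are hypothetical placeholders.
import json
import math
import time

G_THRESHOLD = 0.1   # minimum acceleration magnitude, in g, worth transmitting
NODE_ID = "pallet-3"

def read_accel():
    """Placeholder: return (ax, ay, az) in g from the tri-axial transducer."""
    raise NotImplementedError

def lora_send(payload: bytes):
    """Placeholder: enqueue a payload on the LoRa radio for the central device."""
    raise NotImplementedError

def monitor(sample_period_s: float = 0.01):
    while True:
        ax, ay, az = read_accel()
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude >= G_THRESHOLD:
            record = {"id": NODE_ID, "t": time.time(),
                      "ax": ax, "ay": ay, "az": az, "mag": round(magnitude, 3)}
            lora_send(json.dumps(record).encode("utf-8"))
        time.sleep(sample_period_s)
```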

Keywords: post-harvest, supply chain, wireless transducers, LORA, fresh produce

Procedia PDF Downloads 241
235 A Clinical Cutoff to Identify Metabolically Unhealthy Obese and Normal-Weight Phenotype in Young Adults

Authors: Lívia Pinheiro Carvalho, Luciana Di Thommazo-Luporini, Rafael Luís Luporini, José Carlos Bonjorno Junior, Renata Pedrolongo Basso Vanelli, Manoel Carneiro de Oliveira Junior, Rodolfo de Paula Vieira, Renata Trimer, Renata G. Mendes, Mylène Aubertin-Leheudre, Audrey Borghi-Silva

Abstract:

Rationale: Cardiorespiratory fitness (CRF) and functional capacity in young obese and normal-weight people are associated with metabolic and cardiovascular diseases and mortality. However, it remains unclear whether the metabolically healthy (MH) or at-risk (AR) phenotype influences cardiorespiratory fitness not only in vulnerable populations such as obese adults but also in normal-weight people. The HOMA insulin resistance index (HI) and the leptin-adiponectin ratio (LA) are strong markers for characterizing those phenotypes, which we hypothesized to be associated with physical fitness. We also hypothesized that an easy and feasible exercise test could identify a subpopulation at risk of developing metabolic and related disorders. Methods: Thirty-nine sedentary men and women (20-45y; 18.530 kg.m-2) underwent a clinical evaluation, including the six-minute step test (ST), a well-validated and reliable test for young people. Body composition assessment was done by tetrapolar bioimpedance in a fasting state and, for women, in the follicular phase. Maximal cardiopulmonary exercise testing, as well as the ST, evaluated the oxygen uptake at the peak of the test (VO2peak) using an Oxycon Mobile ergospirometer. Lipids, glucose and insulin were analyzed, and serum leptin and adiponectin were quantified from blood samples by ELISA. Volunteers were divided into two groups, AR or MH, according to an HI cutoff of 1.95, which was previously determined in the literature. A t-test for comparison between groups, Pearson's test to correlate the main variables, and ROC analysis for discriminating AR subjects based on the number of up-and-down cycles in the ST (SC) were applied (p<0.05). Results: Higher LA and fat mass (FM) and lower HDL, SC, leg lean mass (LM) and VO2peak were found in AR than in MH. Significant correlations were found between VO2peak and SC (r=0.80) as well as between LA and FM (r=0.87), VO2peak (r=-0.73), and SC (r=-0.65). The area under the curve showed moderate accuracy (0.75) for SC < 173 in discriminating the AR phenotype. Conclusion: Our study found that at-risk obese and normal-weight subjects showed an unhealthy metabolism as well as poor CRF and functional daily activity capacity. Additionally, a simple and low-cost functional test associated with the above-mentioned aspects is able to identify at-risk subjects for primary intervention, with important clinical and health implications.
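
The SC < 173 cutoff reported above is the kind of threshold that can be derived from a ROC curve, for example by maximizing Youden's J statistic. The sketch below (Python, scikit-learn) illustrates the procedure on simulated step-count data; it does not reproduce the study's 39 volunteers or its exact cutoff-selection method, which the abstract does not state.

```python
# Minimal sketch: deriving a clinical cutoff from a ROC curve via Youden's J.
# Simulated step counts (SC), not the study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
at_risk = rng.integers(0, 2, 39)                      # 1 = "at risk" phenotype
sc = np.where(at_risk, rng.normal(160, 15, 39), rng.normal(185, 15, 39))

# Lower step counts indicate risk, so score = -SC makes "higher = more at risk"
fpr, tpr, thresholds = roc_curve(at_risk, -sc)
best = np.argmax(tpr - fpr)                           # maximize Youden's J
print("AUC:", round(roc_auc_score(at_risk, -sc), 2))
print("Optimal cutoff: SC <", round(float(-thresholds[best]), 0))
```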

Keywords: aerobic capacity, exercise, fitness, metabolism, obesity, 6MST

Procedia PDF Downloads 326
234 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 is defined as a minimizer of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J with respect to u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
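
In symbols, the regularized objective and the gradient update described above can be sketched as follows (the regularization weight λ, the step size α, and the Laplacian order p are illustrative symbols; the abstract does not fix their values):

```latex
% u(x,t;u_0) solves the forward NSE from the initial field u_0; v_1 is the target at t = 1.
\begin{align*}
  J(u_0) &= \tfrac{1}{2}\int_\Omega \bigl|u(x,1;u_0) - v_1(x)\bigr|^2 \,\mathrm{d}x
            + \tfrac{\lambda}{2}\int_\Omega \bigl|\nabla^{2p} u_0(x)\bigr|^2 \,\mathrm{d}x,\\
  u_0^{(k+1)} &= u_0^{(k)} - \alpha\, \nabla_{u_0} J\bigl(u_0^{(k)}\bigr),
\end{align*}
```

where the gradient of J with respect to u0 is evaluated by integrating the adjoint NSE backwards in time from t = 1 to t = 0.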

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 201
233 Post-Exercise Recovery Tracking Based on Electrocardiography-Derived Features

Authors: Pavel Bulai, Taras Pitlik, Tatsiana Kulahava, Timofei Lipski

Abstract:

A method of electrocardiography (ECG) interpretation for post-exercise recovery tracking was developed. Metabolic indices (aerobic and anaerobic) were designed using ECG-derived features. This study reports the associations between the aerobic and anaerobic indices and classical parameters of a person's physiological state, including blood biochemistry, glycogen concentration and VO2max changes. During the study, 9 participants (healthy, physically active, moderately trained men and women who trained 2-4 times per week for at least 9 weeks) underwent (i) ECG monitoring using an Apple Watch Series 4 (AWS4); (ii) blood biochemical analysis; (iii) a maximal oxygen consumption (VO2max) test; and (iv) bioimpedance analysis (BIA). ECG signals from a single-lead wrist-wearable device were processed with detection of the QRS complex. The aerobic index (AI) was derived as the normalized slope of the QR segment. The anaerobic index (ANI) was derived as the normalized slope of the SJ segment. Biochemical parameters, glycogen content and VO2max were evaluated eight times within 3-60 hours after training. ECGs were recorded 5 times per day, plus before and after training, cycloergometry and BIA. Negative correlations between AI and blood markers of muscle functional status, including creatine phosphokinase (r=-0.238, p < 0.008), aspartate aminotransferase (r=-0.249, p < 0.004) and uric acid (r = -0.293, p<0.004), were observed. ANI was also correlated with creatine phosphokinase (r= -0.265, p < 0.003), aspartate aminotransferase (r = -0.292, p < 0.001), and lactate dehydrogenase (LDH) (r = -0.190, p < 0.050). Thus, when the level of muscle enzymes increases during post-exercise fatigue, AI and ANI decrease; during recovery, metabolite levels are restored, and a rise in the metabolic indices is registered. It can be concluded that AI and ANI adequately reflect muscle physiology during recovery. One of the markers of an athlete's physiological state is the ratio between testosterone and cortisol (TCR). The TCR provides a relative indication of anabolic-catabolic balance and is considered to be more sensitive to training stress than measuring testosterone and cortisol separately. AI shows a strong negative correlation with TCR (r=-0.437, p < 0.001) and correctly represents post-exercise physiology. In order to reveal the relation between the ECG-derived metabolic indices and the state of the cardiorespiratory system, direct measurements of VO2max were carried out at various time points after training sessions. A negative correlation between AI and VO2max (r = -0.342, p < 0.001) was obtained. These data, which indicate a rise in VO2max during fatigue, are counterintuitive; however, some studies have revealed an increased stroke volume after training, which agrees with these findings. It is important to note that a post-exercise increase in VO2max does not mean an athlete's readiness for the next training session, because the recovery of the cardiovascular system occurs over a substantially longer period. Negative correlations registered for ANI with glycogen (r = -0.303, p < 0.001), albumin (r = -0.205, p < 0.021) and creatinine (r = -0.268, p < 0.002) reflect the dehydration status of the participants after training. The correlations between the designed metabolic indices and physiological parameters revealed in this study can be considered sufficient evidence for using these indices to assess the state of a person's aerobic and anaerobic metabolic systems after training, during fatigue, recovery and supercompensation.
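
A minimal sketch of the aerobic-index computation is given below (Python): the slope of the QR segment of each detected beat, normalized here by the R-peak amplitude. The fiducial-point inputs and the exact normalization are assumptions for illustration; the abstract defines AI only as the normalized QR-segment slope (and ANI analogously for the SJ segment).

```python
# Minimal sketch: average normalized QR-segment slope over detected beats.
# Normalization by R-peak amplitude is an assumption, not the authors' definition.
import numpy as np

def qr_slope_index(ecg: np.ndarray, fs: float, q_idx: np.ndarray, r_idx: np.ndarray) -> float:
    """Average normalized QR-segment slope over all detected beats.

    ecg   : single-lead ECG samples (mV)
    fs    : sampling frequency (Hz)
    q_idx : sample indices of Q points, one per beat
    r_idx : sample indices of R peaks, one per beat
    """
    slopes = []
    for q, r in zip(q_idx, r_idx):
        dt = (r - q) / fs                            # QR duration in seconds
        dv = ecg[r] - ecg[q]                         # QR amplitude in mV
        if dt > 0 and ecg[r] != 0:
            slopes.append((dv / dt) / abs(ecg[r]))   # normalized slope, 1/s
    return float(np.mean(slopes)) if slopes else float("nan")
```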

Keywords: aerobic index, anaerobic index, electrocardiography, supercompensation

Procedia PDF Downloads 91
232 Acrylate-Based Photopolymer Resin Combined with Acrylated Epoxidized Soybean Oil for 3D-Printing

Authors: Raphael Palucci Rosa, Giuseppe Rosace

Abstract:

Stereolithography (SLA) is one of the 3D-printing technologies that has been steadily growing in popularity for both industrial and personal applications due to its versatility, high accuracy, and low cost. Its printing process consists of using a light emitter to solidify photosensitive liquid resins layer by layer to produce solid objects. However, the majority of the resins used in SLA are derived from petroleum and characterized by toxicity, stability, and recalcitrance to degradation in natural environments. Aiming to develop an eco-friendly resin, in this work, different combinations of a standard commercial SLA resin (Peopoly UV Professional) with a vegetable-based resin were investigated. To reach this goal, different mass concentrations (varying from 10 to 50 wt%) of acrylated epoxidized soybean oil (AESO), a vegetable resin produced from soybean oil, were mixed with a commercial acrylate-based resin. 1.0 wt% of diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) was used as photoinitiator, and the samples were printed using a Peopoly Moai 130. The machine was set to operate at the standard configuration used when printing commercial resins. After printing was finished, the excess resin was drained off, and the samples were washed in isopropanol and water to remove any unreacted resin. Finally, the samples were post-cured for 30 min in a UV chamber. FT-IR analysis was used to confirm the UV polymerization of the formulated resin at different AESO/Peopoly ratios. The signals from 1643.7 to 1616 cm⁻¹, which correspond to the C=C stretching of the AESO acrylic acids and the Peopoly acrylic groups, decrease significantly after the reaction. This decrease indicates the consumption of the double bonds during the radical polymerization. Furthermore, the slight shift of the C-O-C signal from 1186.1 to 1159.9 cm⁻¹ and the decrease of the signals at 809.5 and 983.1 cm⁻¹, which correspond to unsaturated double bonds, are both evidence of successful polymerization. Mechanical analyses showed a decrease of 50.44% in tensile strength when adding 10 wt% of AESO, but it was still in the same range as other commercial resins. The elongation at break increased by 24% with 10 wt% of AESO, and swelling analysis showed that samples mixed with a higher concentration of AESO absorbed less water than their counterparts. Furthermore, high-resolution prototypes were printed using both resins, and visual analysis did not show any significant difference between the two products. In conclusion, the AESO resin was successfully incorporated into a commercial resin without affecting its printability. The bio-based resin showed lower tensile strength than the Peopoly resin due to network loosening, but it was still in the range of other commercial resins. The hybrid resin also showed better flexibility and water resistance than the Peopoly resin without affecting its resolution. Finally, the development of new types of SLA resins is essential to provide sustainable alternatives to the commercial petroleum-based ones.
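
A small worked example of batching the hybrid resin is given below (Python), assuming the AESO fraction and the 1.0 wt% TPO loading are both expressed relative to the total formulation mass; the abstract does not state the weighing basis explicitly.

```python
# Worked example of component masses for one batch of hybrid SLA resin.
# Assumption: wt% values are relative to the total mix, which the abstract does not specify.
def batch(total_g: float, aeso_wt_pct: float, tpo_wt_pct: float = 1.0):
    tpo = total_g * tpo_wt_pct / 100.0
    aeso = total_g * aeso_wt_pct / 100.0
    commercial = total_g - aeso - tpo
    return {"AESO (g)": aeso, "commercial resin (g)": commercial, "TPO (g)": tpo}

print(batch(100.0, 10))  # {'AESO (g)': 10.0, 'commercial resin (g)': 89.0, 'TPO (g)': 1.0}
```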

Keywords: 3D-printing, bio-based, resin, soybean, stereolithography

Procedia PDF Downloads 107
231 Synthesis and Luminescent Properties of Barium-Europium (III) Silicate Systems

Authors: A. Isahakyan, A. Terzyan, V. Stepanyan, N. Zulumyan, H. Beglaryan

Abstract:

Previous studies have shown that using silica hydrogel derived from serpentine minerals (Mg,Fe)₆[Si₄O₁₀](OH)₈ as the source of silicon dioxide in the SiO₂–NaOH–BaCl₂–H₂O system results, via one-hour stirring of the boiling suspension, in the precipitation of intermediates that, on heating up to 800 °C, crystallize into a product composed of barium orthosilicate Ba₂SiO₄ and metasilicate BaSiO₃. Since this precipitation method based on silica hydrogel avoids a number of drawbacks related to tetraethoxysilane Si(OC₂H₅)₄, which is frequently used in sol-gel routes, it was decided to adapt the approach to inserting europium(III) Eu³⁺ ions into the structure of the synthesized compounds. A series of experiments was performed to investigate the evolution of the optical properties of the final samples. Intermediates precipitated in the SiO₂·H₂O (silica hydrogel)–NaOH–BaCl₂–Eu(NO₃)₃ system via stirring for 60 min at room temperature underwent one-hour heat treatment at different temperatures (600-1200 °C). When the silica hydrogel was metered, its SiO₂ content (5.8%) was taken into consideration in order to guarantee molar ratios of both SiO₂ to BaO and SiO₂ to Na₂O equal to 1:2. The BaCl₂ and Eu(NO₃)₃ reagents were weighed so that the formation of the appropriate compositions was guaranteed. A number of samples containing various concentrations of Eu³⁺ ions (1.25, 2.5, 3.75, 5, 6.35, 8.65, 10, 17.5, 18.75 and 20 mol%) have been synthesized by the described method. Luminescence excitation and emission spectra of the final products were recorded on an Agilent Cary Eclipse fluorescence spectrophotometer (scanning rate = 30 nm/min, slit width = 5 nm, voltage = 800 V). X-ray powder diffraction (XRPD) measurements were made on a SmartLab SE diffractometer. Emission spectra recorded for all the samples at an excitation wavelength of 394 nm exhibit peaks centered at around 536, 555, 587, 614, 653, 690 and 702.5 nm. The most intense emission peak is observed at 614 nm due to the ⁵D₀ → ⁷F₂ transition of europium(III) ions. The luminescence intensity achieves its maximum for 17.5 mol% Eu³⁺ and heat treatment at 1200 °C. The XRPD patterns revealed that the diffraction peaks recorded for this sample are identical to NaBa₆Nd(SiO₄)₄ reflections. Since Nd-containing reagents were not involved in the synthesis, the maximum luminescence intensity is most likely conditioned by the formation of NaBa₆Eu(SiO₄)₄, whose reflections are not available in the 2024 ICDD-JCPDS crystallographic database. Up to 2.5 mol% Eu³⁺, the samples demonstrate the phases corresponding to the Ba₂SiO₄ and BaSiO₃ standards. A further increase of the europium(III) concentration in the system leads to NaBa₆Eu(SiO₄)₄ formation along with Ba₂SiO₄ and BaSiO₃. The NaBa₆Eu(SiO₄)₄ share gradually increases, and from 17.5 mol% onwards only the NaBa₆Eu(SiO₄)₄ phase is registered. Thus, varying the europium(III) concentration in the silica hydrogel–NaOH–BaCl₂–Eu(NO₃)₃ system allows the precipitation method to produce products composed of europium(III)-doped Ba₂SiO₄ and BaSiO₃ and/or NaBa₆Eu(SiO₄)₄, distinguished by different luminescent properties. The work was supported by the Science Committee of RA within the framework of research project № 21T-1D131.
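
A back-of-the-envelope metering sketch for the precipitation step is given below (Python), assuming anhydrous BaCl₂ and NaOH as the barium and sodium sources; hydrated reagents would require adjusted molar masses, and the Eu(NO₃)₃ amount, set by the target mol%, is omitted for brevity. Only the 5.8% SiO₂ content of the hydrogel and the 1:2 molar ratios come from the text.

```python
# Reagent metering sketch for SiO2:BaO = SiO2:Na2O = 1:2 (anhydrous reagents assumed).
M_SIO2, M_BACL2, M_NAOH = 60.08, 208.23, 40.00   # molar masses, g/mol
SIO2_IN_HYDROGEL = 0.058                          # mass fraction of SiO2 in the hydrogel

def metering(n_sio2_mol: float):
    """Reagent masses giving the 1:2 ratios for n_sio2_mol of SiO2."""
    hydrogel_g = n_sio2_mol * M_SIO2 / SIO2_IN_HYDROGEL
    bacl2_g = 2 * n_sio2_mol * M_BACL2            # 2 mol BaO per mol SiO2
    naoh_g = 4 * n_sio2_mol * M_NAOH              # 2 mol Na2O -> 4 mol NaOH
    return {"silica hydrogel (g)": round(hydrogel_g, 1),
            "BaCl2 (g)": round(bacl2_g, 2),
            "NaOH (g)": round(naoh_g, 2)}

print(metering(0.01))   # e.g. for 0.01 mol of SiO2
```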

Keywords: europium(III)-doped barium orthosilicate Ba₂SiO₄ and metasilicate BaSiO₃, NaBa₆Eu(SiO₄)₄, luminescence, precipitation method

Procedia PDF Downloads 10