Search results for: fast vessel
419 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City
Authors: Sultan Ahmad Azizi, Gaurang J. Joshi
Abstract:
Urban transportation has come into the limelight in recent times due to deteriorating travel quality. India's economic growth has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems such as organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balanced mode share across travel modes in the absence of the timely introduction of a mass transit system of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which cause significant environmental, safety, and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for transit planning authorities is to design a transit system that may attract people to switch from their existing, rather convenient mode of travel to the transit system, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for studying the likely shift to bus transit. The deterioration of the city's bus-based public transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS.
Estimation of the shift to bus transit indicates that an average of 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport
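As a sketch of the multinomial logit framework the abstract mentions, the choice probabilities for each mode follow from exponentiating systematic utilities and normalizing. The coefficients and travel times/costs below are assumed values for illustration only; the study's SPSS-calibrated coefficients are not reported in the abstract.

```python
import numpy as np

# Hypothetical utility specification: V = ASC + b_time * time + b_cost * cost.
# All numbers here are assumptions, not the study's calibrated values.
modes = ["bus", "two_wheeler", "car", "auto_rickshaw"]
asc = np.array([0.0, 1.2, 0.8, 0.5])       # alternative-specific constants (bus = reference)
b_time, b_cost = -0.05, -0.02              # disutility per minute and per rupee (assumed)

time = np.array([40.0, 25.0, 20.0, 30.0])  # illustrative door-to-door times (min)
cost = np.array([10.0, 15.0, 60.0, 35.0])  # illustrative trip costs (Rs)

v = asc + b_time * time + b_cost * cost    # systematic utilities
p = np.exp(v) / np.exp(v).sum()            # multinomial logit choice probabilities

for m, prob in zip(modes, p):
    print(f"{m}: {prob:.3f}")
```

A mode-shift estimate like the abstract's follows by recomputing these probabilities with improved bus attributes (lower time/cost) and comparing shares.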
Procedia PDF Downloads 260
418 Preliminary Study of Gold Nanostars-Enhanced Filter for Keratitis Microorganism Raman Fingerprint Analysis
Authors: Chi-Chang Lin, Jian-Rong Wu, Jiun-Yan Chiu
Abstract:
Myopia, a ubiquitous condition whose correction by optical lenses is a daily necessity, affects many people. In recent years, younger people have become increasingly interested in contact lenses because of their convenience and aesthetics. Clinically, the risk of eye infection increases with incorrect contact lens use and unsupervised cleaning, which raises the risk of corneal infection, termed ocular keratitis. To meet these identification needs, a new detection or analysis method offering rapid and more accurate identification of clinical microorganisms is needed. In our study, we take advantage of Raman spectroscopy, whose unique fingerprints for different functional groups make it a distinct and fast examination tool for microorganisms. Raman scattering signals are normally too weak for detection, especially in the biological field. Here, we applied special SERS enhancement substrates to generate stronger Raman signals. The SERS filter designed in this article was prepared by depositing silver nanoparticles directly onto a cellulose filter surface, and suspension nanoparticles, gold nanostars (AuNSs), were also introduced to achieve better enhancement for lower-concentration analytes (i.e., various bacteria). The research also focuses on the shape effect of the synthetic AuNSs: a needle-like surface morphology may create more hot-spots, yielding higher SERS enhancement. We used the newly designed SERS technology to distinguish bacteria from ocular keratitis at the strain level, and specific Raman and SERS fingerprints were grouped by a pattern recognition process. We report a new method combining different SERS substrates that can be applied for clinical microorganism detection at the strain level with simple, rapid preparation and low cost.
Our SERS technology not only shows great potential for clinical bacteria detection but can also be used for environmental pollution and food safety analysis.
Keywords: bacteria, gold nanostars, Raman spectroscopy, surface-enhanced Raman scattering filter
Procedia PDF Downloads 167
417 Glyco-Biosensing as a Novel Tool for Prostate Cancer Early-Stage Diagnosis
Authors: Pavel Damborsky, Martina Zamorova, Jaroslav Katrlik
Abstract:
Prostate cancer is annually the most common newly diagnosed cancer among men. An extensive body of evidence suggests that the traditional serum prostate-specific antigen (PSA) assay still suffers from a lack of sufficient specificity and sensitivity, resulting in vast over-diagnosis and overtreatment. Thus, early-stage detection of prostate cancer (PCa) undisputedly plays a critical role in successful treatment and improved quality of life. Over the last decade, particular altered glycans have been described that are associated with a range of chronic diseases, including cancer and inflammation. These glycan differences enable a distinction to be made between physiological and pathological states and suggest a valuable biosensing tool for diagnosis and follow-up purposes. Aberrant glycosylation is one of the major characteristics of disease progression. Consequently, the aim of this study was to develop a more reliable tool for early-stage PCa diagnosis employing lectins as glyco-recognition elements. Biosensor and biochip technology using lectin-based glyco-profiling is one of the most promising strategies for fast and efficient analysis of glycoproteins. Proof-of-concept experiments based on a sandwich assay employing an anti-PSA antibody and an aptamer as capture molecules, followed by lectin glycoprofiling, were performed. We present a lectin-based biosensing assay for glycoprofiling of the serum biomarker PSA using different biosensor and biochip platforms such as label-free surface plasmon resonance (SPR) and fluorescently labeled microarrays. The results suggest significant differences in the interaction of particular lectins with PSA. Antibody-based assays are frequently associated with sensitivity, reproducibility, and cross-reactivity issues. Aptamers provide remarkable advantages over antibodies due to their nucleic acid origin, stability, and lack of glycosylation.
All these data are a further step towards the construction of highly selective, sensitive, and reliable sensors for early-stage diagnosis. The experimental set-up also holds promise for the development of comparable assays for other glycosylated disease biomarkers.
Keywords: biomarker, glycosylation, lectin, prostate cancer
Procedia PDF Downloads 406
416 Application of Neutron-Gamma Technologies for Soil Elemental Content Determination and Mapping
Authors: G. Yakubova, A. Kavetskiy, S. A. Prior, H. A. Torbert
Abstract:
In-situ soil carbon determination over large soil surface areas (several hectares) is required with regard to carbon sequestration and carbon credit issues. This capability is important for optimizing modern agricultural practices and enhancing soil science knowledge. Collecting and processing representative field soil cores for traditional laboratory chemical analysis is labor-intensive and time-consuming. The neutron-stimulated gamma analysis method can be used for in-situ measurements of primary elements in agricultural soils (e.g., Si, Al, O, C, Fe, and H). This non-destructive method can assess several elements in large soil volumes with no need for sample preparation. Neutron-gamma soil elemental analysis utilizes gamma rays emitted from different neutron-nucleus interactions. This has become possible due to the availability of commercial portable pulsed neutron generators, high-efficiency gamma detectors, reliable electronics, and measurement/data processing software, complemented by advances in state-of-the-art nuclear physics methods. In Pulsed Fast Thermal Neutron Analysis (PFTNA), soil irradiation is accomplished using a pulsed neutron flux, and gamma spectra acquisition occurs both during and between pulses. This allows the inelastic neutron scattering (INS) gamma spectrum to be separated from the thermal neutron capture (TNC) spectrum. Based on PFTNA, a mobile system for field-scale soil elemental determination (primarily carbon) was developed and constructed. Our scanning methodology acquires data that can be directly used for creating soil elemental distribution maps (based on ArcGIS software) in a reasonable timeframe (~20-30 hectares per working day). The resulting maps are suitable for both agricultural purposes and carbon sequestration estimates.
The measurement system design, spectra acquisition process, strategy for acquiring field-scale carbon content data, and mapping of agricultural fields will be discussed.
Keywords: neutron gamma analysis, soil elemental content, carbon sequestration, carbon credit, soil gamma spectroscopy, portable neutron generators, ArcMap mapping
Procedia PDF Downloads 90
415 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids
Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho
Abstract:
In this paper, the thermo-fluid dynamics of mixed convection (natural and forced convection) and the principles of turbulent flow around complex geometries have been studied. In these applications, it was necessary to analyze the interaction between the flow field and a heated immersed body with constant surface temperature. This paper presents a study of a Newtonian, incompressible, two-dimensional fluid around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code proposed for all simulations handles the calculation of temperature with Dirichlet boundary conditions. Important dimensionless quantities are calculated: the Strouhal number, obtained using the Fast Fourier Transform (FFT), the Nusselt number, drag and lift coefficients, velocity, and pressure. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method used to couple the calculation of pressure, velocity, and temperature. For the simulation of turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first model is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses only one transport equation for the turbulent viscosity. The results were compared with numerical data, validating the effect of heat transfer together with the turbulence models. The IBM/VPM is a powerful tool for simulating flow around complex geometries.
The results showed good numerical convergence in relation to the adopted references.
Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model
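The FFT-based Strouhal number extraction the abstract describes can be sketched as follows: the dominant vortex-shedding frequency is taken as the peak of the lift-coefficient spectrum, then non-dimensionalized as St = fD/U. The signal, characteristic length, and free-stream velocity below are assumed values, not the authors' data.

```python
import numpy as np

# Sketch only: an idealized lift-coefficient history standing in for
# simulation output. Shedding frequency, D, and U are assumptions.
dt = 0.01                              # sampling time step (s)
t = np.arange(0, 20, dt)               # 20 s time window, 2000 samples
f_shed = 2.5                           # assumed shedding frequency (Hz)
cl = np.sin(2 * np.pi * f_shed * t)    # idealized lift-coefficient signal

# Peak of the one-sided amplitude spectrum gives the dominant frequency.
spectrum = np.abs(np.fft.rfft(cl - cl.mean()))
freqs = np.fft.rfftfreq(len(cl), dt)
f_peak = freqs[np.argmax(spectrum)]

D, U = 1.0, 10.0                       # characteristic length (m) and velocity (m/s), assumed
St = f_peak * D / U                    # Strouhal number
print(f"f_peak = {f_peak:.2f} Hz, St = {St:.3f}")
```

In practice the lift coefficient would come from the IBM/VPM solver's force history rather than a synthetic sinusoid.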
Procedia PDF Downloads 115
414 Importance of an E-Learning Program in Stress Field for Postgraduate Courses of Doctors
Authors: Ramona-Niculina Jurcau, Ioana-Marieta Jurcau
Abstract:
Background: Preparing in the stress field (SF) is increasingly a concern for doctors of different specialties. Aims: The aim was to evaluate the importance of an e-learning program in SF for doctors' postgraduate courses. Methods: Doctors (n = 40 male, 40 female) of different specialties and ages (31-71 years), who attended postgraduate courses in SF, voluntarily responded to a questionnaire that included the following themes: the importance of SF courses for the specialty practiced by each respondent doctor (using a visual analogue scale, VAS); which SF themes would be suitable as e-learning (EL); the preferred form of SF information assimilation: classical lectures (CL), EL, or a combination of these methods (CL+EL); which aspects of the SF course are facilitated by the EL model versus CL; and, in their view, the first four advantages and the first four disadvantages of EL compared to CL for SF. Results: To most respondents, the SF courses are important for the specialty they practice (VAS average of 4). The SF themes suggested for EL were: stress mechanisms; stress factor models for different medical specialties; stress assessment methods; and primary stress management methods for different specialties. The preferred form of information assimilation was CL+EL. Aspects of the course facilitated by the EL model versus CL: active reading of theoretical information, with fast access to keyword details; watching documentaries in everyone's preferred order; and practice through tests with rapid checking of results. The first four EL advantages mentioned for SF were: autonomy in managing the time allocated to study; saving the time spent traveling to the venue; the ability to read information in various contexts of time and space; and communication with colleagues at times convenient for everyone.
The EL disadvantages mentioned for SF were: reduced capability for group discussion and mobilization for active participation; dependence of EL information access on an electrical source and/or the Internet; and a possible slowdown in learning due to the temptation to postpone implementation. Answers were partially influenced by the respondent's age and gender. Conclusions: 1) Postgraduate courses in SF are of interest to doctors of different specialties. 2) The majority of participating doctors preferred EL, but combined with CL (CL+EL). 3) Preference for EL was manifested mainly by young or middle-aged male doctors. 4) It is important to find the right formula for the chosen EL so that it is maximally efficient, interesting, useful, and agreeable.
Keywords: stress field, doctors' postgraduate courses, classical lectures, e-learning lecture
Procedia PDF Downloads 238
413 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality
Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas
Abstract:
Complex DNA damage, consisting of a combination of DNA lesions such as double strand breaks (DSBs) and non-DSB base lesions occurring in a small volume, is considered one of the most important biological endpoints of ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulations) and experimental evidence suggests an increase in the complexity of DNA damage, and therefore in repair resistance, with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often hampered by technical deficiencies limiting its measurement, especially in cellular or tissue systems. Our groups have recently made significant improvements towards identifying key parameters for the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ- and X-rays 0.3-1 keV/μm, α-particles 116 keV/μm, and 36Ar ions 270 keV/μm). The induction and processing of DSB and non-DSB oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes, and the human repair enzymes OGG1 or APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, relative biological effectiveness (RBE) values for DSB and non-DSB induction have been measured in different human normal (FEP18-11-T1) and cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared to simulation data obtained using a validated microdosimetric fast Monte Carlo DNA damage simulation code (MCDS). Moreover, this simulation approach is applied to two realistic clinical cases, i.e., prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam. RBE values for complex DNA damage induction are calculated for the tumor areas.
These results reveal a disparity between theory and experiment and underline the necessity of implementing highly precise and more efficient experimental and simulation approaches.
Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy
Procedia PDF Downloads 325
412 Determination of Circulating Tumor Cells in Breast Cancer Patients by Electrochemical Biosensor
Authors: Gökçe Erdemir, İlhan Yaylım, Serap Erdem-Kuruca, Musa Mutlu Can
Abstract:
It has been established that the main cause of death in cancer is metastasis rather than the primary tumor. The cells that leave the primary tumor, enter the circulation, and cause metastases in secondary organs are called circulating tumor cells (CTCs). The presence and number of circulating tumor cells have been associated with poor prognosis in many major types of cancer, including breast, prostate, and colorectal cancer. Knowledge of circulating tumor cells, which are seen as the main cause of cancer-related deaths through metastasis, is thought to play a key role in the diagnosis and treatment of cancer. The facts that tissue biopsies used in cancer diagnosis and follow-up are invasive and that they are insufficient for understanding metastasis risk and disease progression have led to new approaches. Liquid biopsy tests performed on a small blood sample taken from the patient for the detection of CTCs are easy and reliable, and also allow more than one sample to be taken over time to follow the prognosis. However, since these cells are found in very small numbers in the blood, capturing them is very difficult, and specially designed analytical techniques and devices are required. Methods based on the biological and physical properties of the cells are used to capture them in the blood. Early diagnosis is very important in following the prognosis of tumors of epithelial origin such as breast, lung, colon, and prostate. Molecules such as EpCAM, vimentin, and cytokeratins are expressed on the surface of the few cells that pass from the primary tumor into the circulation and reach secondary organs, and they are used in the early-stage diagnosis of cancer. For example, increased EpCAM expression in breast and prostate cancer has been associated with prognosis. These molecules can be determined in blood or other body fluids taken from patients.
However, more sensitive methods are required to determine them when they are at low levels during the course of the disease. The aim is to detect these molecules, found in very few cancer cells, with the help of sensitive, fast biosensors, first in breast cancer cells grown in vitro and then in blood samples taken from breast cancer patients. In this way, cancer can be diagnosed early and treated easily and effectively.
Keywords: electrochemical biosensors, breast cancer, circulating tumor cells, EpCAM, vimentin, cytokeratins
Procedia PDF Downloads 261
411 Psychological Aspects of Quality of Life in Patients with Primary and Metastatic Bone Tumors
Authors: O. Yu Shchelkova, E. B. Usmanova
Abstract:
Introduction: In recent decades, scientific research on quality of life (QoL) has been developing fast worldwide. The QoL concept pays attention to patients' emotional experience of their disease, particularly their personal sense of being able to satisfy actual needs and to function fully in society in spite of the disease's limitations. QoL in oncological patients is studied intensively. Nevertheless, the issue of QoL in patients with bone tumors, focused on the psychological factors of QoL and the disease's impact on QoL, has received little discussion. The aim of the study was to reveal the basic aspects and personality factors of QoL in patients with bone tumors. Results: Study participants were 139 patients with bone tumors. The diagnoses were osteosarcoma (n=42), giant cell tumor (n=32), chondrosarcoma (n=32), Ewing sarcoma (n=10), and bone metastases (n=23). The study revealed that patients with bone metastases assess their health significantly worse than other patients. Patients with osteosarcoma also evaluate their general health higher than patients with giant cell tumors. Social functioning in patients with chondrosarcoma is higher than in patients with bone metastases and patients with giant cell tumors. Patients with chondrosarcoma have higher physical functioning and are less restricted in daily activities than patients with bone metastases. Patients with bone metastases describe their pain as more widespread than patients with primary bone tumors and have more functional restrictions due to the bone lesion. Moreover, the study revealed a significant influence of personality on QoL related to bone tumors. Personality characteristics such as a high degree of self-consciousness, personal resources, cooperation, and a disposition to positive reappraisal in difficult situations correspond to higher QoL.
Conversely, low personal resources, weak problem-solving behavior, a low degree of self-consciousness, and high social dependence correspond to a decrease in QoL in patients with bone tumors. Conclusion: Patients with bone metastases have lower QoL than patients with primary bone tumors. Patients with giant cell tumors have the worst quality of life among patients with primary bone tumors. Furthermore, the results revealed differences in QoL parameters associated with personality characteristics in patients with bone tumors. Psychological factors such as future goals, interest in life, and emotional saturation, besides a high degree of personal resources and cooperation, influence increasing QoL in patients with bone tumors.
Keywords: quality of life, psychological factors, bone tumor, personality
Procedia PDF Downloads 140
410 Exploring the Potential of Modular Housing Designs for the Emergency Housing Need in Türkiye after the February Earthquake in 2023
Authors: Hailemikael Negussie, Sebla Arın Ensarioğlu
Abstract:
In February 2023, Southeastern Türkiye and Northwestern Syria were hit by two consecutive high-magnitude earthquakes, leaving thousands dead and thousands more homeless. The housing crisis in the affected areas calls for a fast and qualified solution. One option is the use of modular designs to rebuild the affected cities. Modular designs are prefabricated building components that can be quickly and efficiently assembled on-site, making them ideal for building structures faster and with higher quality. These structures are flexible, adaptable, and can be customized to meet the specific needs of the inhabitants, in addition to being more energy-efficient and sustainable. Prefabrication also ensures that the quality of the products can be easily controlled. The collapse of most buildings during the earthquakes was attributed to a lack of quality control during the construction stage, and using modular designs allows greater control over the quality of the construction materials. Applying modular designs to a project of this scale presents some challenges, including the high upfront cost of designing and manufacturing components. However, if implemented correctly, modular designs can offer an effective and efficient solution to the urgent housing needs. The aim of this paper is to explore the potential of modular housing for mid- and long-term earthquake-resistant housing in the affected disaster zones after the earthquakes of February 2023. Within the scope of this paper, the adaptability of modular, prefabricated housing designs to the post-disaster environment and the advantages and disadvantages of this system will be examined. Elements such as the current conditions of the region where the destruction happened, climatic data, and topographic factors will be considered.
Additionally, the paper will examine examples of similar local and international modular post-earthquake housing projects. The region is projected to enter a rapid reconstruction phase in the coming period. Therefore, this paper will present a proposal for a system that can be used to produce safe and healthy urbanization policies, meeting the housing needs of people in the affected regions without causing new grievances.
Keywords: post-disaster housing, earthquake-resistant design, modular design, housing, Türkiye
Procedia PDF Downloads 88
409 Understanding the Processwise Entropy Framework in a Heat-Powered Cooling Cycle
Authors: P. R. Chauhan, S. K. Tyagi
Abstract:
Adsorption refrigeration technology offers a sustainable and energy-efficient cooling alternative to traditional refrigeration technologies for meeting fast-growing cooling demands. With its ability to use natural refrigerants, low-grade heat sources, and modular configurations, it has the potential to revolutionize the cooling industry. Despite these benefits, the commercial viability of this technology is hampered by several fundamental constraints, including its large size, low uptake capacity, and poor performance resulting from deficient heat and mass transfer characteristics. The deficient heat and mass transfer characteristics and the magnitude of exergy loss in the various real processes of an adsorption cooling system can be assessed by entropy generation rate analysis, i.e., the second law of thermodynamics. Therefore, this article presents a second-law-based investigation in terms of the entropy generation rate (EGR) to identify the energy losses in the various processes of the HPCC-based adsorption system using MATLAB R2021b software. The adsorption-based cooling system consists of two beds made of silica gel and arranged in a single stage, while water is employed as refrigerant, coolant, and hot fluid. The variation in process-wise EGR is examined over the cycle time, and a comparative analysis is also presented. Moreover, the EGR is also evaluated for the external units, i.e., the heat source and heat sink units used for regeneration and heat rejection, respectively. The research findings revealed that the combination of adsorber and desorber, which operates across heat reservoirs with a higher temperature gradient, accounts for more than half of the total EGR. Moreover, the EGR caused by the heat transfer process is the highest, followed by those of the heat sink, heat source, and mass transfer, respectively.
In the case of the heat transfer process, the operation of the valve is responsible for more than half (54.9%) of the overall EGR during heat transfer. The combined contribution of the external units, the source (18.03%) and the sink (21.55%), to the total EGR is 39.58%. The analysis and findings of the present research are expected to pinpoint the sources of energy waste in HPCC-based adsorption cooling systems.
Keywords: adsorption cooling cycle, heat transfer, mass transfer, entropy generation, silica gel-water
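The core quantity in this kind of second-law analysis, the entropy generation rate for steady heat transfer across a finite temperature difference, can be sketched as S_gen = Q(1/T_cold - 1/T_hot). The heat rate and temperatures below are assumed values for illustration, not figures from the study.

```python
# Minimal sketch of the entropy generation rate (EGR) for steady heat
# transfer between two reservoirs, the dominant loss mechanism the
# abstract identifies. All numerical inputs are assumptions.
def egr_heat_transfer(q_watts: float, t_hot_k: float, t_cold_k: float) -> float:
    """S_gen = Q * (1/T_cold - 1/T_hot), in W/K; positive for T_hot > T_cold."""
    return q_watts * (1.0 / t_cold_k - 1.0 / t_hot_k)

# Example: a desorber bed near 333 K heated by a 363 K source (assumed):
s_gen = egr_heat_transfer(q_watts=500.0, t_hot_k=363.0, t_cold_k=333.0)
print(f"EGR = {s_gen:.4f} W/K")
```

Summing such terms over each process of the cycle (adsorption, desorption, valve operation, external source and sink) gives the process-wise EGR breakdown the abstract reports.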
Procedia PDF Downloads 107
408 Modelling of Phase Transformation Kinetics in Post Heat-Treated Resistance Spot Weld of AISI 1010 Mild Steel
Authors: B. V. Feujofack Kemda, N. Barka, M. Jahazi, D. Osmani
Abstract:
Automobile manufacturers are constantly seeking ways to reduce the weight of car bodies. The use of several steel grades in auto body assembly has been found to be a good technique for lightening vehicle weight. In recent years, the use of dual phase (DP) steels, transformation induced plasticity (TRIP) steels, and boron steels in some parts of the auto body has become a necessity because of their light weight. However, these steels are martensitic: when they undergo a fast heat treatment, the resulting microstructure is essentially made of martensite. Resistance spot welding (RSW), one of the techniques most used in assembling auto bodies, becomes problematic for these steels. Since RSW is a process in which steel is heated and cooled in a very short period of time, the resulting weld nugget is mostly fully martensitic, especially for DP, TRIP, and boron steels, but this also holds for plain carbon steels such as the AISI 1010 grade, which is extensively used in auto body inner parts. Martensite, in turn, must be avoided as much as possible when welding steel because it is the principal source of brittleness and it weakens the weld nugget. Thus, this work aims to find a means to reduce the martensite fraction in the weld nugget when using RSW for assembly. The phase transformation kinetics during RSW were predicted through modelling of the whole welding process, and a technique called post weld heat treatment (PWHT) was applied in order to reduce the martensite fraction in the weld nugget. Simulation was performed for the AISI 1010 grade, and results show that the application of PWHT leads to the formation of not only martensite but also ferrite, bainite, and pearlite during the cooling of the weld nugget. Welding experiments were performed in parallel, and micrographic analyses show the presence of several phases in the weld nugget.
The experimental weld geometry and phase proportions are in good agreement with the simulation results, demonstrating the validity of the model.
Keywords: resistance spot welding, AISI 1010, modeling, post weld heat treatment, phase transformation, kinetics
Procedia PDF Downloads 118
407 Numerical Modelling of Wind Dispersal of Seeds of the Bromeliad Tillandsia recurvata L. (L.) Attached to Electric Power Lines
Authors: Bruna P. De Souza, Ricardo C. De Almeida
Abstract:
In some cities in the State of Paraná, Brazil, and in other countries, atmospheric bromeliads (Tillandsia spp., Bromeliaceae) are considered weeds on trees, electric power lines, satellite dishes, and other artificial supports. In this study, a numerical model was developed to simulate the wind dispersal of seeds of the Tillandsia recurvata species, with the objective of evaluating seed displacement in the city of Ponta Grossa, PR, Brazil, since the region is considered to be already infested. The model simulates the dispersal of each individual seed, integrating parameters from the atmospheric boundary layer (ABL) and the local wind, simulated by the Weather Research and Forecasting (WRF) mesoscale atmospheric model for the 2012 to 2015 period. The dispersal model also incorporates the approximate number of bromeliads and source-height data collected from the most infested electric power lines. The seeds' terminal velocity, an important input that was not available in the literature, was measured in an experiment with fifty-one seeds of Tillandsia recurvata. Wind is the main dispersal agent acting on plumed seeds, whereas atmospheric turbulence is a determining factor in transporting seeds to distances beyond 200 meters as well as in introducing random variability into the seed dispersal process. Such variability was added to the model through the application of an inverse Fast Fourier Transform to the energy spectra of the wind velocity components, based on boundary-layer meteorology theory and estimated from micrometeorological parameters produced by the WRF model. Seasonal and annual wind means were obtained from the surface wind data simulated by WRF for Ponta Grossa. The mean wind direction is assumed to be the most probable direction of a bromeliad seed trajectory. Moreover, the atmospheric turbulence effect and dispersal distances were analyzed in order to identify likely regions of infestation around the Ponta Grossa urban area.
It is important to mention that this model could be applied to any species and locality, as long as the seed's biological data and the meteorological data for the region of interest are available.
Keywords: atmospheric turbulence, bromeliad, numerical model, seed dispersal, terminal velocity, wind
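The turbulence step described above — synthesising wind-velocity fluctuations from an energy spectrum via an Inverse Fast Fourier Transform, then advecting a seed falling at its terminal velocity — can be sketched as follows. The spectrum shape, parameter values and seed properties here are illustrative assumptions, not those of the study:

```python
import numpy as np

def synth_fluctuations(n, dt, sigma_u, length_scale, mean_u, seed=0):
    """Synthesise a zero-mean turbulent velocity series whose one-sided
    energy spectrum follows an assumed von Karman-like shape, via inverse FFT."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, dt)
    # Illustrative spectrum shape (not the paper's exact model)
    spectrum = sigma_u**2 * length_scale / mean_u \
        / (1.0 + (freqs * length_scale / mean_u)**2)**(5.0 / 6.0)
    coeffs = np.sqrt(spectrum) * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    coeffs[0] = 0.0                      # zero DC component -> zero-mean fluctuation
    u = np.fft.irfft(coeffs, n=n)
    u *= sigma_u / u.std()               # rescale to the target standard deviation
    return u

def seed_travel_distance(height, v_terminal, mean_u, fluct, dt):
    """Integrate one plumed-seed trajectory (mean wind + fluctuation in x,
    terminal-velocity fall in z) until it reaches the ground or data runs out."""
    x, z, i = 0.0, height, 0
    while z > 0 and i < fluct.size:
        x += (mean_u + fluct[i]) * dt
        z -= v_terminal * dt
        i += 1
    return x

fluct = synth_fluctuations(4096, 0.01, 0.8, 30.0, 3.0)
print(seed_travel_distance(15.0, 0.5, 3.0, fluct, 0.01))  # dispersal distance (m) for one seed
```

With the fluctuations zeroed out, the travel distance reduces to the ballistic estimate mean_u × height / v_terminal, which makes the turbulence contribution easy to isolate.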
Procedia PDF Downloads 141
406 Evaluation of the Phenolic Composition of Curcumin from Different Turmeric (Curcuma longa L.) Extracts: A Comprehensive Study Based on Chemical Turmeric Extract, Turmeric Tea and Fresh Turmeric Juice
Authors: Beyza Sukran Isik, Gokce Altin, Ipek Yalcinkaya, Evren Demircan, Asli Can Karaca, Beraat Ozcelik
Abstract:
Turmeric (Curcuma longa L.) is used as a food additive (spice), preservative and coloring agent in Asian countries, including China and South East Asia. It is also considered a medicinal plant. Traditional Indian medicine uses turmeric powder for the treatment of biliary disorders, rheumatism, and sinusitis. It has rich polyphenol content. Turmeric owes its yellow color mainly to the presence of three major pigments: curcumin (1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione), demethoxycurcumin and bisdemethoxycurcumin. These curcuminoids are recognized to have high antioxidant activities. Curcumin is the major constituent of Curcuma species. Method: To prepare turmeric tea, 0.5 gram of turmeric powder was brewed with 250 ml of water at 90°C for 10 minutes. 500 grams of fresh turmeric were washed and shelled prior to squeezing. Both turmeric tea and turmeric juice were passed through 45 μm filters and stored at -20°C in the dark for further analyses. Curcumin was extracted from 20 grams of turmeric powder with 70 ml of ethanol solution (95:5 ethanol/water v/v) in a water bath at 80°C for 6 hours. Extraction was continued for a further 2 hours after the addition of 30 ml of ethanol. Ethanol was removed by rotary evaporator. The remaining extract was stored at -20°C in the dark. Total phenolic content and phenolic profile were determined by spectrophotometric analysis and ultra-fast liquid chromatography (UFLC), respectively. Results: The total phenolic contents of the ethanolic extract of turmeric, turmeric juice, and turmeric tea were determined as 50.72, 31.76 and 29.68 ppt, respectively. The ethanolic extract of turmeric, turmeric juice, and turmeric tea were injected into the UFLC and analyzed for curcumin content. The curcumin contents of the ethanolic extract of turmeric, turmeric juice, and turmeric tea were 4067.4 ppm, 156.7 ppm and 1.1 ppm, respectively. Significance: Turmeric is known as a good source of curcumin. 
According to the results, it can be stated that turmeric tea is not a sufficient route for curcumin consumption. Turmeric juice can be preferred over turmeric tea for its higher curcumin content. The ethanolic extract of turmeric showed the highest curcumin content in both spectrophotometric and chromatographic analyses. Nonpolar solvents, and carriers with polar binding sites, should be considered for curcumin delivery due to its nonpolar nature.
Keywords: phenolic compounds, spectrophotometry, turmeric, UFLC
Procedia PDF Downloads 200
405 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects
Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi
Abstract:
Hybridization crosses play a vital role in breeding experiments for evaluating the combining abilities of individual parental lines or crosses, in order to create lines with desirable qualities. There are various ways of obtaining progenies and further studying the combining ability effects of the lines taken into a breeding programme. Some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders improve quantitative traits which are of economic as well as nutritional importance in crops and animals. Amongst these methods, the tetra-allele cross provides extra information in terms of the higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering mechanisms because of their broad genetic base. Most common commercial hybrids in corn are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs having fast growth rate, good feed efficiency, and carcass quality. Tetra-allele crosses are also widely used for the exploitation of heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature. Optimality of designs has also been considered a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information regarding specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with the problem of various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca has been defined. 
Optimality aspects of such designs have been discussed, incorporating sca effects in the model. Orthogonality conditions have been derived for block designs, ensuring that contrasts among the gca effects can be estimated independently of sca effects after eliminating the nuisance factors. A user-friendly SAS macro and web solution (webPTC) have been developed for the generation and analysis of such designs.
Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC
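As a toy illustration of the scale of four-way crossing described above: each choice of four parental lines can be paired into two single crosses in exactly 3 ways, so n lines yield 3·C(n, 4) distinct tetra-allele crosses (ignoring parental order). A sketch of the enumeration — the line names are placeholders:

```python
from itertools import combinations

def tetra_allele_crosses(lines):
    """Enumerate distinct four-way crosses (A x B) x (C x D), ignoring the
    order of parents within a single cross and the order of the two crosses."""
    crosses = []
    for a, b, c, d in combinations(lines, 4):
        # the three distinct pairings of four lines into two single crosses
        for pair in (((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))):
            crosses.append(pair)
    return crosses

crosses = tetra_allele_crosses(["P1", "P2", "P3", "P4", "P5"])
print(len(crosses))  # 3 * C(5, 4) = 15
```

The rapid growth of this count with n is one reason the design (which subset of crosses to actually evaluate, in which blocks) matters.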
Procedia PDF Downloads 137
404 An Adaptive Conversational AI Approach for Self-Learning
Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo
Abstract:
In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to the deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case outside a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it pauses to self-measure its performance and rethink every unknown dialog flow, improving the service by retraining on those new business cases. If the training process reaches a bottleneck and incurs difficulties, human personnel are informed and given further instructions. They may retrain the chatbot with newly configured programs, or with new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. 
With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge making polite conversation with visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.
Keywords: conversational AI, chatbot, dialog management, semantic analysis
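The deliver / self-measure / retrain-or-escalate cycle described above can be caricatured as a loop. The toy substring matcher, the coverage threshold, and the escalation rule below are illustrative assumptions, not the authors' implementation:

```python
def run_cycle(known_intents, dialogs, escalate_threshold=0.5):
    """One service cycle: answer what we can, collect unknown dialogs, then
    either self-train on them or escalate to a human operator."""
    # a dialog is 'known' if any known intent keyword appears in it
    unknown = [d for d in dialogs if not any(k in d for k in known_intents)]
    coverage = 1.0 - len(unknown) / len(dialogs)
    if unknown and coverage >= escalate_threshold:
        # self-learning: fold leading words of unknown dialogs into the intent set
        known_intents = known_intents | {u.split()[0] for u in unknown}
        return known_intents, coverage, "retrained"
    if unknown:
        # too many failures in one cycle: hand over to human personnel
        return known_intents, coverage, "escalate to human"
    return known_intents, coverage, "ok"

intents = {"hours", "location"}
intents, coverage, status = run_cycle(intents, ["opening hours?", "parking fees"])
print(status, sorted(intents))  # the unknown "parking" dialog is folded in
```

A real system would replace the substring matcher with the paper's combined semantic/deep-learning classifier; the point of the sketch is the control flow of the cycle.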
Procedia PDF Downloads 136
403 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis
Authors: Sara Segura, Diego Nuñez, Miryam Villamil
Abstract:
In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system that represents medium and small muscular arteries. This artery system had channels ranging from 75 μm to 930 μm and was fabricated from glass and transparent polymer blends such as phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, poly(ethylene glycol) and PDMS, so that it could be monitored in real time. The setup was driven by a computer-controlled precision micropump and observed with a high-resolution optical microscope capable of tracking fluids at fast capture rates. Observation and analysis were performed using real-time software that reconstructs the fluid dynamics, determining flow velocity, injection dependency, turbulence and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances like water, serum (0.9% sodium chloride and electrolyte with a ratio of 4 ppm) and blood cells were studied at resolutions as fine as 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations allow us to understand the fluid and mixing behavior of the substance of interest in the blood stream and shed light on the use of implantable devices for drug delivery in arteries with different endothelial dysfunctions. Several substances were tested using the artificial artery system. Initially, Milli-Q water was used as a control substance to study the basic fluid dynamics of the artificial artery system. Then, serum and other low-viscosity substances were pumped into the system in the presence of other liquids to study the mixing profiles and behaviors. Finally, mammal blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and time rates were evaluated to determine the optimal mixing conditions. 
Our results suggest the use of a finely controlled microinjection, at an approximate rate of 135,000 μm³/s, for better mixing profiles in the administration of drugs inside arteries.
Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis
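Whether flow in such channels is laminar can be checked with the channel Reynolds number, Re = ρvD/μ. The flow speed and fluid properties below are illustrative values for water at room temperature, not measurements from the study:

```python
def reynolds(density, velocity, diameter, viscosity):
    """Channel Reynolds number Re = rho * v * D / mu (dimensionless)."""
    return density * velocity * diameter / viscosity

# Water (~20 C: rho = 998 kg/m^3, mu = 1.0e-3 Pa*s) at an assumed 1 mm/s
# flow speed, in the smallest (75 um) and largest (930 um) channels.
re_small = reynolds(998.0, 1e-3, 75e-6, 1.0e-3)
re_large = reynolds(998.0, 1e-3, 930e-6, 1.0e-3)
print(re_small, re_large)  # both orders of magnitude below the ~2300 turbulence threshold
```

Even at flow speeds hundreds of times higher, Re stays well under the laminar-turbulent transition, which is why microchannel flow in this size range is safely laminar.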
Procedia PDF Downloads 294
402 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal
Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das
Abstract:
The rapid depletion of high-grade iron ore (Fe2O3) has drawn attention to the use of other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore with high titania content (23.23% TiO2 in this case). Due to its high TiO2 content and high density, TMO cannot be treated by conventional smelting reduction. In the present work, the TMO has been collected from the high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex situated in the eastern part of India (Shaltora area, Bankura district, West Bengal), and the hematite ore has been collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam. At VSP, iron ore is received from the Bailadila mines, Chattisgarh, of M/s National Mineral Development Corporation. The preliminary characterization of TMO and hematite ore (HMO) has been investigated by WDXRF, XRD and FESEM analyses. Similarly, good-quality coal (mainly coking coal) is also being depleted fast. The basic purpose of this work is to find out how lean-grade coal can be utilised along with TMO in smelting to produce pig iron. The lean-grade coal has been characterised using TG/DTA, proximate and ultimate analyses. The boiler-grade coal has been found to contain 28.08% fixed carbon and 28.31% volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) have been separately agglomerated with lean-grade coal fines (below 75 μm) in the form of briquettes, using binders like bentonite and molasses. These green briquettes are dried first in an oven at 423 K for 30 min and then reduced isothermally in a tube furnace at 1323 K, 1373 K and 1423 K for 30 min and 60 min. After reduction, the reduced briquettes are characterized by XRD and FESEM analyses. The best reduced TMO and HMO samples are taken and blended in three different weight ratios of 1:4, 1:8 and 1:12 of TMO:HMO. 
The chemical analysis of the three blended samples is carried out, and the degree of metallisation of iron is found to be 89.38%, 92.12% and 93.12%, respectively. These three blended samples are briquetted using binders like bentonite and lime. Thereafter, these blended briquettes are separately smelted in a raising hearth furnace at 1773 K for 30 min. The pig iron formed is characterized using XRD and microscopic analysis. It can be concluded that a 90% yield of pig iron can be achieved when the blend ratio of TMO:HMO is 1:4.5. This means that, for a 90% yield, the maximum TMO that could be used in the blend is about 18%.
Keywords: briquetting reduction, lean grade coal, smelting reduction, TMO
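The closing figure — that a 1:4.5 TMO:HMO blend caps TMO at about 18% — is a simple mass-fraction calculation, which can be sketched as:

```python
def tmo_fraction(tmo_parts, hmo_parts):
    """Mass fraction of TMO in a TMO:HMO blend, as a percentage."""
    return 100.0 * tmo_parts / (tmo_parts + hmo_parts)

print(round(tmo_fraction(1, 4.5), 1))  # 18.2 -> "about 18%", as stated
print(tmo_fraction(1, 4))              # 20.0, the richest blend studied (1:4)
```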
Procedia PDF Downloads 319
401 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project
Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson
Abstract:
The Site C Hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (Vibrating Wire Piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential for monitoring the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model, called the Bedding Plane Elevation Prediction (BPEP) model, to help geologists and geotechnical engineers predict geological features and bedding planes at new locations in a fast and accurate manner. To develop the CBR model, a database was built from 64 pressure sensors already installed on key bedding planes BP25, BP28, and BP31 on the Right Bank, including bedding plane elevations and coordinates. Thirteen (20%) of the most recent cases were selected to validate and evaluate the accuracy of the developed model, with similarity defined as the distance between previous cases and recent cases when predicting the depth of significant BPs. The average difference between actual and predicted BP elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, while 100% of predicted elevations for new cases were within the ±99 cm range. 
Eventually, the actual results will be used to expand the database and improve BPEP, so that it performs as a learning machine predicting more accurate BP elevations for future sensor installations.
Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction
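The distance-based case retrieval described above — predict the elevation at a new location from the most similar (nearest) installed sensors — can be sketched minimally. The paper defines similarity as distance; the inverse-distance weighting, the choice of k, and the coordinates below are illustrative assumptions, not the BPEP model itself:

```python
import math

def predict_elevation(cases, x, y, k=3):
    """Inverse-distance-weighted elevation from the k nearest past cases.
    Each case is a tuple (x, y, elevation)."""
    ranked = sorted(cases, key=lambda c: math.hypot(c[0] - x, c[1] - y))
    nearest = ranked[:k]
    if nearest[0][:2] == (x, y):          # exact spatial match: reuse that case
        return nearest[0][2]
    weights = [1.0 / math.hypot(cx - x, cy - y) for cx, cy, _ in nearest]
    num = sum(w * c[2] for w, c in zip(weights, nearest))
    return num / sum(weights)

cases = [(0.0, 0.0, 100.0), (10.0, 0.0, 102.0), (0.0, 10.0, 98.0)]
print(predict_elevation(cases, 5.0, 0.0, k=2))  # midway between the two nearest cases
```

As new sensors are installed, their (x, y, elevation) records are simply appended to `cases`, which is the "learning machine" aspect: the case base grows and predictions sharpen without refitting a parametric model.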
Procedia PDF Downloads 80
400 Limbic Involvement in Visual Processing
Authors: Deborah Zelinsky
Abstract:
The retina filters millions of incoming signals into a smaller amount of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other faster signals that travel "elsewhere" and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers are currently looking at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses are modifying both non-image and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the Where am I (orientation), Where is It (localization), and What is It (identification) pathways. Now, among others, there is a How am I (animation) and a Who am I (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGC) through the "retinohypothalamic tract" (RHT). There are two main hypothalamic nuclei with direct photic inputs. These are the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retino-hypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose are influencing this fast processing, which affects each patient's aiming and focusing abilities. 
These signals arise from the ipRGC cells that were only discovered 20+ years ago, and do not address the campana retinal interneurons that were only discovered 2 years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.
Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing
Procedia PDF Downloads 85
399 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. 
The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even risk types other than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
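The one-step, semi-analytic inversion referred to above is the COS method: a density is recovered from its characteristic function as a truncated cosine expansion on an interval [a, b]. A minimal sketch for a standard normal, whose characteristic function is known in closed form — in practice, the portfolio-loss characteristic function of the factor-copula model would take its place:

```python
import numpy as np

def cos_density(phi, x, a, b, n_terms=64):
    """Recover a density on [a, b] from its characteristic function phi
    via the COS expansion:
    f(x) ~ sum_k' F_k cos(k*pi*(x-a)/(b-a)),
    F_k = (2/(b-a)) * Re{ phi(k*pi/(b-a)) * exp(-i*k*pi*a/(b-a)) },
    with the k = 0 term weighted by one half."""
    u = np.arange(n_terms) * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5
    return np.cos(np.outer(x - a, u)) @ F

phi_normal = lambda u: np.exp(-0.5 * u**2)        # standard normal char. function
x = np.array([0.0, 1.0])
print(cos_density(phi_normal, x, -10.0, 10.0))    # ~ [0.3989, 0.2420]
```

The exponential error convergence the abstract mentions is visible here: with only 64 terms the recovered values match the exact normal density to near machine precision.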
Procedia PDF Downloads 166
398 3D-Printing Compressible Macroporous Polymer Using Poly-Pickering-High Internal Phase Emulsions as Micromixer
Authors: Hande Barkan-Ozturk, Angelika Menner, Alexander Bismarck
Abstract:
Microfluidic mixing technology has grown rapidly in the past few years due to its many advantages over macro-scale mixing, especially the small internal volumes and very high surface-to-volume ratios involved. The Reynolds number identifies whether mixing is governed by laminar or turbulent flow. Mixing with very fast kinetics can therefore be achieved by diminishing the channel dimensions to decrease the Reynolds number, so that laminar flow is established. Moreover, by using obstacles in the micromixer, the mixing length and the contact area between the species are increased. The channel geometry and its surface properties are thus of great importance for reaching satisfactory mixing results. Since polymerised High Internal Phase Emulsions (polyHIPEs) have more than 74% porosity and their pores are connected to each other by pore throats, which give rise to high permeability, they are ideal candidates for building a micromixer. The HIPE precursor is commonly produced using an overhead stirrer to obtain relatively large amounts of emulsion in a batch process. However, we demonstrate that a desired amount of emulsion can be prepared continuously with a micromixer built from polyHIPE, and that such a HIPE can subsequently be employed as an ink in a 3D printing process. In order to produce the micromixer, a poly-Pickering(St-co-DVB)HIPE with 80% porosity was prepared with modified silica particles as stabilizer and the surfactant Hypermer 2296 to obtain an open porous structure; after coating of the surface, three 1/16" PTFE tubes were placed: two to transfer the continuous phase (CP) and internal phase (IP), and one to collect the emulsion. Afterwards, the two phases were injected at a ratio of 1:3 CP:IP with syringe dispensers, and a highly viscoelastic H(M)IPE, which can be used as an ink in a 3D printing process, was gathered continuously. 
After polymerisation of the resultant emulsion, the polyH(M)IPE has an interconnected porous structure identical to that of the monolithic polyH(M)IPE, indicating that the emulsion can be prepared continuously with a poly-Pickering-HIPE micromixer and used to print a desired pattern with a 3D printer. Moreover, the morphological properties of the emulsion can be adjusted by changing the flow ratio, flow speed and structure of the micromixer.
Keywords: 3D-printing, emulsification, macroporous polymer, micromixer, polyHIPE
Procedia PDF Downloads 162
397 Estimation of Small Hydropower Potential Using Remote Sensing and GIS Techniques in Pakistan
Authors: Malik Abid Hussain Khokhar, Muhammad Naveed Tahir, Muhammad Amin
Abstract:
Energy demand has increased manifold due to the increasing population, urban sprawl and rapid socio-economic improvements. Low water capacity in dams for continuous hydropower generation, land cover and land use are the key parameters creating problems for more energy production. The overall installed hydropower capacity of Pakistan is more than 35,000 MW, whereas Pakistan is producing up to 17,000 MW against a requirement of more than 22,000 MW, resulting in a shortfall of 5,000 - 7,000 MW. Therefore, there is a dire need to develop small hydropower to fulfill the upcoming requirements. In this regard, excessive rainfall and the snow-fed, fast-flowing perennial tributaries and streams in the northern mountain regions of Pakistan offer gigantic hydropower potential throughout the year. Rivers flowing in KP (Khyber Pakhtunkhwa) province, GB (Gilgit Baltistan) and AJK (Azad Jammu & Kashmir) possess sufficient water availability for rapid energy growth. Against this backdrop, small hydropower plants are believed to be very suitable measures for a greener environment and a sustainable power option for the development of such regions. The aim of this study is to estimate sites of hydropower potential for small hydropower plants and the stream distribution, as per the stream network available in the basins of the study area. The proposed methodology focuses on site selection of maximum hydropower potential for hydroelectric generation, using the GIS-based SWAT hydrological run-off model on the Neelum, Kunhar and Dor River basins. For validation of the results, NDWI is computed to show water concentration in the study area, overlaid on a geospatially enhanced DEM. This study presents an analysis of basins, watersheds, stream links, and flow directions with slope elevation, identifying hydropower potential to meet the increasing demand for electricity by installing small hydropower stations. 
Later on, this study may also benefit adjacent regions in the further estimation of site selection for the installation of such small power plants.
Keywords: energy, stream network, basins, SWAT, evapotranspiration
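The NDWI validation step mentioned above is a per-pixel band ratio, NDWI = (Green − NIR) / (Green + NIR), with values above roughly zero indicating open water. A sketch on a tiny synthetic scene — the band reflectance values are illustrative, not data from the study:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (G - NIR) / (G + NIR)."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

# Two-pixel scene: left pixel water-like (high green, low NIR reflectance),
# right pixel vegetation-like (low green, high NIR reflectance).
green = np.array([[0.30, 0.10]])
nir = np.array([[0.05, 0.40]])
water_mask = ndwi(green, nir) > 0.0
print(water_mask)  # [[ True False]]
```

Applied to the green and NIR bands of a satellite scene, the resulting mask can be overlaid on the DEM exactly as the validation step describes.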
Procedia PDF Downloads 221
396 Multi-Walled Carbon Nanotubes Doped Poly (3,4 Ethylenedioxythiophene) Composites Based Electrochemical Nano-Biosensor for Organophosphate Detection
Authors: Navpreet Kaur, Himkusha Thakur, Nirmal Prabhakar
Abstract:
One of the most publicized and controversial issues in crop production is the use of agrichemicals, also known as pesticides. It is evident from many reports that organophosphate (OP) insecticides, among the broad range of pesticides, are mainly involved in acute and chronic poisoning cases. Therefore, detection of OPs is very necessary for health protection and food and environmental safety. In our study, a nanocomposite of poly(3,4-ethylenedioxythiophene) (PEDOT) and multi-walled carbon nanotubes (MWCNTs) has been deposited electrochemically onto the surface of fluorine-doped tin oxide (FTO) sheets for the analysis of the OP malathion. The -COOH functionalization of the MWCNTs was performed for covalent binding with the amino groups of the AChE enzyme. The PEDOT-MWCNT films exhibited excellent conductivity, enabled fast transfer kinetics and provided a favourable, biocompatible microenvironment for AChE, allowing significant malathion OP detection. The prepared PEDOT-MWCNT/FTO and AChE/PEDOT-MWCNT/FTO nano-biosensors were characterized by Fourier transform infrared spectrometry (FTIR), field emission scanning electron microscopy (FE-SEM) and electrochemical studies. Electrochemical studies were performed using cyclic voltammetry (CV), differential pulse voltammetry (DPV) and electrochemical impedance spectroscopy (EIS). Optimization studies were carried out for different parameters, including pH (7.5), AChE concentration (50 mU), substrate concentration (0.3 mM) and inhibition time (10 min). The detection limit for malathion OP was calculated to be 1 fM within the linear range 1 fM to 1 µM. The activity of the inhibited AChE enzyme was restored to 98% of its original value by treatment with 2-pyridine aldoxime methiodide (2-PAM) (5 mM) for 11 min. The oxime 2-PAM is able to remove malathion from the active site of AChE by means of a trans-esterification reaction. 
The storage stability and reusability of the prepared nano-biosensor are observed to be 30 days and seven reuses, respectively. The application of the developed nano-biosensor has also been evaluated for a spiked lettuce sample, with recoveries of malathion ranging between 96-98%. The low detection limit obtained makes the developed nano-biosensor reliable, sensitive and low-cost.
Keywords: PEDOT-MWCNT, malathion, organophosphates, acetylcholinesterase, nano-biosensor, oxime (2-PAM)
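The readout behind such inhibition-based AChE biosensors compares the enzyme response before and after pesticide exposure, and again after oxime reactivation. A sketch of that bookkeeping — the current values below are illustrative placeholders, not the study's measurements:

```python
def percent_inhibition(i0, i_inhibited):
    """Inhibition (%) from the biosensor response before (i0) and after
    pesticide exposure (i_inhibited)."""
    return 100.0 * (i0 - i_inhibited) / i0

def percent_reactivation(i0, i_reactivated):
    """Activity recovered after oxime (2-PAM) treatment, relative to i0."""
    return 100.0 * i_reactivated / i0

i0 = 12.5  # hypothetical baseline current, uA
print(percent_inhibition(i0, 5.0))      # -> 60% inhibition
print(percent_reactivation(i0, 12.25))  # -> 98%, matching the reported restoration
```

In practice, the percent inhibition is mapped to a malathion concentration via a calibration curve established over the sensor's linear range (here 1 fM to 1 µM).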
Procedia PDF Downloads 435
395 Applications of Artificial Intelligence (AI) in Cardiac imaging
Authors: Angelis P. Barlampas
Abstract:
The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are the following.
Ultrasound: Automated segmentation of cardiac chambers across five common views, to quantify chamber volumes/mass, ascertain ejection fraction and determine longitudinal strain through speckle tracking. Determine the severity of mitral regurgitation (accuracy > 99% for every degree of severity). Identify myocardial infarction. Distinguish between athlete's heart and hypertrophic cardiomyopathy, as well as restrictive cardiomyopathy and constrictive pericarditis. Predict all-cause mortality.
CT: Reduce radiation doses. Calculate the calcium score. Diagnose coronary artery disease (CAD). Predict all-cause 5-year mortality. Predict major cardiovascular events in patients with suspected CAD.
MRI: Segment cardiac structures and infarct tissue. Calculate cardiac mass and function parameters. Distinguish between patients with myocardial infarction and control subjects. Potentially reduce costs, since it would preclude the need for gadolinium-enhanced CMR. Predict 4-year survival in patients with pulmonary hypertension.
Nuclear imaging: Classify normal and abnormal myocardium in CAD. Detect locations with abnormal myocardium. Predict cardiac death. 
ML was comparable to, or better than, two experienced readers in predicting the need for revascularization. AI emerges as a helpful tool in cardiac imaging, and for the doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.
Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine
Procedia PDF Downloads 78
394 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels
Authors: Joshua Buli, David Pietrowski, Samuel Britton
Abstract:
Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view, to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range-compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. 
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model, or even a cloud of points collected from any sensor capable of measuring ground topography, can serve as the basis for the backprojection technique. This technique minimizes interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization
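The per-point summation described in this abstract can be sketched as follows. This is an illustrative NumPy sketch, not the authors' GPU implementation; the nearest-neighbour range-bin lookup, the phase convention, and all parameter names are assumptions made for clarity.

```python
import numpy as np

def backproject(voxels, ant_pos, range_profiles, r0, dr, k):
    """Accumulate per-voxel reflectivity by summing, over all pulses, each
    pulse's range-compressed sample at the voxel's round-trip range.

    voxels         : (V, 3) 3D points (plane, DEM nodes, or a point cloud)
    ant_pos        : (P, 3) antenna position for each pulse
    range_profiles : (P, S) complex range-compressed data per pulse
    r0, dr         : range of the first sample and range-bin spacing (m)
    k              : wavenumber 2*pi/wavelength for the phase correction
    """
    image = np.zeros(len(voxels), dtype=complex)
    for p in range(len(ant_pos)):                        # independent per pulse
        r = np.linalg.norm(voxels - ant_pos[p], axis=1)  # range to every voxel
        bins = np.clip((r - r0) / dr, 0, range_profiles.shape[1] - 1)
        # nearest-neighbour lookup; real systems interpolate sub-bin
        samples = range_profiles[p, bins.round().astype(int)]
        image += samples * np.exp(2j * k * r)            # re-phase to the voxel
    return image
```

Each voxel's calculation is independent of every other voxel, which is exactly what makes the kernel embarrassingly parallel: on a GPU, the loop body maps to one thread per voxel.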
Procedia PDF Downloads 85
393 Treatment of Full-Thickness Rotator Cuff Tendon Tear Using Umbilical Cord Blood-Derived Mesenchymal Stem Cells and Polydeoxyribonucleotides in a Rabbit Model
Authors: Sang Chul Lee, Gi-Young Park, Dong Rak Kwon
Abstract:
Objective: The aim of this study was to investigate the regenerative effects of ultrasound (US)-guided injection of human umbilical cord blood-derived mesenchymal stem cells (UCB-MSCs) and/or polydeoxyribonucleotide (PDRN) in a chronic traumatic full-thickness rotator cuff tendon tear (FTRCTT) in a rabbit model. Material and Methods: Rabbits (n = 32) were allocated into 4 groups. After a 5-mm FTRCTT just proximal to the insertion site on the subscapularis tendon was created by excision, the wound was immediately covered with a silicone tube to prevent natural healing. After 6 weeks, 4 injections (0.2 mL normal saline, G1; 0.2 mL PDRN, G2; 0.2 mL UCB-MSCs, G3; and 0.2 mL UCB-MSCs with 0.2 mL PDRN, G4) were administered into the FTRCTT under US guidance. We evaluated gross morphologic changes in all rabbits after sacrifice. Masson's trichrome, anti-type 1 collagen antibody, bromodeoxyuridine, proliferating cell nuclear antigen, vascular endothelial growth factor, and platelet endothelial cell adhesion molecule stains were performed to evaluate histological changes. Motion analysis was also performed. Results: The gross morphologic mean tendon tear size in G3 and G4 was significantly smaller than that in G1 and G2 (p < .05). However, there was no significant difference in tendon tear size between G3 and G4. In G4, newly regenerated type 1 collagen fibers, proliferating cell activity, angiogenesis, walking distance, fast walking time, and mean walking speed were greater than in the other three groups on histological examination and motion analysis. Conclusion: Co-injection of UCB-MSCs and PDRN was more effective than UCB-MSCs injection alone on histological and motion analysis in a rabbit model of chronic traumatic FTRCTT. However, there was no significant difference in gross morphologic change of the tendon tear between UCB-MSCs injection with and without PDRN.
The results of this study regarding the combination of UCB-MSCs and PDRN warrant additional investigation.
Keywords: mesenchymal stem cell, umbilical cord, polydeoxyribonucleotides, shoulder, rotator cuff, ultrasonography, injections
Procedia PDF Downloads 185
392 Genetic Improvement Potential for Wood Production in Melaleuca cajuputi
Authors: Hong Nguyen Thi Hai, Ryota Konda, Dat Kieu Tuan, Cao Tran Thanh, Khang Phung Van, Hau Tran Tin, Harry Wu
Abstract:
Melaleuca cajuputi is a moderately fast-growing species considered a multi-purpose tree, as it provides fuelwood, piles and frame poles for construction, leaf essential oil, and honey. It occurs in Australia, Papua New Guinea, and South-East Asia. M. cajuputi plantations can be harvested on 6-7 year rotations for wood products. The timber can also be used for pulp and paper, fiber and particle board, quality charcoal, and potentially sawn timber. However, most reported M. cajuputi breeding programs have focused on oil production rather than wood production. In this study, a breeding program of M. cajuputi aimed at improving wood production was examined by estimating genetic parameters for growth (tree height, diameter at breast height (DBH), and volume), stem form, stiffness (modulus of elasticity, MOE), bark thickness, and bark ratio in a half-sib family progeny trial of 80 families in the Mekong Delta of Vietnam. MOE is one of the key wood properties of interest to the wood industry. Wood stiffness was measured non-destructively and indirectly by acoustic velocity using a FAKOPP Microsecond Timer, a measurement notably unaffected by bark mass. Narrow-sense heritability for the seven traits ranged from 0.13 to 0.27 at age 7 years. MOE and stem form had positive genetic correlations with growth, while the negative correlation between bark ratio and growth was also favorable. Breeding for simultaneous improvement of multiple traits, i.e., faster growth with higher MOE and a lower bark ratio, should therefore be possible in M. cajuputi. Index selection based on volume and MOE showed genetic gains of 31% in volume, 6% in MOE, and 13% in stem form. In addition, heritability and age-age genetic correlations for growth traits increased with time, and the optimal early selection age for growth of M. cajuputi based on DBH alone was 4 years.
Selective thinning increased heritability through a considerable reduction of phenotypic variation with little effect on genetic variation.
Keywords: acoustic velocity, age-age correlation, bark thickness, heritability, Melaleuca cajuputi, stiffness, thinning effect
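The heritabilities and gains reported in this abstract follow from standard quantitative-genetics relationships. As a hedged illustration (this is not the authors' analysis, and the variance components and selection intensity below are made up for the example), narrow-sense heritability in a half-sib trial and the expected response to truncation selection can be computed as:

```python
def half_sib_h2(var_family, var_within):
    """Narrow-sense heritability from a half-sib progeny trial.

    Half-sib family variance captures 1/4 of the additive variance,
    so sigma^2_A = 4 * sigma^2_family; phenotypic variance is the
    sum of between- and within-family components.
    """
    var_additive = 4.0 * var_family
    var_phenotypic = var_family + var_within
    return var_additive / var_phenotypic

def predicted_gain(h2, selection_intensity, sd_phenotypic):
    """Breeder's equation: delta_G = i * h^2 * sigma_P."""
    return selection_intensity * h2 * sd_phenotypic
```

For example, a family variance of 1.0 against a within-family variance of 19.0 gives h² = 0.2, inside the 0.13-0.27 range reported above.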
Procedia PDF Downloads 182
391 Factors Associated with Increase of Diabetic Foot Ulcers in Diabetic Patients in Nyahururu County Hospital
Authors: Daniel Wachira
Abstract:
The study aims to determine factors contributing to increasing rates of diabetic foot ulcers (DFU) among diabetes mellitus (DM) patients attending clinics at Nyahururu County Referral Hospital, Laikipia County. The study objectives are: to determine the demographic factors contributing to increased rates of DFU among DM patients; to determine the sociocultural factors that contribute to increased rates of DFU among DM patients; and to determine the health-facility factors contributing to increased rates of DFU among DM patients attending the DM clinic at Nyahururu County Referral Hospital, Laikipia County. This study will adopt a descriptive cross-sectional design, which involves the collection of data at a single time point without follow-up. This method is fast and inexpensive, there is no loss to follow-up because data are collected at one time point, and associations between variables can be determined. The study population includes all DM patients with or without DFU. Sampling will use a probability method, specifically simple random sampling. The study will employ researcher-administered questionnaires to collect the required information. The questionnaire was developed in consultation with research experts (the supervisor) to ensure reliability. It will be pre-tested by hand-delivering copies to 10% of the sample size at J.M. Kariuki Memorial Hospital, Nyandarua County, collecting them duly filled, and then refining errors to ensure the instrument is valid for collecting data relevant to this study. Data collection will begin after approval of the project.
Questionnaires will be administered only to participants who meet the selection criteria and agree to participate in the study, in order to collect key information with regard to the objectives of the study. Authority for the study will be obtained from the National Commission for Science, Technology and Innovation. Permission will also be obtained from the Nyahururu County Referral Hospital administration. The purpose of the study will be explained to the respondents in order to secure informed consent, and no names will be written on the questionnaires. All information will be treated with maximum confidentiality by not disclosing respondents' identities or their responses.
Keywords: diabetes, foot ulcer, social factors, hospital factors
Procedia PDF Downloads 16
390 Estimation of Hydrogen Production from PWR Spent Fuel Due to Alpha Radiolysis
Authors: Sivakumar Kottapalli, Abdesselam Abdelouas, Christoph Hartnack
Abstract:
Spent nuclear fuel generates a mixed field of ionizing radiation in the surrounding water. This radiation field is generally dominated by gamma rays and a limited flux of fast neutrons; the fuel cladding effectively attenuates beta and alpha particle radiation. A small fraction of spent nuclear fuel exhibits some degree of cladding penetration due to pitting corrosion and mechanical failure. Breaches in the fuel cladding allow the exposure of small volumes of water in the cask to alpha and beta ionizing radiation. The safety of the transport of radioactive material is assured by the package complying with the IAEA Requirements for the Safe Transport of Radioactive Material (SSR-6). It is of high interest to avoid the generation of hydrogen inside the cavity, which may lead to an explosive mixture. The risk of hydrogen production, along with other radiolysis gases, should be analyzed for a typical spent fuel for safety reasons. This work aims to perform a realistic study of hydrogen production by radiolysis assuming the most penalizing initial conditions. It consists of calculating the radionuclide inventory of a pellet, taking into account burnup and decay. Westinghouse 17x17 PWR fuel was chosen, and data were analyzed for different sets of enrichment, burnup, irradiation cycles, and storage conditions. The inventory serves as the entry point for simulations of hydrogen production using the radiolysis kinetic models of MAKSIMA-CHEMIST. Dose rates decrease strongly within ~45 μm from the fuel surface toward the solution (water) for alpha radiation, while the decrease is slower for beta and slower still for gamma radiation. Calculations are carried out to obtain spectra as a function of time, and radiation dose-rate profiles are taken as input for the iterative calculations. The hydrogen yield has been found to be around 0.02 mol/L.
Calculations have also been performed for a realistic scenario considering a capsule containing the spent fuel rod, and the resulting hydrogen yield is discussed. Experiments are in progress to validate the hydrogen production rate using a cyclotron at >5 MeV (at ARRONAX, Nantes).
Keywords: radiolysis, spent fuel, hydrogen, cyclotron
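As a rough illustration of how an absorbed dose translates into a hydrogen concentration (this is not the study's kinetic model, which couples many reactions in MAKSIMA-CHEMIST; the G-value and dose below are assumptions for illustration only), a zeroth-order estimate uses the radiolytic G-value, i.e., molecules of H2 produced per 100 eV of absorbed energy:

```python
EV_PER_J = 1.0 / 1.602176634e-19   # electron-volts per joule
AVOGADRO = 6.02214076e23           # molecules per mole

def h2_yield_mol_per_l(g_value_per_100ev, dose_gy, density_kg_per_l=1.0):
    """Convert an absorbed dose (Gy = J/kg) in water into a hydrogen
    concentration via the radiolytic G-value (molecules per 100 eV)."""
    ev_per_litre = dose_gy * density_kg_per_l * EV_PER_J
    molecules_per_litre = g_value_per_100ev * ev_per_litre / 100.0
    return molecules_per_litre / AVOGADRO
```

With an assumed alpha-radiolysis G(H2) of about 1.3 molecules/100 eV, a cumulative dose on the order of 10^5 Gy in the exposed water gives roughly 0.013 mol/L, the same order of magnitude as the ~0.02 mol/L yield reported above.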
Procedia PDF Downloads 521