Search results for: authoring tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4811


311 Exogenous Application of Silicon through the Rooting Medium Modulate Growth, Ion Uptake, and Antioxidant Activity of Barley (Hordeum vulgare L.) Under Salt Stress

Authors: Sibgha Noreen, Muhammad Salim Akhter, Seema Mahmood

Abstract:

Salt stress is an abiotic stress that takes a heavy toll on growth and development and reduces the productivity of arable and horticultural crops. Globally, a quarter of total arable land has fallen prey to this menace, and more is being encroached upon because of the use of brackish water for irrigation. Although barley is categorized as a salt-tolerant crop, its cultivars show wide genetic variability in their response to salinity. Silicon nutrition could be a facile tool for enhancing salt tolerance and sustaining crop production. A greenhouse study was conducted to evaluate the response of barley (Hordeum vulgare L.) cultivars to silicon nutrition under salt stress. The treatments included (a) four barley cultivars (Jou-87, B-14002, B-14011, B-10008); (b) two salt levels (0 and 200 mM NaCl); and (c) two silicon levels (0 and 200 ppm, K2SiO3.nH2O), arranged as a factorial experiment in a completely randomized design with 16 treatments and four replicates. Plants were harvested 15 days after exposure to the experimental salinity and silicon conditions. Results revealed that various physiological and biochemical attributes differed significantly (p<0.05) in response to the treatments and their interactive effects. Cultivar “B-10008” excelled in biological yield, chlorophyll constituents, antioxidant enzymes, and grain yield compared to the other cultivars. The biological yield of shoot and root was reduced by 27.3 and 26.5 percent under salt stress, while it was increased by 14.5 and 18.5 percent, respectively, by exogenous application of silicon over the untreated check. Salt stress at 200 mM reduced total chlorophyll content, chlorophyll ‘a’, chlorophyll ‘b’, and the a/b ratio by 10.6, 16.8, 17.1, and 7.1 percent, while a spray of 200 ppm silicon improved these constituents by 10.4, 12.1, 10.2, and 10.3 percent over the untreated check, respectively. Free amino acids and protein content were enhanced in response to both salt stress and silicon application. Superoxide dismutase, catalase, peroxidase, hydrogen peroxide, and malondialdehyde contents rose by 18.1, 25.7, 28.1, 29.5, and 17.6 percent under salt stress relative to non-saline conditions. However, the values of these antioxidants were reduced in proportion to salt stress when 200 ppm silicon was applied through the rooting medium. Salt stress reduced the number of tillers, number of grains per spike, and 100-grain weight by 29.4, 8.6, and 15.8 percent, respectively; these parameters were improved by 7.1, 10.3, and 9.6 percent by foliar spray of silicon over the untreated crop. It is concluded that the barley cultivar “B-10008” showed greater tolerance and adaptability to saline conditions, and that the yield of barley could be potentiated by a foliar spray of 200 ppm silicon at the vegetative growth stage under salt stress.
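The 4 × 2 × 2 factorial layout described above lends itself to a three-way analysis of variance. Below is a minimal sketch of how such an analysis might be run with statsmodels; the column names and synthetic responses are placeholders, not the authors' dataset or software.

```python
# A minimal sketch of a three-way factorial ANOVA for the 4 x 2 x 2 design
# (cultivar x salt x silicon). Data are synthetic placeholders, not the
# authors' measurements.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
cultivars = ["Jou-87", "B-14002", "B-14011", "B-10008"]
rows = []
for cultivar in cultivars:
    for salt_mM in (0, 200):
        for silicon_ppm in (0, 200):
            for rep in range(4):                      # 4 replicates per treatment
                shoot_dw = (10 - 0.012 * salt_mM + 0.004 * silicon_ppm
                            + rng.normal(scale=0.5))  # placeholder response
                rows.append((cultivar, salt_mM, silicon_ppm, shoot_dw))
df = pd.DataFrame(rows, columns=["cultivar", "salt", "silicon", "shoot_dw"])

# Full factorial model with all interactions, tested at p < 0.05
model = ols("shoot_dw ~ C(cultivar) * C(salt) * C(silicon)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```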

Keywords: salt stress, silicon nutrition, chlorophyll constituents, antioxidant enzymes, barley crop

Procedia PDF Downloads 9
310 Biotechnology Approach: A Tool of Enhancement of Sticky Mucilage of Pulicaria Incisa (Medicinal Plant) for Wounds Treatment

Authors: Djamila Chabane, Asma Rouane, Karim Arab

Abstract:

Depending on the chemical substances responsible for the pharmacological effects, a future therapeutic drug might be produced by extraction from whole plants or from callus initiated from some of their parts. Optimized callus culture protocols now offer the possibility of using cell culture techniques for vegetative propagation and open the way for further studies on secondary metabolites and drug development. In Algerian traditional medicine, Pulicaria incisa (Asteraceae) is used in the treatment of everyday ailments (stomachache, headache, cold, sore throat, and rheumatic arthralgia). Field findings revealed that many healers use fresh parts of this plant (leaves, flowers) to treat skin wounds. This study aims to evaluate the healing efficiency of an artisanal cream prepared from the sticky mucilage isolated from calluses on dermal wounds in animal models. Callus cultures were initiated from reproductive explants (young inflorescences) excised from adult plants, transferred to MS basal medium supplemented with growth regulators, and maintained in the dark for four months. Several callus types were obtained, differing in colour and aspect (friable, compact). Several subcultures of calli were performed to enhance mucilage accumulation. After extraction, the mucilage extracts were tested on animal models as follows. The wound healing potential was studied by causing dermal wounds (1 cm diameter) on the dorsolumbar region of Rattus norvegicus; after hair removal, the cream samples were applied to three rats each, including two controls (one treated with Vaseline and one without any treatment) and two experimental groups (experimental group 1 treated with the reference ointment Madecassol® and experimental group 2 treated with the callus mucilage cream) for a period of seventeen days. The evolution of the healing activity was estimated by calculating the percentage reduction of the area of the wounds treated with each compound compared to the controls, using AutoCAD software. The healing percentage of the cream prepared from callus mucilage was 99.79%, compared with 99.76% for Madecassol®. With regard to treatment time, significant healing activity was observed after 17 days compared to the reference pharmaceutical product, without any wound infection. The healing effect of Madecassol® is more effective because it stimulates and regulates the production of collagen, a fibrous matrix essential for wound healing. The mucilage extracts also showed a high capacity to heal the skin without any infection. On the basis of this pharmacological activity, we suggest using calluses produced by in vitro culture to produce new compounds for skin care and treatment.
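The healing activity reported above reduces to a percentage reduction of wound area relative to day 0. The sketch below illustrates that calculation; the example areas are hypothetical, not the study's AutoCAD measurements.

```python
# Percentage reduction of wound area relative to day 0; example values are
# hypothetical, not the study's AutoCAD measurements.
def percent_wound_reduction(area_day0_cm2: float, area_dayN_cm2: float) -> float:
    """Return the percentage of wound area closed since day 0."""
    return 100.0 * (area_day0_cm2 - area_dayN_cm2) / area_day0_cm2

# e.g. a 1 cm diameter circular wound (~0.785 cm^2) almost fully closed by day 17
print(round(percent_wound_reduction(0.785, 0.0016), 2))  # ~99.8 %
```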

Keywords: calluses, Pulicaria incisa, mucilage, wounds

Procedia PDF Downloads 98
309 Consumer Utility Analysis of Halal Certification on Beef Using Discrete Choice Experiment: A Case Study in the Netherlands

Authors: Rosa Amalia Safitri, Ine van der Fels-Klerx, Henk Hogeveen

Abstract:

Halal is a dietary law observed by people of the Islamic faith. It is a credence food quality attribute, one that cannot easily be verified by consumers even upon or after consumption. Halal certification therefore serves as a practical tool for consumers to make an informed choice, particularly in a non-Muslim-majority country such as the Netherlands. A discrete choice experiment (DCE) was employed in this study for its ability to assess the importance of attributes attached to Halal beef in the Dutch market and to investigate consumer utilities. Furthermore, willingness to pay (WTP) for the desired Halal certification was estimated. The four most relevant attributes were selected, i.e., the slaughter method, traceability information, place of purchase, and Halal certification. Price was incorporated as an attribute to allow estimation of willingness to pay for Halal certification. A total of 242 Muslim respondents who regularly consumed Halal beef completed the survey, comprising Dutch consumers (53%) and non-Dutch consumers living in the Netherlands (47%). The vast majority of respondents (95%) were between 18 and 45 years old, with the largest group being students (43%), followed by employees (30%) and housewives (12%). Most respondents (76%) had a disposable monthly income of less than €2,500, while the rest earned more. Respondents assessed themselves as having good knowledge of the studied attributes, except for traceability information, for which 62% considered themselves not knowledgeable. The findings indicated that the slaughter method was the most important attribute, followed by Halal certification, place of purchase, price, and traceability information. This order of importance varied across sociodemographic variables, except for the slaughter method. Both the Dutch and non-Dutch subgroups valued Halal certification as the third most important attribute; however, non-Dutch respondents valued it more highly (0.20) than their Dutch counterparts (0.16). For non-Dutch respondents, price was more important than Halal certification. The ideal product, i.e., the product providing the highest utility for consumers, was characterized by beef obtained without pre-slaughter stunning, with traceability information, available at a Halal store, certified by an official certifier, and sold at €2.75 per 500 g. In general, an official Halal certifier was most preferred. However, consumers were not willing to pay a premium for any type of Halal certifier, as indicated by negative WTP values of -€0.73, -€0.93, and -€1.03 for small, official, and international certifiers, respectively. This finding indicates that consumers tend to lose utility when confronted with price. WTP estimates differed across sociodemographic variables, with male and non-Dutch respondents having the lowest WTP. Unfamiliarity with traceability information might explain why respondents perceived it as the least important attribute. In the context of Halal-certified meat, adding traceability information to meat packaging can serve two functions: first, consumers can judge for themselves whether the processes comply with Halal requirements, for example the use of pre-slaughter stunning, and second, it helps assure food safety. Integrating traceability information into meat packaging can therefore support informed decisions on both Halal status and food safety.
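In a discrete choice experiment, willingness to pay for an attribute level is commonly derived from a conditional-logit model as the negative ratio of the attribute coefficient to the price coefficient. The sketch below illustrates that ratio with hypothetical coefficients; it is not the study's estimation code or its estimates.

```python
# Willingness to pay from conditional-logit coefficients: WTP = -beta_attr / beta_price.
# The coefficients below are hypothetical placeholders, not the study's estimates.
def willingness_to_pay(beta_attribute: float, beta_price: float) -> float:
    return -beta_attribute / beta_price

betas = {
    "halal_cert_official": 0.35,     # utility of an official Halal certificate
    "traceability_info": 0.10,       # utility of traceability information
}
beta_price = -0.48                   # disutility per euro (per 500 g of beef)

for attribute, beta in betas.items():
    print(f"{attribute}: WTP = {willingness_to_pay(beta, beta_price):.2f} EUR")
```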

Keywords: consumer utilities, discrete choice experiments, Halal certification, willingness to pay

Procedia PDF Downloads 103
308 Diagnostic Yield of CT PA and Value of Pre Test Assessments in Predicting the Probability of Pulmonary Embolism

Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran

Abstract:

Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically; however, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CT pulmonary angiography (CTPA) is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines recommend the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent, and low-risk patients are being subjected to unnecessary imaging, radiation exposure, and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether CTPA is overused in our service. Methods: CT scans performed on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the study period. Mean age was 57 years (range 24-91 years). The majority of patients presented with shortness of breath (52%); other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was performed in 69%. The Wells score was low (<2) in 28%, moderate (2-6) in 47%, and high (>6) in 15% of patients, but was documented in the medical notes of only 20% of patients. PE was confirmed in 12% (8 male) of patients; 4 had bilateral PEs. In the high-risk group (Wells >6, n=15) there were 5 diagnosed PEs, in the moderate-risk group (Wells 2-6, n=47) there were 6, and in the low-risk group (Wells <2, n=28) one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4 patients; 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort, at 12%. A significant number of patients who underwent CTPA had a low Wells score, suggesting that CTPA is overutilized in our institution. The Wells score was poorly documented in the medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining the Wells score with clinical and laboratory assessment may reduce the need for CTPA.
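A minimal sketch of the Wells calculation and the three-tier stratification used in this audit is given below; the item weights are the standard published Wells criteria for PE, the cut-offs (<2 low, 2-6 moderate, >6 high) follow the abstract, and the example patient is hypothetical.

```python
# Wells score for pulmonary embolism with the three-tier stratification used in
# the audit (< 2 low, 2-6 moderate, > 6 high). Item weights are the standard
# published Wells criteria; the example patient is hypothetical.
WELLS_ITEMS = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilisation_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "active_malignancy": 1.0,
}

def wells_score(findings: dict) -> float:
    return sum(points for item, points in WELLS_ITEMS.items() if findings.get(item))

def risk_category(score: float) -> str:
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

patient = {"heart_rate_over_100": True, "haemoptysis": True}   # hypothetical patient
score = wells_score(patient)
print(score, risk_category(score))                             # 2.5 moderate
```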

Keywords: CTPA, D-dimer, pulmonary embolism, Wells score

Procedia PDF Downloads 194
307 Procedure for Monitoring the Process of Behavior of Thermal Cracking in Concrete Gravity Dams: A Case Study

Authors: Adriana de Paula Lacerda Santos, Bruna Godke, Mauro Lacerda Santos Filho

Abstract:

Several dams around the world have collapsed, causing environmental, social, and economic damage. The concern to avoid future disasters has stimulated the creation of a great number of laws and rules in many countries. In Brazil, Law 12.334/2010 established the National Policy on Dam Safety. Overall, this policy requires dam owners to invest in the maintenance of their structures and to improve their monitoring systems in order to provide faster and more straightforward responses when risks increase. As monitoring tools, visual inspections provide a comprehensive assessment of structural performance, while auscultation instrumentation adds specific information on operational or behavioral changes, raising an alarm when a performance indicator exceeds acceptable limits. These limits can be set using statistical methods based on the relationship between instrument measurements and other variables, such as reservoir level, time of year, or the readings of other instruments. Besides the design parameters (uplift of the foundation, displacements, etc.), dam instrumentation can also be used to monitor the behavior of defects and damage manifestations. Specifically in concrete gravity dams, one of the main causes of cracking is the volumetric change of concrete generated by thermal phenomena associated with the construction process of these structures. Based on this, the goal of this research is to propose a process for monitoring the behavior of thermal cracking in concrete gravity dams through instrumentation data analysis and the establishment of control values. Block B-11 of the Governor José Richa Dam Power Plant was selected as the case study; it presents a cracking process that was identified even before the filling of the reservoir in August 1998, and crack meters and surface thermometers were installed for its monitoring. Although these instruments were installed in May 2004, the research was restricted to the last 4.5 years (June 2010 to November 2014), when all the instruments were calibrated and producing reliable data. The adopted method is based on simple linear correlation procedures to understand the interactions among the instrument time series, verifying the response times between them. Scatter plots were drafted from the best correlations, which supported the definition of the control limit values. Among the conclusions, it is shown that there is a strong or very strong correlation between ambient temperature and the crack meter and flow meter measurements. Based on the results of the statistical analysis, it was possible to develop a tool for monitoring the behavior of the cracks in the case study, thereby fulfilling the goal of proposing a process for monitoring the behavior of thermal cracking in concrete gravity dams.
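A minimal sketch of the simple linear correlation step, including a lag scan to estimate the response time between ambient temperature and a crack-meter series, is shown below; synthetic daily series stand in for the monitoring data.

```python
# Pearson correlation between ambient temperature and a crack-meter series,
# scanned over time lags to estimate the response time. Synthetic daily series
# stand in for the actual monitoring data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
days = np.arange(4 * 365)                             # ~4 years of daily readings
temperature = 20 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, days.size)
crack_opening = 0.6 - 0.02 * np.roll(temperature, 10) + rng.normal(0, 0.01, days.size)

best_lag, best_r = 0, 0.0
for lag in range(0, 61):                              # test response times up to 60 days
    r, _ = pearsonr(temperature[: days.size - lag], crack_opening[lag:])
    if abs(r) > abs(best_r):
        best_lag, best_r = lag, r

print(f"strongest correlation r = {best_r:.2f} at a lag of {best_lag} days")
```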

Keywords: concrete gravity dam, dams safety, instrumentation, simple linear correlation

Procedia PDF Downloads 267
306 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located on several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aiming to achieve maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load on the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
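To make the first-echelon idea concrete, the sketch below applies a classic first-fit-decreasing bin-packing heuristic to estimate how many capacity-limited workstations are needed for a set of product workloads; the numbers are illustrative placeholders and the heuristic is a generic stand-in, not the authors' algorithm.

```python
# First-fit-decreasing heuristic for the first echelon: estimate how many
# workstations (bins of picker capacity) are needed to host product workloads.
# Capacities and workloads are illustrative placeholders, not the authors' data.
def first_fit_decreasing(workloads, station_capacity):
    """Return a list of stations, each a list of the workloads assigned to it."""
    stations = []
    for load in sorted(workloads, reverse=True):
        for station in stations:
            if sum(station) + load <= station_capacity:
                station.append(load)
                break
        else:
            stations.append([load])   # open a new workstation
    return stations

product_workloads = [120, 80, 75, 60, 45, 45, 30, 25, 20, 10]   # picks per hour
stations = first_fit_decreasing(product_workloads, station_capacity=150)
print(f"{len(stations)} workstations needed:", stations)
```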

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 202
305 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes an approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, with the aim of improving the accuracy and efficiency of reliability evaluations in smart cities. The objective of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities by integrating RFM analysis, K-means clustering, and LSTM networks. RFM analysis, traditionally used in customer value assessment, is applied to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents an efficient and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, significantly improving prediction accuracy and reliability compared to traditional methods. This advancement contributes to proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities, and demonstrates the potential of advanced machine learning techniques to transform the management of electrical systems within smart cities.
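A minimal sketch of the RFM-plus-K-means stage is given below: failure recency, frequency, and monetary impact are standardised and clustered into component groups. The records are synthetic placeholders, and the LSTM forecasting stage is omitted.

```python
# RFM feature construction and K-means segmentation of electrical components.
# Records are synthetic placeholders; the LSTM forecasting stage is omitted.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
components = pd.DataFrame({
    "component_id": range(200),
    "recency_days": rng.integers(1, 720, 200),       # days since last failure
    "frequency": rng.poisson(3, 200),                # failures in observation window
    "monetary_impact": rng.gamma(2.0, 500.0, 200),   # repair + outage cost
})

features = StandardScaler().fit_transform(
    components[["recency_days", "frequency", "monetary_impact"]]
)
components["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

print(components.groupby("cluster")[["recency_days", "frequency", "monetary_impact"]].mean())
```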

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 24
304 Quantitative Wide-Field Swept-Source Optical Coherence Tomography Angiography and Visual Outcomes in Retinal Artery Occlusion

Authors: Yifan Lu, Ying Cui, Ying Zhu, Edward S. Lu, Rebecca Zeng, Rohan Bajaj, Raviv Katz, Rongrong Le, Jay C. Wang, John B. Miller

Abstract:

Purpose: Retinal artery occlusion (RAO) is an ophthalmic emergency that can lead to a poor visual outcome and is associated with an increased risk of cerebral stroke and cardiovascular events. Fluorescein angiography (FA) is the traditional diagnostic tool for RAO; however, wide-field swept-source optical coherence tomography angiography (WF SS-OCTA), a nascent imaging technology, is able to provide quick and non-invasive angiographic information with a wide field of view. In this study, we looked for associations between OCT-A vascular metrics and visual acuity in patients with a prior diagnosis of RAO. Methods: Patients with diagnoses of central retinal artery occlusion (CRAO) or branch retinal artery occlusion (BRAO) were included. A 6 mm x 6 mm Angio scan and a 15 mm x 15 mm AngioPlex Montage OCT-A image were obtained for both eyes of each patient using the Zeiss Plex Elite 9000 WF SS-OCTA device. Each 6 mm x 6 mm image was divided into nine Early Treatment Diabetic Retinopathy Study (ETDRS) subfields, and the average measurement of the central foveal subfield, inner ring, and outer ring was calculated for each parameter. Non-perfusion area (NPA) was manually measured using the 15 mm x 15 mm Montage images. A linear regression model was used to identify correlations between the imaging metrics and visual acuity; a p-value less than 0.05 was considered statistically significant. Results: Twenty-five subjects were included in the study. For RAO eyes, there was a statistically significant negative correlation between vision and retinal thickness as well as superficial capillary plexus vessel density (SCP VD). A negative correlation was found between vision and deep capillary plexus vessel density (DCP VD) without statistical significance. There was a positive correlation between vision and choroidal thickness as well as choroidal volume, without statistical significance. No statistically significant correlation was found between vision and the above metrics in contralateral eyes, and no significant correlation was found between vision and NPA. Conclusions: This is, to the best of our knowledge, the first study to investigate the utility of WF SS-OCTA in RAO and to demonstrate correlations between retinal vascular imaging metrics and visual outcomes. Further investigations should explore the associations between these imaging findings and cardiovascular risk, as RAO patients are at elevated risk of symptomatic stroke. The results of this study provide a basis for understanding the structural changes involved in visual outcomes in RAO, and they may help guide the management of RAO and the prevention of cerebral stroke and cardiovascular events in patients with RAO.
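A minimal sketch of the linear-regression step relating superficial capillary plexus vessel density to visual acuity is shown below, with synthetic values standing in for the 25-eye cohort; the 0.05 significance threshold follows the abstract.

```python
# Linear regression of visual acuity (logMAR) on superficial capillary plexus
# vessel density (SCP VD). Values are synthetic stand-ins for the RAO cohort.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
scp_vd = rng.uniform(25, 50, 25)                        # vessel density (%)
logmar = 1.6 - 0.03 * scp_vd + rng.normal(0, 0.15, 25)  # worse vision at lower VD

fit = linregress(scp_vd, logmar)
print(f"slope = {fit.slope:.3f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
if fit.pvalue < 0.05:
    print("correlation between vessel density and logMAR acuity is significant")
```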

Keywords: OCTA, swept-source OCT, retinal artery occlusion, Zeiss Plex Elite

Procedia PDF Downloads 110
303 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Current AI tools in chest radiology can: detect and segment pulmonary nodules; subtract bone to provide an unobstructed view of the underlying lung parenchyma and provide further information on nodule characteristics, such as nodule location, two-dimensional size or three-dimensional (3D) volume, change in size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above; reclassify indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models; detect pleural effusion; differentiate tension pneumothorax from non-tension pneumothorax; detect cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis, and pneumoperitoneum; automatically localise vertebral segments, label ribs, and detect rib fractures; measure the distance from the tube tip to the carina and localize both endotracheal tubes and central vascular lines; detect consolidation and the progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD); evaluate lobar volumes; identify and label pulmonary bronchi and vasculature and quantify air-trapping; offer emphysema evaluation; provide functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region and key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume; assess the lung parenchyma by way of density evaluation, providing percentages of tissue within defined attenuation (HU) ranges alongside automated lung segmentation and lung volume information; improve image quality for noisy images with built-in denoising functions; detect emphysema, a common condition in patients with a history of smoking, and hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies such as COVID-19 pneumonia; and aid in cardiac segmentation and calcium detection, aorta segmentation and diameter measurement, and vertebral body segmentation and density measurement. Conclusion: The future is yet to come, but AI is already a helpful tool in daily radiology practice. It is assumed that the continuing progress of computerized systems and improvements in software algorithms will render AI the second pair of hands of the radiologist.

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 42
302 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on the geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. Therefore, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for a given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation is mainly aimed at studying the practical computational, processing, and design complexities that arise during simulation. HFSS is chosen for simulations and results. MATLAB is used to generate the computations and combinations, to log data, to apply machine learning algorithms, and to plot the data used to design the algorithm. Because the number of combinations is too large to be tested manually, the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric constant; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and machine learning approach for automated antenna optimization of the Vivaldi antenna.
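A bare-bones version of the evolutionary loop described above is sketched below. The fitness function is a surrogate placeholder; in the actual workflow it would be a call to the HFSS API returning, for example, the simulated gain or bandwidth of a candidate geometry. The parameter bounds are illustrative assumptions.

```python
# Bare-bones genetic algorithm over antenna geometry parameters. The fitness
# function is a surrogate placeholder; in practice it would call the HFSS API
# and score the simulated gain/bandwidth of each candidate geometry.
import numpy as np

rng = np.random.default_rng(3)
BOUNDS = np.array([[0.5, 3.0],    # slot-line width (mm)       -- illustrative
                   [20.0, 60.0],  # flare aperture size (mm)   -- illustrative
                   [2.0, 6.0]])   # substrate permittivity     -- illustrative

def fitness(candidate):
    # Surrogate: peaks at an arbitrary "good" geometry; replace with an HFSS call.
    target = np.array([1.2, 42.0, 3.4])
    return -np.sum(((candidate - target) / (BOUNDS[:, 1] - BOUNDS[:, 0])) ** 2)

def evolve(pop_size=40, generations=60, mutation=0.1):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(3) < 0.5, a, b)          # uniform crossover
            child += mutation * (BOUNDS[:, 1] - BOUNDS[:, 0]) * rng.normal(size=3)
            children.append(np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1]))
        pop = np.array(children)
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("best geometry found:", np.round(evolve(), 2))
```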

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 84
301 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, this signal is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for detecting normal signals from abnormal ones. The data are from both genders, the recording time varies from several seconds to several minutes, and all records are labeled normal or abnormal. Owing to the limited temporal accuracy and recording time of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify normal signals from abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP and SVM classifiers was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal subjects from patients gave better performance. Today, research is aimed at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its accuracy is limited in time and some of its information is hidden from the physician's viewpoint, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
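A minimal sketch of the feature-and-classifier stage is shown below: R-R intervals yield time-domain (SDNN, RMSSD) and nonlinear Poincaré (SD1, SD2) features, which feed an MLP scored by ROC AUC. The R-R intervals are synthetic, and the Pan-Tompkins detection and Kalman denoising steps are omitted.

```python
# HRV features (time-domain + Poincare) from R-R intervals, classified with an
# MLP and scored by ROC AUC. R-R intervals are synthetic; the Pan-Tompkins
# detection and Kalman denoising steps are omitted.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def hrv_features(rr_ms):
    diff = np.diff(rr_ms)
    sdnn = np.std(rr_ms)                      # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))       # short-term variability
    sd1 = np.sqrt(0.5) * np.std(diff)         # Poincare minor axis (nonlinear)
    sd2 = np.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 1e-9))
    return [np.mean(rr_ms), sdnn, rmssd, sd1, sd2, sd1 / sd2]

def synth_record(abnormal):
    base, spread = (650, 15) if abnormal else (800, 60)   # reduced HRV if abnormal
    return rng.normal(base, spread, 120)                  # 120 R-R intervals (ms)

y = rng.integers(0, 2, 400)
X = np.array([hrv_features(synth_record(label)) for label in y])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1).fit(X_tr, y_tr)
print("ROC AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```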

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 224
300 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model

Authors: Danjuma Bawa

Abstract:

This paper explores the capabilities of the location-allocation model in complementing existing physical planning models for the location and distribution of facilities for regional consumption. The paper is designed to provide a blueprint to the Nigerian government and other donor agencies, especially the federal government's Fertilizer Distribution Initiative (FDI), for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcases how the Location-Allocation Model (L-AM), alongside Central Place Theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, to exploit their physical and economic interrelationships, and to explore their hierarchical and opportunistic influences. The study is purely spatial qualitative research that largely used secondary data, such as the spatial location and distribution of settlements, settlement population figures, the network of roads linking them, and other landform features. These were sourced from government ministries and open-source consortia. GIS was used as a tool for processing and analyzing these spatial features within the framework of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could serve as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap. The ArcGIS Network Analyst was used to conduct location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance. Most of the techniques and models used by utility planners have been centered on straight-line (Euclidean) distances to settlements; such models neglect impedance cutoffs and the routing capabilities of networks, whereas CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the efforts of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods and bringing succor to the terrorism-ravaged populace. At the same time, it will help boost agricultural activities, thereby reducing food shortages and raising per capita income, as espoused by the government.
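A minimal sketch of the threshold-selection and allocation logic is shown below, using straight-line distances and a 2 km cutoff purely for illustration; the study itself allocated settlements along the road network in ArcGIS, and the coordinates and population threshold here are synthetic assumptions.

```python
# Threshold-based selection of service centres and allocation of settlements
# within a 2 km cutoff. Straight-line distance is used purely for illustration;
# the study allocated along the road network in ArcGIS. Data are synthetic.
import math
import random

random.seed(4)
settlements = [
    {"name": f"S{i}", "pop": random.randint(200, 12000),
     "x": random.uniform(0, 20), "y": random.uniform(0, 20)}   # km grid
    for i in range(300)
]

POP_THRESHOLD = 8000          # assumed population threshold for a service centre
CUTOFF_KM = 2.0               # impedance cutoff

centres = [s for s in settlements if s["pop"] >= POP_THRESHOLD]

def dist(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

allocation = {c["name"]: [] for c in centres}
unserved = []
for s in settlements:
    nearest = min(centres, key=lambda c: dist(s, c))
    if dist(s, nearest) <= CUTOFF_KM:
        allocation[nearest["name"]].append(s["name"])
    else:
        unserved.append(s["name"])

print(f"{len(centres)} centres, {sum(len(v) for v in allocation.values())} settlements served, "
      f"{len(unserved)} beyond the {CUTOFF_KM} km cutoff")
```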

Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics

Procedia PDF Downloads 123
299 Risk Assessment Tools Applied to Deep Vein Thrombosis Patients Treated with Warfarin

Authors: Kylie Mueller, Nijole Bernaitis, Shailendra Anoopkumar-Dukie

Abstract:

Background: Vitamin K antagonists, particularly warfarin, are the most frequently used oral medication for deep vein thrombosis (DVT) treatment and prophylaxis. Time in therapeutic range (TITR) of the international normalised ratio (INR) is widely accepted as a measure of the quality of warfarin therapy. Multiple factors can affect warfarin control and the subsequent adverse outcomes, including thromboembolic and bleeding events. Predictive models have been developed to assess potential contributing factors and measure the individual risk of these adverse events. These models have been validated in atrial fibrillation (AF) patients; however, there is a lack of literature on whether they can be successfully applied to other warfarin users, including DVT patients. Therefore, the aim of the study was to assess the ability of these risk models (HAS-BLED and CHADS2) to predict haemorrhagic and ischaemic incidences in DVT patients treated with warfarin. Methods: A retrospective analysis of DVT patients receiving warfarin management by a private pathology clinic was conducted. Data were collected from November 2007 to September 2014 and included demographics, medical and drug history, INR targets, and test results. Patients receiving continuous warfarin therapy with an INR reference range between 2.0 and 3.0 were included in the study, with mean TITR calculated using the Rosendaal method. Bleeding and thromboembolic events were recorded and reported as incidences per patient. The haemorrhagic risk model HAS-BLED and the ischaemic risk model CHADS2 were applied to the data, and patients were stratified into low, moderate, or high-risk categories. The analysis was conducted to determine whether a correlation existed between the risk assessment tools and patient outcomes. Data were analysed using GraphPad InStat Version 3, with a p-value of <0.05 considered statistically significant. Patient characteristics were reported as mean and standard deviation for continuous data, and as number and percentage for categorical data. Results: Of the 533 patients included in the study, there were 268 (50.2%) female and 265 (49.8%) male patients with a mean age of 62.5 years (±16.4). The overall mean TITR was 78.3% (±12.7), with an overall haemorrhagic incidence of 0.41 events per patient. For the HAS-BLED model, the haemorrhagic incidence was 0.08, 0.53, and 0.54 per patient in the low, moderate, and high-risk categories respectively, showing a statistically significant increase in incidence with increasing risk category. The CHADS2 model showed an increase in ischaemic events with risk category, with no ischaemic events in the low category, an ischaemic incidence of 0.03 in the moderate category, and 0.47 in the high-risk category. Conclusion: An increasing haemorrhagic incidence correlated with an increasing HAS-BLED risk score in DVT patients treated with warfarin. Furthermore, a greater incidence of ischaemic events occurred in patients in higher CHADS2 categories. In an Australian population of DVT patients, HAS-BLED and CHADS2 accurately predict incidences of haemorrhage and ischaemic events, respectively.
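A minimal sketch of the Rosendaal linear-interpolation method used here to compute TITR is given below; the INR test dates and values are hypothetical examples, not study data.

```python
# Time in therapeutic range (TITR) by the Rosendaal method: INR is assumed to
# change linearly between consecutive tests, and the fraction of interpolated
# days falling inside the 2.0-3.0 range is reported. Test data are hypothetical.
from datetime import date

def rosendaal_ttr(tests, low=2.0, high=3.0):
    """tests: list of (date, INR) tuples sorted by date."""
    days_in_range = total_days = 0
    for (d0, inr0), (d1, inr1) in zip(tests, tests[1:]):
        span = (d1 - d0).days
        for day in range(span):
            inr = inr0 + (inr1 - inr0) * day / span   # linear interpolation
            days_in_range += low <= inr <= high
        total_days += span
    return 100.0 * days_in_range / total_days

tests = [(date(2014, 1, 1), 1.8), (date(2014, 1, 15), 2.4),
         (date(2014, 2, 5), 3.3), (date(2014, 2, 26), 2.6)]
print(f"TITR = {rosendaal_ttr(tests):.1f} %")
```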

Keywords: anticoagulant agent, deep vein thrombosis, risk assessment, warfarin

Procedia PDF Downloads 242
298 Methodological Approach to the Elaboration and Implementation of the Spatial-Urban Plan for the Special Purpose Area: Case-Study of Infrastructure Corridor of Highway E-80, Section Nis-Merdare, Serbia

Authors: Nebojsa Stefanovic, Sasa Milijic, Natasa Danilovic Hristic

Abstract:

A spatial plan for a special purpose area constitutes the basic tool in planning a highway infrastructure corridor. The aim of the plan is to define the planning basis and provide the spatial conditions for the construction and operation of the highway, as well as for developing other infrastructure systems in the corridor. This paper presents a methodology and approach to the preparation of the Spatial Plan for the special purpose area of the infrastructure corridor of highway E-80, Section Niš-Merdare, in Serbia. The applied methodological approach is based on the combined application of integrative and participatory methods in the decision-making process on the sustainable development of the highway corridor. It was found that, for the planning and management of the infrastructure corridor, a key problem is the coordination of spatial and urban planning, strategic environmental assessment, and sectoral traffic planning and design. Throughout the development of the plan, special attention is focused on increasing the accessibility of the local and regional surroundings, reducing adverse impacts on the development of settlements and the economy, protecting natural resources and natural and cultural heritage, and developing other infrastructure systems in the highway corridor. As a result of the applied methodology, this paper analyzes basic features such as coverage, the planning concept, protected zones, service facilities and objects, and the rules of development and construction. Special emphasis is placed on the methodology and results of the Strategic Environmental Assessment of the Spatial Plan and on the importance of protection measures, with particular significance attached to air and noise protection measures. For evaluation in the Strategic Environmental Assessment, a multicriteria expert evaluation (semi-quantitative method) of the planned solutions was used in relation to a set of goals and relevant indicators, based on a basic set of sustainable development indicators. The evaluation of planned solutions encompassed the significance and magnitude of impacts, the spatial conditions, and the probability of impacts of the planned solutions on the environment and on the defined goals of the strategic assessment. The framework for the implementation of the Spatial Plan is presented, designed for the simultaneous elaboration of planning solutions at two levels: the strategic level of the spatial plan and the level of the detailed urban plan. The relationship of the Spatial Plan to other planning documents applicable to the planning area is also analyzed. The effect of this methodological approach is to enable integrated planning of the sustainable development of the highway infrastructure corridor and its surrounding area, through the coordination of spatial, urban, and sectoral traffic planning and design, as well as the participation of all key actors in the adoption and implementation of planning decisions. The conclusions of the paper point to directions for further research, particularly in terms of harmonizing the methodology of planning documentation with the preparation of technical design documentation.

Keywords: corridor, environment, highway, impact, methodology, spatial plan, urban

Procedia PDF Downloads 180
297 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation and recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices, such as the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 represents a highly deteriorated pavement and 100 a newly constructed pavement. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area), and is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially artificial neural networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their efficiency in capturing non-linear relationships and dealing with large and uncertain amounts of data. Typical regression models, which require a pre-defined relationship, can be replaced by ANNs, which have been found to be an appropriate tool for predicting different pavement performance indices against different factors. Accordingly, the objective of the present study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different damage types (alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off), each at three severity levels (low, medium, high). The developed model was trained using 536,000 samples and tested on 134,000 samples, collected and prepared by The National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into a PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
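A minimal sketch of a network with the 33 inputs described above (11 distress types × 3 severity levels) is shown below using a generic MLP regressor; the synthetic training data and deduct-style target are placeholders, not the 536,000 field samples or the authors' architecture.

```python
# MLP regressor mapping 33 inputs (11 distress types x 3 severity levels, each a
# percentage of pavement area) to a PCI value in [0, 100]. Training data are
# synthetic placeholders, not the study's field samples.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 33                       # 11 distresses x 3 severities
X = rng.uniform(0, 20, size=(n_samples, n_features))   # distress densities (% area)

# Placeholder deduct logic: heavier weights for higher-severity columns.
severity_weight = np.tile([0.5, 1.0, 2.0], 11)
pci = np.clip(100 - X @ severity_weight * 0.15 + rng.normal(0, 2, n_samples), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, pci, test_size=0.2, random_state=1)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=1)
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(model.score(X_te, y_te), 3))
```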

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 107
296 Howard Mold Count of Tomato Pulp Commercialized in the State of São Paulo, Brazil

Authors: M. B. Atui, A. M. Silva, M. A. M. Marciano, M. I. Fioravanti, V. A. Franco, L. B. Chasin, A. R. Ferreira, M. D. Nogueira

Abstract:

Fungi attack large amounts of fruit, and fruits that have suffered an injury to the surface are more susceptible to fungal growth, as the fungi possess pectinolytic enzymes that destroy the edible portion, forming an amorphous, soft mass. The spores can reach the plant through wind, rain, and insects, and the fruit may carry on its surface, besides contaminants from the fruit trees, soil, and water, a flora composed mainly of yeasts and molds. Other contamination can occur through the equipment used for harvesting, the use of contaminated boxes and washing water, and storage in dirty places. Hyphae in tomato products indicate the use of contaminated raw material or unsuitable hygiene conditions during processing. Although fungi are inactivated in the heat-processing step, their hyphae remain in the final product, and their detection and quantification is an indicator of raw material quality. The Howard count of fungal mycelia in industrialized pulps evaluates the amount of decayed fruit in the raw material. The Brazilian legislation governing processed and packaged products sets a limit of 40% positive fields in tomato pulps. The aim of this study was to evaluate the quality of tomato pulp sold in greater São Paulo through monitoring over the four seasons of the year. Throughout 2010, 110 samples were examined: 21 taken in spring, 31 in summer, 31 in fall, and 27 in winter, all from different lots and trademarks. Samples were purchased in several stores located in the city of São Paulo. The Howard method was used, as recommended by the AOAC, 19th ed., 2011 (method 965.41). One hundred percent of the samples contained fungal mycelia. The average mycelium count per season was 23%, 28%, 8.2%, and 9.9% in spring, summer, fall, and winter, respectively. Of the 21 spring samples analyzed, 14.3% were above the limit proposed by the legislation. All the fall and winter samples complied with the legislation, and the average mycelial filament count did not exceed 20%, which can be explained by the low temperatures during this time of the year. The samples acquired in summer and spring showed a high percentage of fungal mycelium in the final product, related to the high temperatures in these seasons. Considering the limit of 40% positive fields accepted by the Brazilian legislation (RDC nº 14/2014), 3 spring samples (14%) and 6 summer samples (19%) were over this limit and subject to legal penalties. According to the gathered data, 82% of the manufacturers of this product manage to keep acceptable levels of fungal mycelia in their products. In conclusion, only 9.2% of the samples exceeded the limit established by Resolution RDC nº 14/2014, showing that the limit of 40% is feasible and can be adopted by industries in this segment. The mycelial filament count by the Howard method is an important tool in microscopic analysis, since it measures the quality of the raw material used in the production of tomato products.
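The Howard count itself reduces to the percentage of microscope fields containing fungal mycelia, compared against the 40% regulatory limit; a minimal sketch is shown below, with hypothetical field counts.

```python
# Howard mold count: percentage of positive microscope fields, checked against
# the 40 % limit of RDC no. 14/2014. Field counts are hypothetical examples.
def howard_percent_positive(positive_fields: int, total_fields: int) -> float:
    return 100.0 * positive_fields / total_fields

samples = {"summer lot A": (14, 25), "winter lot B": (2, 25)}   # (positive, examined)
for name, (pos, total) in samples.items():
    pct = howard_percent_positive(pos, total)
    verdict = "exceeds" if pct > 40 else "within"
    print(f"{name}: {pct:.0f} % positive fields ({verdict} the 40 % limit)")
```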

Keywords: fungi, Howard method, tomato, pulps

Procedia PDF Downloads 354
295 Factors Associated with Hand Functional Disability in People with Rheumatoid Arthritis: A Systematic Review and Best-Evidence Synthesis

Authors: Hisham Arab Alkabeya, A. M. Hughes, J. Adams

Abstract:

Background: People with Rheumatoid Arthritis (RA) continue to experience problems with hand function despite new drug advances and targeted medical treatment. Consequently, it is important to identify the factors that influence the impact of RA disease on hand function. This systematic review identified observational studies that reported factors influencing the impact of RA on hand function. Methods: The MEDLINE, EMBASE, CINAHL, AMED, PsycINFO, and Web of Science databases were searched from January 1990 up to March 2017. Full-text articles published in English that described factors related to hand functional disability in people with RA were selected following predetermined inclusion and exclusion criteria. Pertinent data were thoroughly extracted and documented using a pre-designed data extraction form by the lead author, and cross-checked by the review team for completeness and accuracy. Factors related to hand function were classified under the domains of the International Classification of Functioning, Disability, and Health (ICF) framework and health-related factors. Three reviewers independently assessed the methodological quality of the included articles using the quality of cross-sectional studies (AXIS) tool. Factors related to hand function that were investigated in two or more studies were explored using a best-evidence synthesis. Results: Twenty articles from 19 studies met the inclusion criteria from 1,271 citations; all presented cross-sectional data (five high-quality and 15 low-quality studies), resulting in at best limited evidence in the best-evidence synthesis. For the factors classified under the ICF domains, the best-evidence synthesis indicates that a range of body structure and function factors were related to hand functional disability; the key factors were hand strength, disease activity, and pain intensity. A low functional status (physical, emotional, and social) was found to be related to limited hand function. For personal factors, there is limited evidence that gender is not related to hand function, whereas conflicting evidence was found regarding the relationship between age and hand function. In the domain of environmental factors, there was limited evidence that work activity was not related to hand function. Regarding health-related factors, there was limited evidence that the level of rheumatoid factor (RF) was not related to hand function. Finally, conflicting evidence was found regarding the relationship between hand function and disease duration and general health status. Conclusion: Studies focused on body structure and function factors, highlighting a lack of investigation into personal and environmental factors when considering the impact of RA on hand function. The available evidence was limited but indicated that modifiable factors such as grip or pinch strength, disease activity, and pain are the most influential factors on hand function in people with RA. The review findings suggest that important personal and environmental factors that impact hand function in people with RA are not yet considered or reported in clinical research. Well-designed longitudinal, preferably cohort, studies are now needed to better understand the causality between personal and environmental factors and hand functional disability in people with RA.

Keywords: factors, hand function, rheumatoid arthritis, systematic review

Procedia PDF Downloads 121
292 A System for Preventing Inadvertent Exposure of Staff Present outside the Operating Theater: Description and Clinical Test

Authors: Aya Al Masri, Kamel Guerchouche, Youssef Laynaoui, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: Mobile C-arms move throughout the operating rooms of the operating theater. Being designed to move between rooms, they are not equipped with relays to retrieve exposure information and export it outside the room. Therefore, no light signaling is available outside the room to warn staff of X-ray emission. Inadvertent exposure of staff outside the operating theater is a real problem for radiation protection. The French standard NFC 15-160 requires that: (1) access to any room containing an X-ray emitting device must be controlled by light signage so that it cannot be inadvertently crossed, and (2) an emergency button must be provided to stop the X-ray emission. This study presents a system that we developed to meet these requirements and the results of its clinical test. Materials and methods: The system is composed of two communicating boxes. The "DetectBox" is to be installed inside the operating theater; it identifies the various operation states of the C-arm by analyzing its power supply signal and communicates wirelessly with the second box (AlertBox). The "AlertBox" can operate on mains or battery power and is to be installed outside the operating theater; it detects and reports the state of the C-arm by emitting a real-time light signal. The latter can have three different colors: red when the C-arm is emitting X-rays, orange when it is powered on but not emitting X-rays, and green when it is powered off. The two boxes communicate over a radiofrequency link carried exclusively in the 'Industrial, Scientific and Medical' (ISM) frequency bands, which allows several on-site warning systems to coexist without communication conflicts (interference). Taking into account the complexity of performing electrical works in the operating theater (for reasons of hygiene and continuity of medical care), this system (with a size < 10 cm²) works in complete safety without any intrusion into the mobile C-arm and does not require specific electrical installation work. The system is equipped with an emergency button that stops X-ray emission. The system has been clinically tested. Results: The clinical test of the system shows that it detects X-rays of both high and low energy (50–150 kVp) and high and low photon flow (0.5–200 mA), even when emitted for a very short time (< 1 ms), with a probability of false detection below 10⁻⁵; it operates under all acquisition modes (continuous, pulsed, fluoroscopy mode, image mode, subtraction and movie mode); and it is compatible with all C-arm models and brands. We have also tested the communication between the two boxes (DetectBox and AlertBox) in several conditions: (1) unleaded rooms, (2) leaded rooms, and (3) rooms with particular configurations (airlocks, great distances, concrete walls, 3 mm of lead). The result of these last tests was positive. Conclusion: This system is a reliable tool to alert staff present outside the operating room to X-ray emission and ensure their radiation protection.
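To make the signalling logic concrete, the sketch below maps the three C-arm states described above to the AlertBox light colours. It is an illustrative model only, not the device firmware: classifying the state from an RMS supply current with fixed thresholds is an assumption introduced for this example, since the abstract states only that the DetectBox analyses the C-arm power-supply signal.

```python
from enum import Enum

class CArmState(Enum):
    POWERED_OFF = "powered_off"
    POWERED_ON_IDLE = "powered_on_idle"   # on, but not emitting X-rays
    EMITTING = "emitting"                  # X-rays being produced

# Mapping described in the abstract: light colour shown outside the room
STATE_TO_LIGHT = {
    CArmState.POWERED_OFF: "green",
    CArmState.POWERED_ON_IDLE: "orange",
    CArmState.EMITTING: "red",
}

def classify_state(rms_current_a: float,
                   emission_threshold_a: float = 0.5,
                   idle_threshold_a: float = 0.05) -> CArmState:
    """Toy classifier of the C-arm state from an RMS supply current.
    The thresholds are hypothetical; the real DetectBox analyses the
    power-supply signal in more detail."""
    if rms_current_a >= emission_threshold_a:
        return CArmState.EMITTING
    if rms_current_a >= idle_threshold_a:
        return CArmState.POWERED_ON_IDLE
    return CArmState.POWERED_OFF

def alertbox_light(rms_current_a: float) -> str:
    return STATE_TO_LIGHT[classify_state(rms_current_a)]

if __name__ == "__main__":
    for current in (0.0, 0.1, 5.0):
        print(current, "A ->", alertbox_light(current))
```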

Keywords: clinical test, inadvertent staff exposure, light signage, operating theater

Procedia PDF Downloads 101
293 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics such as gender and experience has been extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinic resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). Besides, this research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained based on the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
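A minimal sketch of the modelling approach described above is given below: a gradient boosting regressor scored with MAPE on synthetic records. The feature names mirror the variables listed in the abstract (patient type, doctor's experience, specialty, appointment day), but the data, columns and hyperparameters are placeholders, not the study's EMR dataset or the authors' code.

```python
# Illustrative sketch, not the authors' implementation: predict consultation
# duration with gradient boosting and evaluate with MAPE on synthetic data.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "patient_type": rng.choice(["new", "established"], n),            # hypothetical columns
    "doctor_experience_yrs": rng.integers(1, 35, n),
    "specialty": rng.choice(["cardiology", "dermatology", "gp"], n),
    "appointment_day": rng.choice(["Mon", "Tue", "Wed", "Thu", "Fri"], n),
})
# Synthetic target: longer visits for new patients, shorter for experienced doctors
df["duration_min"] = (15 + 10 * (df["patient_type"] == "new")
                      - 0.1 * df["doctor_experience_yrs"] + rng.normal(0, 4, n))

X, y = df.drop(columns="duration_min"), df["duration_min"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["patient_type", "specialty", "appointment_day"])],
        remainder="passthrough")),
    ("gbm", GradientBoostingRegressor(random_state=0)),
])
model.fit(X_train, y_train)
print("MAPE:", mean_absolute_percentage_error(y_test, model.predict(X_test)))
```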

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 88
292 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?

Authors: Karim Hrrath, Khaled Essalah, Christophe Morell, Henry Chermette, Salima Boughdiri

Abstract:

Among the carbon-carbon bond formation types, the allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used method. This reaction is a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of the pressing demand for an efficient and straightforward method for synthesizing the desired product, a thorough analysis of the various aspects of the reaction process is an important task. The product afforded by the reaction of active methylene compounds with BH alcohols depends largely on the experimental conditions, notably on the catalyst properties. All experiments reported that catalysis is needed for this reaction type because the hydroxyl group of the alcohol is a poor leaving group. Among the catalysts, several transition-metal-based systems, such as palladium in the presence of acid or base, have been used and are considered reliable methods. Furthermore, acid catalysts such as BF3.OEt2, BiX3 (X = Cl, Br, I, (OTf)3), InCl3, Yb(OTf)3, FeCl3, p-TsOH and H-montmorillonite have been employed to activate C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report recently appeared describing a smooth process in which 4-dimethylaminopyridine (DMAP) catalyzes the allylation of active methylene compounds with a cyclic Baylis-Hillman (BH) alcohol. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (noted P1), corresponding to direct allylation instead of conjugate allylation, which would involve the most electrophilic center according to the effect of the electron-withdrawing CO group. The main objective of the present theoretical study is to better understand the role of the DMAP catalytic activity as well as the process leading to the end-product (P1) for the catalytic reaction of a cyclic BH alcohol with active methylene compounds. For that purpose, we have carried out computations on a set of active methylene compounds varying by R1 and R2 toward the same alcohol, and we have attempted to rationalize the mechanisms using the acid-base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index and dual descriptor, as these approaches have shown good prediction of reaction products. The present work is organized as follows: in the first part, some computational details are given, introducing the reactivity indexes used in the present work; Section 3 is dedicated to the discussion of the prediction of the selectivity and regioselectivity; and the paper ends with some concluding remarks. In this work, we have shown, through DFT calculations at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with the cyclic BH alcohol is governed by orbital control; hence the end-product, denoted P1, is generated by direct allylation.
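For readers less familiar with the conceptual DFT descriptors listed above, the sketch below shows how they are commonly obtained from frontier-orbital energies and atomic charges using standard finite-difference approximations. This is a generic illustration under one common set of conventions, not the authors' procedure, and the numerical values are placeholders rather than results from the paper.

```python
# Hedged sketch of common conceptual-DFT indices (one convention among several);
# HOMO/LUMO energies and atomic charges below are placeholders.
def global_indices(e_homo_ev: float, e_lumo_ev: float):
    mu = 0.5 * (e_homo_ev + e_lumo_ev)    # chemical potential, mu ~ (E_HOMO + E_LUMO)/2
    eta = 0.5 * (e_lumo_ev - e_homo_ev)   # chemical hardness, eta ~ (E_LUMO - E_HOMO)/2
    omega = mu ** 2 / (2.0 * eta)         # electrophilicity index, omega = mu^2 / (2*eta)
    return mu, eta, omega

def condensed_fukui(q_n, q_n_plus_1, q_n_minus_1):
    """Condensed Fukui functions and dual descriptor per atom, from atomic charges
    of the N, N+1 and N-1 electron systems (e.g. Mulliken or Hirshfeld charges)."""
    f_plus = [qn - qp for qn, qp in zip(q_n, q_n_plus_1)]    # sites for nucleophilic attack
    f_minus = [qm - qn for qm, qn in zip(q_n_minus_1, q_n)]  # sites for electrophilic attack
    dual = [fp - fm for fp, fm in zip(f_plus, f_minus)]      # dual descriptor
    return f_plus, f_minus, dual

if __name__ == "__main__":
    mu, eta, omega = global_indices(e_homo_ev=-6.8, e_lumo_ev=-1.2)  # placeholder energies
    print(f"mu = {mu:.2f} eV, eta = {eta:.2f} eV, omega = {omega:.2f} eV")
```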

Keywords: DFT calculation, gas phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state

Procedia PDF Downloads 261
291 Environmental Impact of a New-Build Educational Building in England: Life-Cycle Assessment as a Method to Calculate Whole Life Carbon Emissions

Authors: Monkiz Khasreen

Abstract:

In the context of the global trend towards reducing the carbon footprint of new buildings, the design team is required to make early decisions that have a major influence on embodied and operational carbon. Sustainability strategies should be clear during the early stages of the building design process, as changes made later can be extremely costly. Life-Cycle Assessment (LCA) could be used as the vehicle to carry other tools and processes towards achieving the requested improvement. Although LCA is the 'gold standard' for evaluating buildings from cradle to grave, the lack of detail available at the concept design stage makes LCA very difficult, if not impossible, to use as an estimation tool at early stages. Issues related to transparency and accessibility of information in the building industry are affecting the credibility of LCA studies. A verified database derived from LCA case studies needs to be accessible to researchers, design professionals, and decision makers in order to offer guidance on specific areas of significant impact. This database could be built up from data from multiple sources within a pool of research held in this context. One of the most important factors affecting the reliability of such data is the temporal factor, as building materials, components, and systems are changing rapidly with the advancement of technology, making production more efficient and less environmentally harmful. Recent LCA studies on different building functions, types, and structures are always needed to update databases derived from research and to form case bases for comparison studies. There is also a need to make these studies transparent and accessible to designers. The work in this paper sets out to address this need. The paper presents a life-cycle case study of a new-build educational building in England. The building utilised very current construction methods and technologies and is rated BREEAM Excellent. Carbon emissions of different life-cycle stages and different building materials and components were modelled. Scenario and sensitivity analyses were used to estimate the future of new educational buildings in England. The study attempts to provide an indicator for the early design stages of similar buildings. The carbon dioxide emissions of this case-study building, when normalised by floor area, lie towards the lower end of the range of worldwide data reported in the literature. Sensitivity analysis shows that life-cycle assessment results are highly sensitive to assumptions made at the design stage about the future, such as changes in the electricity generation mix over time, refurbishment processes and recycling. The analyses also show that large savings in carbon dioxide emissions can result from very small changes at the design stage.
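A minimal sketch of the normalisation and scenario logic discussed above is given below: whole-life carbon is taken as embodied plus operational emissions over the study period, divided by floor area, with a simple grid-decarbonisation scenario as the sensitivity test. All quantities are illustrative assumptions, not results from the case-study building.

```python
# Illustrative sketch of whole-life carbon per floor area with a grid-decarbonisation
# scenario; every input value is a placeholder, not data from the paper.
def whole_life_carbon_per_m2(embodied_kgco2e: float,
                             annual_electricity_kwh: float,
                             grid_factor_kgco2e_per_kwh: float,
                             study_period_yr: int,
                             annual_grid_decarbonisation: float,
                             floor_area_m2: float) -> float:
    operational = 0.0
    factor = grid_factor_kgco2e_per_kwh
    for _ in range(study_period_yr):
        operational += annual_electricity_kwh * factor
        factor *= (1.0 - annual_grid_decarbonisation)  # scenario: grid gets cleaner each year
    return (embodied_kgco2e + operational) / floor_area_m2

# Sensitivity to the assumed annual grid-decarbonisation rate (illustrative values only)
for rate in (0.0, 0.02, 0.05):
    result = whole_life_carbon_per_m2(
        embodied_kgco2e=2.5e6, annual_electricity_kwh=3.0e5,
        grid_factor_kgco2e_per_kwh=0.23, study_period_yr=60,
        annual_grid_decarbonisation=rate, floor_area_m2=5000)
    print(f"decarbonisation rate {rate:.0%}: {result:.0f} kgCO2e/m2")
```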

Keywords: architecture, building, carbon dioxide, construction, educational buildings, England, environmental impact, life-cycle assessment

Procedia PDF Downloads 93
290 The ‘Othered’ Body: Deafness and Disability in Nina Raine’s Tribes

Authors: Nurten Çelik

Abstract:

In light of new developments in science, medicine, sociology, psychology and literary theory, body studies has gained huge importance and the body has become a debatable issue. There has emerged, among sociologists and literary theorists, an overwhelming consensus that the body is socially, politically and culturally perceived and constructed, and thus the position of an individual in society is determined in accordance with his or her body image. In this regard, the most contested point is the set of theoretical views propounded in disability studies, where the disabled body is considered to be a site upon which social and political restrictions as well as repressions are inscribed. There has been the widely accepted view that, no matter what kind of disability it is, those with physical, mental or learning impairments face varied social, political and environmental obstacles that prevent them from being an active citizen, worker, lover and even a family member. In parallel with these approaches, the matter of the suffering of disabled individuals attains its place in cinema and literature as well as in theatre studies under the category of disability theatre. One of the prominent plays that deal with physical disability came from the contemporary British playwright Nina Raine. In her award-winning play Tribes, which premiered at the Royal Court Theatre in 2010, Raine develops the social strata in which her deaf protagonist, Billy, caught between two tribes – namely his family and his lover Sylvia, a member of the deaf community – experiences personal and social hardships due to his hearing impairment. In the play, intransigent and self-opinionated family members foster no sense of empathy towards Billy; there is noisy talking and shouting, but no communication, love, compassion or mutual understanding, and language becomes just a tool for the expression of rage and oppression. In the disordered atmosphere of family life, Billy experiences isolation and loneliness. Billy's hopes for success and love are destroyed when Sylvia, troubled between hearing and deafness, rejects him because she does not fully grasp what Billy is experiencing. Drawing upon the hardships Billy undergoes in his relationships with his family and his girlfriend, Tribes problematizes the concept of deafness and explores to what extent a deaf person can find a place in the hearing world. Setting 'disabled' bodies against 'abled' bodies in a family, a microcosm of the society in which bodies are socially shaped and constructed, Tribes dramatizes how disabled bodies are disenfranchised, stigmatised, marginalized and othered on the grounds that they are socially misfit. Tribes, with a specific focus on the dysfunctional family, shows that the lack of communication and empathy numbs the characters to each other's feelings, and thereby they become more disabled than Billy. In conclusion, this paper, with reference to the embodiment of disability and social theories, aims to explore how disabled bodies are socially marked and segregated from family and society.

Keywords: body, deafness, disability, disability theatre, Nina Raine, tribes

Procedia PDF Downloads 218
289 Corporate Social Responsibility and Corporate Reputation: A Bibliometric Analysis

Authors: Songdi Li, Louise Spry, Tony Woodall

Abstract:

Corporate Social Responsibility (CSR) has become a buzzword, and more and more academics are devoting effort to CSR studies. It is believed that CSR can influence Corporate Reputation (CR), and the favourable view is that CSR leads to a positive CR. To be specific, CSR-related activities in the reputational context have been associated with excellent financial performance, value creation, etc. It is also argued that CSR and CR are two sides of one coin; hence, to some extent, doing CSR is equal to establishing a good reputation. Still, there is no consensus on the CSR-CR relationship in the literature; thus, a systematic literature review is much needed. This research conducts a systematic literature review with both bibliometric and content analysis. Data are selected from English-language sources and academic journal articles only; keyword combinations are then applied to identify relevant sources. Data from Scopus and WoS are gathered for the bibliometric analysis. Scopus search results were saved in RIS and CSV formats, and Web of Science (WoS) data were saved in TXT and CSV formats, in order to process the data in the Bibexcel software for further analysis, later visualised with the software VOSviewer. Content analysis was also applied to analyse the data clusters and the key articles. In terms of the topic of CSR-CR, this literature review with bibliometric analysis has made four achievements. First, this paper develops a systematic study which quantitatively depicts the knowledge structure of CSR and CR by identifying terms closely related to CSR-CR (such as 'corporate governance') and clustering the subtopics that emerged in the co-citation analysis. Second, content analysis is performed to gain insight into the findings of the bibliometric analysis in the discussion section. It highlights some insightful implications for the future research agenda; for example, a psychological link between CSR and CR is identified from the results, and emerging economies and qualitative research methods are new elements in the CSR-CR big picture. Third, a multidisciplinary perspective emerges throughout the bibliometric mapping and the co-word and co-citation analyses; hence, this work builds an interdisciplinary structure that could lead to an integrated conceptual framework in the future. Finally, Scopus and WoS are compared and contrasted in this paper; as a result, Scopus, which offers deeper and more comprehensive data, is suggested as a tool for future bibliometric analysis studies. Overall, this paper has fulfilled its initial purposes and contributed to the literature. To the authors' best knowledge, this is the first literature review of CSR-CR research that applies both bibliometric analysis and content analysis; the paper therefore achieves methodological originality. This dual approach brings the advantage of a comprehensive and semantic exploration of the CSR-CR area in a scientific and realistic manner. Admittedly, the work may contain subjective bias in the selection of search terms and papers; triangulation could reduce this bias to some degree.
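As an illustration of the co-word step that underlies such a bibliometric analysis, the sketch below builds a keyword co-occurrence table from a Scopus/WoS CSV export of the kind processed in Bibexcel and visualised in VOSviewer. The column name "Author Keywords" and the semicolon separator are assumptions about the export format, not details reported in the paper.

```python
# Hedged sketch of a keyword co-occurrence count from a bibliographic CSV export;
# the file, column name and separator are assumptions for the example.
from collections import Counter
from itertools import combinations
import csv

def keyword_cooccurrence(csv_path: str, column: str = "Author Keywords", sep: str = ";"):
    """Count how often pairs of keywords appear together in the same record."""
    pairs = Counter()
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            kws = sorted({k.strip().lower() for k in row.get(column, "").split(sep) if k.strip()})
            pairs.update(combinations(kws, 2))
    return pairs

# Example usage (hypothetical file name):
# pairs = keyword_cooccurrence("scopus_export.csv")
# print(pairs.most_common(10))   # the pairs a VOSviewer-style co-word map is built from
```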

Keywords: corporate social responsibility, corporate reputation, bibliometric analysis, software program

Procedia PDF Downloads 102
288 Religious Discourses and Their Impact on Regional and Global Geopolitics: A Study of Deobandi in India, Pakistan and Afghanistan

Authors: Soumya Awasthi

Abstract:

The spread of radical ideology is possible not merely through public meetings, protests, and mosques but also in schools, seminaries, and madrasas. The rhetoric created around the relationship between religion and conflict has been a primary factor in instigating global conflicts when religion is used to achieve broader objectives. There have been numerous cases of religion-driven conflict around the world, be it the Jewish revolt between 66 AD and 628 AD, the Crusades revolt of 1119 AD, the Cold War period, or the rise of right-wing politics in India. Some of the major developments that reiterate the significance of religion in contemporary times include: (1) the emergence of theocracy in Iran in 1979; (2) the resurgence of worldwide religious beliefs in the post-Soviet space; and (3) the emergence of transnational terrorism shaped by a twisted depiction of Islam by the self-proclaimed protectors of the religion. This paper is therefore premised on the argument that religion has always found itself on the periphery of the discipline of International Relations (IR) and has received less attention than it deserves. The focus of the topic is on the discourses of 'Deobandi' and their impact both on the geopolitics of the region – particularly in India, Pakistan, and Afghanistan – and at the global level. Discourse is a mechanism in use since time immemorial and has been a key tool to mobilise masses against a ruling authority. With the help of field surveys and qualitative and analytical methods of research in religion and international relations, it has been found that there are numerous madrassas that operate illegally and are unregistered. These seminaries operate in Khyber-Pakhtunkhwa and the Federally Administered Tribal Areas (FATA). During the Soviet invasion of Afghanistan in 1979, the relation between religion and geopolitics was highlighted by a sudden spread of radical ideas, finding support from countries like Saudi Arabia (which funded the campaign) and Pakistan (which organised the Saudi funds and set up training camps, both educational and military). During this period there was a huge influence of Wahabi theology on the madrasas, which started with the Deoband philosophy and later became a mix of Wahabi (influenced by Ahmad Ibn Hannabal and Ibn Taimmiya) and Deobandi philosophy, tending towards fundamentalism. Later, the impact of regional geopolitics was felt on global geopolitics through incidents such as the attack on the US in 2001 and the bomb blasts in the UK, Indonesia, Turkey, and Israel in the 2000s. In the midst of all this, several scholars pointed towards the Deobandi philosophy as one of the drivers in the creation of armed Islamic groups in Pakistan and Afghanistan. Hence, this paper attempts to understand how Deobandi religious discourses originating from India have changed over the decades, and who the agents of such changes are. It throws light on Deoband from pre-independence to date in order to create a narrative around the religious discourses and Deobandi philosophy and their spill-over impact on the map of global and regional security.

Keywords: Deobandi School of Thought, radicalization, regional and global geopolitics, religious discourses, Wahabi movement

Procedia PDF Downloads 191
287 Translation and Validation of the Thai Version of the Japanese Sleep Questionnaire for Preschoolers

Authors: Natcha Lueangapapong, Chariya Chuthapisith, Lunliya Thampratankul

Abstract:

Background: There is a need for an appropriate tool to help healthcare providers determine sleep problems in children for early diagnosis and management. The Japanese Sleep Questionnaire for Preschoolers (JSQ-P) is a parent-reported sleep questionnaire that has good psychometric properties and can be used in the context of Asian culture, which makes it likely suitable for Thai children. Objectives: This study aimed to translate the Japanese Sleep Questionnaire for Preschoolers (JSQ-P) into a Thai version, validate it, and evaluate factors associated with sleep disorders in preschoolers. Methods: After approval by the original developer, the cross-cultural adaptation process of the JSQ-P was performed, including forward translation, reconciliation, backward translation, and final approval of the Thai version of the JSQ-P (TH-JSQ-P) by the original creator. This study was conducted between March 2021 and February 2022. The TH-JSQ-P was completed twice within 10-14 days by 2,613 guardians of children aged 2-6 years to assess its reliability and validity. Content validity was measured by the index of item-objective congruence (IOC) and the content validity index (CVI). Face validity, content validity, structural validity, construct validity (discriminant validity), criterion validity and predictive validity were assessed. The sensitivity and specificity of the TH-JSQ-P were also measured using the total JSQ-P score cutoff point of 84 recommended by the original JSQ-P, as well as each subscale score, among clinical samples with obstructive sleep apnea syndrome. Results: Internal consistency reliability, evaluated by Cronbach's α coefficient, was acceptable for all subscales of the JSQ-P. The questionnaire also had good test-retest reliability, as the intraclass correlation coefficient (ICC) for all items ranged between 0.42 and 0.84. Content validity was acceptable. For structural validity, our results indicated that the final factor solution for the TH-JSQ-P was comparable to the original JSQ-P. For construct validity, age group was one of the clinical parameters associated with some sleep problems: parasomnias, insomnia, excessive daytime sleepiness and sleep habits significantly decreased as the children got older, whereas insufficient sleep significantly increased with age. For criterion validity, all subscales showed a correlation with the Epworth Sleepiness Scale (r = -0.049 to 0.349). For predictive validity, the Epworth Sleepiness Scale was a strong factor significantly correlated with sleep problems in all subscales of the JSQ-P except the sleep habits subscale. The sensitivity and specificity of the total JSQ-P score were 0.72 and 0.66, respectively. Conclusion: The Thai version of the JSQ-P has good internal consistency and test-retest reliability. It passed six validity tests and can be used to evaluate sleep problems in preschool children in Thailand. Furthermore, it has satisfactory general psychometric properties and good reliability and validity. The data collected in examining the sensitivity of the Thai version revealed that the JSQ-P could detect differences in sleep problems among children with obstructive sleep apnea syndrome. This confirms that the measure is sensitive and can be used to discriminate sleep problems among different children.
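As a worked illustration of the cutoff-based classification reported above, the sketch below computes sensitivity and specificity for a total-score cutoff of 84. The scores and labels are made up for the example; they are not the study data.

```python
# Illustrative calculation of sensitivity/specificity at a score cutoff of 84;
# the scores and disorder labels below are toy values, not TH-JSQ-P data.
def sensitivity_specificity(scores, has_disorder, cutoff=84):
    tp = sum(s >= cutoff and d for s, d in zip(scores, has_disorder))       # true positives
    fn = sum(s < cutoff and d for s, d in zip(scores, has_disorder))        # false negatives
    tn = sum(s < cutoff and not d for s, d in zip(scores, has_disorder))    # true negatives
    fp = sum(s >= cutoff and not d for s, d in zip(scores, has_disorder))   # false positives
    return tp / (tp + fn), tn / (tn + fp)

scores = [92, 75, 110, 60, 88, 79, 101, 83]
labels = [True, False, True, False, True, False, True, True]
sens, spec = sensitivity_specificity(scores, labels)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```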

Keywords: preschooler, questionnaire, validation, Thai version

Procedia PDF Downloads 61
286 Prognostic Factors for Mortality and Duration of Admission in Malnourished Hospitalized, Elderly Patients: A Cross-Sectional Study

Authors: Christos E. Lampropoulos, Maria Konsta, Vicky Dradaki, Irini Dri, Tamta Sirbilatze, Ifigenia Apostolou, Christina Kordali, Konstantina Panouria, Kostas Argyros, Georgios Mavras

Abstract:

Malnutrition in hospitalized patients is related to increased morbidity and mortality. The purpose of our study was to assess the nutritional status of hospitalized elderly patients with various nutritional scores and to detect unfavorable prognostic factors related to increased mortality and extended duration of admission. Methods: 150 patients (78 men, 72 women, mean age 80±8.2) were included in this cross-sectional study. Nutritional status was assessed by the Mini Nutritional Assessment (MNA, full and short form), the Malnutrition Universal Screening Tool (MUST) and the short Nutritional Appetite Questionnaire (sNAQ). The following data were incorporated in the analysis: anthropometric and laboratory data, physical activity (International Physical Activity Questionnaires, IPAQ), smoking status, dietary habits and Mediterranean diet (assessed by the MedDiet score), cause and duration of current admission, and medical history (co-morbidities, previous admissions). Primary endpoints were mortality (from admission until 6 months afterwards) and duration of admission, compared to national guidelines for closed consolidated medical expenses. Mann-Whitney two-sample statistics or the t-test was used for group comparisons and Spearman or Pearson coefficients for testing correlation between variables. Results: Normal nutrition was assessed in 54/150 (36%), 92/150 (61.3%) and 106/150 (70.7%) of patients according to the full MNA, MUST and sNAQ questionnaires, respectively. The mortality rate was 20.7% (31/150 patients). Patients who died within 6 months after admission had lower BMI (24±4.4 vs 26±4.8, p=0.04) and albumin levels (2.9±0.7 vs 3.4±0.7, p=0.002), and significantly lower full MNA (14.5±7.3 vs 20.7±6, p<0.0001) and short-form MNA scores (7.3±4.2 vs 10.5±3.4, p=0.0002) compared to survivors. In contrast, these patients had higher MUST (2.5±1.8 vs 0.5±1.02, p<0.0001) and sNAQ scores (2.9±2.4 vs 1.1±1.3, p<0.0001). Additionally, they showed significantly lower MedDiet (23.5±4.3 vs 31.1±5.6, p<0.0001) and IPAQ scores (37.2±156.2 vs 516.5±1241.7, p<0.0001) compared to the remaining patients. These patients also had extended hospitalization [5 (0-13) days vs 0 (-1-3) days, p=0.001]. Patients admitted due to cancer had a higher mortality rate (10/13, 77%) compared to those admitted due to infections (12/73, 18%), stroke (4/15, 27%) or other causes (4/49, 8%) (p<0.0001). Extension of hospitalization was negatively correlated with both the full (Spearman r=-0.35, p<0.0001) and short-form MNA (Spearman r=-0.33, p<0.0001) and positively correlated with MUST (Spearman r=0.34, p<0.0001) and sNAQ (Spearman r=0.3, p=0.0002). Additionally, the extension was inversely related to the MedDiet score (Spearman r=-0.35, p<0.0001), IPAQ score (Spearman r=-0.34, p<0.0001), albumin levels (Pearson r=-0.36, p<0.0001), Ht (Pearson r=-0.2, p=0.02) and Hb (Pearson r=-0.18, p=0.02). Conclusion: A great proportion of elderly, hospitalized patients are malnourished or at risk of malnutrition. All nutritional scores, physical activity and albumin are significantly related to mortality and increased hospitalization.
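A sketch of the statistical workflow described above (non-parametric group comparison by outcome and rank correlation with length of stay) is given below, using synthetic data whose group means echo the abstract. It is illustrative only and does not reproduce the study dataset or its results.

```python
# Illustrative sketch of the group comparison and correlation steps; the data are
# synthetic (means borrowed from the abstract as placeholders), not the study cohort.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(1)
mna_survivors = rng.normal(20.7, 6.0, 119)   # full MNA, survivors (synthetic)
mna_deceased = rng.normal(14.5, 7.3, 31)     # full MNA, deceased (synthetic)

# Group comparison of nutritional score by mortality outcome
u_stat, p_value = mannwhitneyu(mna_deceased, mna_survivors, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Rank correlation between nutritional score and extension of hospitalisation
extra_days = rng.integers(-1, 14, 150)       # toy extension-of-stay values
mna_all = rng.normal(19.0, 7.0, 150)
rho, p_rho = spearmanr(mna_all, extra_days)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```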

Keywords: dietary habits, duration of admission, malnutrition, prognostic factors for mortality

Procedia PDF Downloads 264
285 Comparative Appraisal of Polymeric Matrices Synthesis and Characterization Based on Maleic versus Itaconic Anhydride and 3,9-Divinyl-2,4,8,10-Tetraoxaspiro[5.5]-Undecane

Authors: Iordana Neamtu, Aurica P. Chiriac, Loredana E. Nita, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Alina Diaconu

Abstract:

In the last decade, the attention of many researchers has focused on the synthesis of innovative 'intelligent' copolymer structures with great potential for different uses. This considerable scientific interest is stimulated by the possibility of significant improvements in the physical, mechanical, thermal and other important specific properties of these materials. Functionalization of a polymer during synthesis, by designing a suitable composition with the desired properties and applications, is recognized as a valuable tool. In this work, a comparative study is presented of the properties of the new copolymers poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) and poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) obtained by radical polymerization in dioxane, using 2,2′-azobis(2-methylpropionitrile) as the free-radical initiator. The comonomers are able to generate special effects such as network formation, biodegradability and biocompatibility, gel formation capacity, binding properties, amphiphilicity, good oxidative and thermal stability, good film-forming ability, and temperature and pH sensitivity. Maleic anhydride (MA) and its isostructural analog itaconic anhydride (ITA), as polyfunctional monomers, are widely used in the synthesis of reactive macromolecules with linear, hyperbranched and self-assembled structures to prepare high-performance engineering, bioengineering and nanoengineering materials. The incorporation of spiroacetal groups in polymer structures improves the solubility and the adhesive properties, induces good oxidative and thermal stability, and yields good fibers or films with good flexibility and tensile strength. Also, the spiroacetal rings induce interactions at the ether oxygen, such as hydrogen bonds or coordinate bonds with other functional groups, determining bulkiness and stiffness. The synthesized copolymers are analyzed by DSC, oscillatory and rotational rheological measurements and dielectric spectroscopy, with the aim of elucidating the heating behavior and the solution viscosity as a function of shear rate and temperature, and of investigating the relaxation processes and the motion of functional groups present in the side chain around the main chain or bonds of the side chain. Acknowledgments: This work was financially supported by the grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-132/2014 'Magnetic biomimetic supports as alternative strategy for bone tissue engineering and repair' (MAGBIOTISS).

Keywords: poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); DSC; oscillatory and rotational rheological analysis; dielectric spectroscopy

Procedia PDF Downloads 200
284 Evaluation of Kabul BRT Route Network with Application of Integrated Land-use and Transportation Model

Authors: Mustafa Mutahari, Nao Sugiki, Kojiro Matsuo

Abstract:

The four decades of war, lack of job opportunities, poverty, lack of services, and natural disasters in different provinces of Afghanistan have contributed to a rapid increase in the population of Kabul, the capital city of Afghanistan. A population census has not been conducted since 1979, the first and last population census in Afghanistan. However, according to estimates by Afghan authorities, the population of Kabul now exceeds 4 million people, whereas the city was designed for two million. Although the major transport mode of Kabul residents is public transport, the responsible authorities have failed to supply the required transportation systems for the city. Besides, informal resettlement, the lack of intersection control devices, the presence of illegal vendors on streets, illegal and unstandardized on-street parking and bus stops, drivers' unprofessional behavior, weak traffic law enforcement, and blocked roads and sidewalks have contributed to the extreme traffic congestion of Kabul. In 2018, the government of Afghanistan approved the Kabul City Urban Design Framework (KUDF), a vision for the future of Kabul that provides strategies and design guidance at different scales to direct urban development. Considering the traffic congestion of the city and its budget limitations, the KUDF proposes a BRT route network with seven lines to reduce the traffic congestion, and more than 50% of the Kabul population is said to benefit from this service. Based on the KUDF, it is planned to increase the BRT mode share from 0% to 17% and later to 30% in the medium- and long-term planning scenarios, respectively. Therefore, a detailed research study is needed to evaluate the proposed system before the implementation stage starts. The integrated land-use and transport model is an effective tool to evaluate the Kabul BRT because of its future assessment capabilities, which take into account the interaction between land use and transportation. This research aims to analyze and evaluate the proposed BRT route network with the application of an integrated land-use and transportation model. The research estimates the population distribution and travel behavior of Kabul at small spatial scales. The actual road network and detailed land-use data of the city are used to perform the analysis. The BRT corridors are evaluated not only for their impacts on the spatial interactions in the city's transportation system but also on spatial developments. Therefore, the BRT is evaluated under scenarios of improving the Kabul transportation system based on the distribution of land use or spatial developments, the planned development typology and the population distribution of the city. The impacts of the new, improved transport system on the BRT network are analyzed and the BRT network is evaluated accordingly. In addition, the research also focuses on the spatial accessibility of BRT stops, corridors, and BRT line beneficiaries, and each BRT stop and corridor is evaluated in terms of both access and geographic coverage.
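As a simple illustration of the stop-level accessibility measure mentioned above, the sketch below estimates the share of population whose zone centroid falls within a walking catchment of a BRT stop. The coordinates, populations and 500 m radius are illustrative assumptions, not KUDF or model data.

```python
# Toy accessibility/coverage sketch: share of zone population within a walking
# catchment of any BRT stop. All coordinates, populations and the radius are
# illustrative placeholders.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6_371_000
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def covered_population_share(zones, stops, radius_m=500):
    """zones: list of (lat, lon, population) centroids; stops: list of (lat, lon)."""
    covered = sum(pop for lat, lon, pop in zones
                  if any(haversine_m(lat, lon, s_lat, s_lon) <= radius_m
                         for s_lat, s_lon in stops))
    total = sum(pop for _, _, pop in zones)
    return covered / total

zones = [(34.528, 69.172, 12_000), (34.540, 69.180, 8_000), (34.500, 69.200, 15_000)]
stops = [(34.529, 69.171), (34.541, 69.182)]
print(f"share of population within catchment: {covered_population_share(zones, stops):.0%}")
```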

Keywords: accessibility, BRT, integrated land-use and transport model, travel behavior, spatial development

Procedia PDF Downloads 173
283 Development of a Human Skin Explant Model for Drug Metabolism and Toxicity Studies

Authors: K. K. Balavenkatraman, B. Bertschi, K. Bigot, A. Grevot, A. Doelemeyer, S. D. Chibout, A. Wolf, F. Pognan, N. Manevski, O. Kretz, P. Swart, K. Litherland, J. Ashton-Chess, B. Ling, R. Wettstein, D. J. Schaefer

Abstract:

Skin toxicity is poorly detected during preclinical studies, and drug-induced side effects in humans such as rashes, hyperplasia or more serious events like bullous pemphigus or toxic epidermal necrolysis represent an important hurdle for clinical development. In vitro keratinocyte-based epidermal skin models are suitable for the detection of chemical-induced irritancy but do not recapitulate the biological complexity of full skin and fail to detect potentially serious side effects. Normal healthy skin explants may represent a valuable complementary tool, having the advantage of retaining the full skin architecture and the resident immune cell diversity. This study investigated several conditions for the maintenance of good morphological structure after several days of culture and for the retention of phase II metabolism for 24 hours in skin explants in vitro. Human skin samples were collected with informed consent from patients undergoing plastic surgery and immediately transferred to and processed in our laboratory by removing the underlying dermal fat. Punch biopsies of 4 mm diameter were cultured at an air-liquid interface using transwell filters. Different culture conditions, such as the effect of calcium, temperature and cultivation media, were tested over a period of 14 days, and explants were histologically examined after hematoxylin and eosin staining. Our results demonstrated that the use of Williams E Medium at 32°C maintained the physiological integrity of the skin for approximately one week. Upon prolonged incubation, the upper layers of the epidermis become thickened and some dead cells are present. Interestingly, these effects were prevented by the addition of EGFR inhibitors such as Afatinib or Erlotinib. Phase II metabolism of the skin, such as glucuronidation (4-methylumbelliferone), sulfation (minoxidil), N-acetyltransferase activity (p-toluidine), catechol methylation (2,3-dihydroxynaphthalene), and glutathione conjugation (chlorodinitrobenzene), was analyzed by LC-MS. Our results demonstrated that the human skin explants possess metabolic activity for a period of at least 24 hours for all the substrates tested. A time course for glucuronidation with 4-methylumbelliferone was performed and a linear correlation was obtained over a period of 24 hours. Longer-term culture studies will indicate the possible evolution of such metabolic activities. In summary, these results demonstrate that human skin explants maintain a normal structure for several days in vitro and are metabolically active for at least the first 24 hours. Hence, with further characterisation, this model may be suitable for the study of drug-induced toxicity.

Keywords: human skin explant, phase II metabolism, epidermal growth factor receptor, toxicity

Procedia PDF Downloads 257
282 Green Production of Chitosan Nanoparticles and their Potential as Antimicrobial Agents

Authors: L. P. Gomes, G. F. Araújo, Y. M. L. Cordeiro, C. T. Andrade, E. M. Del Aguila, V. M. F. Paschoalin

Abstract:

The application of nanoscale materials and nanostructures is an emerging area, since these materials may provide solutions to technological and environmental challenges while preserving the environment and natural resources. To reach this goal, the increasing demand must be accompanied by 'green' synthesis methods. Chitosan is a natural, nontoxic biopolymer derived by the deacetylation of chitin and has great potential for a wide range of applications in the biological and biomedical areas due to its biodegradability, biocompatibility, non-toxicity and versatile chemical and physical properties. Chitosan also presents high antimicrobial activity against a wide variety of pathogenic and spoilage microorganisms. Ultrasonication is a common tool for the preparation and processing of polymer nanoparticles. It is particularly effective in breaking up aggregates and in reducing the size and polydispersity of nanoparticles. High-intensity ultrasonication has the potential to modify chitosan molecular weight and thus alter or improve chitosan functional properties. The aim of this study was to evaluate the influence of sonication intensity and time on the changes in commercial chitosan characteristics, such as molecular weight, and on its potential antibacterial activity against Gram-negative bacteria. The nanoparticles (NPs) were produced from two commercial chitosans, of medium molecular weight (CS-MMW) and low molecular weight (CS-LMW), from Sigma-Aldrich®. These samples (2%) were solubilized in 100 mM sodium acetate pH 4.0, placed on ice and irradiated with a SONIC ultrasonic probe (750 W model) equipped with a 1/2" microtip for 30 min at 4°C. The probe was used at a constant duty cycle and 40% amplitude with 1/1 s intervals. The ultrasonic degradation of CS-MMW and CS-LMW was followed by means of ζ-potential (Brookhaven Instruments, model 90Plus) and dynamic light scattering (DLS) measurements. After sonication, the concentrated samples were diluted 100 times and placed in fluorescence quartz cuvettes (Hellma 111-QS, 10 mm light path). The size distributions of the colloidal particles were calculated from the DLS data, and ζ-potential measurements were taken for the CS-MMW and CS-LMW solutions before and after sonication for 30 min (CS-MMW30 and CS-LMW30). The major bands, centered at the hydrodynamic radius (Rh), showed different distributions for CS-MMW (Rh=690.0 nm, ζ=26.52±2.4), CS-LMW (Rh=607.4 and 2805.4 nm, ζ=24.51±1.29), CS-MMW30 (Rh=201.5 and 1064.1 nm, ζ=24.78±2.4) and CS-LMW30 (Rh=492.5, ζ=26.12±0.85). The minimal inhibitory concentration (MIC) was determined using different chitosan sample concentrations. MIC values were determined against E. coli (10⁶ cells) harvested from LB medium (Luria-Bertani, BD™) after 18 h growth at 37 °C. Subsequently, the cell suspension was serially diluted in saline solution (0.8% NaCl) and plated on solid LB at 37°C for 18 h. Colony-forming units were counted. The samples showed different MICs against E. coli: CS-LMW (1.5 mg/mL), CS-MMW30 (1.5 mg/mL) and CS-LMW30 (1.0 mg/mL). The results demonstrate that the production of nanoparticles by modification of their molecular weight through ultrasonication is simple to perform and dispenses with the addition of acid solvent. Molecular weight modifications are enough to provoke changes in the antimicrobial potential of the nanoparticles produced in this way.

Keywords: antimicrobial agent, chitosan, green production, nanoparticles

Procedia PDF Downloads 302