Search results for: comparing ex-ante between ex-post indicator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2785

715 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction

Authors: Khaled Barkaoui

Abstract:

Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, namely complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N = 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, on some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrasal complexity), and on lexical density, sophistication, and bundles, but not on accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not on syntactic complexity. Finally, fluency and frequency of errors, but not accuracy ratings, syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrasal complexity, structural variety), or lexical complexity (lexical density, variation, and sophistication), exhibited significant changes after instruction, particularly for the independent task. We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.
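
As a point of reference, a few of the simpler indices named above have straightforward definitions. The sketch below is only a simplified illustration with naive tokenization, not the authors' coding tools or human-rating procedure; the example essay is hypothetical.

```python
# Illustrative sketch of a few simple CAF indices from raw essay text
# (naive tokenization; a real pipeline would use dedicated L2-writing coding tools).
import re

def caf_indices(essay: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay.lower())
    types = set(words)
    return {
        "fluency_word_count": len(words),                          # number of words written
        "mls": len(words) / max(len(sentences), 1),                # mean length of sentence (global complexity proxy)
        "lexical_variation_ttr": len(types) / max(len(words), 1),  # type-token ratio as a simple variation index
    }

print(caf_indices("The cat sat on the mat. It was warm, and the cat slept."))
```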

Keywords: second language writing, fluency, accuracy, complexity, longitudinal

Procedia PDF Downloads 140
714 Passenger Preferences on Airline Check-In Methods: Traditional Counter Check-In Versus Common-Use Self-Service Kiosk

Authors: Cruz Queen Allysa Rose, Bautista Joymeeh Anne, Lantoria Kaye, Barretto Katya Louise

Abstract:

The study presents the preferences of passengers regarding the quality of service provided by the two airline check-in methods currently present in airports: traditional counter check-in and common-use self-service kiosks. A previous study has shown that airlines perceive self-service kiosks alone as sufficient to ensure adequate service and customer satisfaction, whereas agents and passengers stated that kiosks alone are not enough and that human interaction is essential. In reference to former studies that established opposing ideas about which airline check-in method is the more favorable one to employ, it is the purpose of this study to present a recommendation that fills the gap between the conflicting ideas by comparing the perceived quality of service through the RATER model. Furthermore, this study discusses the major competencies present in each method, supported by two theories: the FIRO Theory of Needs, which upholds the importance of inclusion, control, and affection, and Queueing Theory, which points out the discipline of passengers and the length of the queue line as important factors affecting quality of service. The findings of the study were based on data gathered by the researchers from selected Thomasian third-year and fourth-year college students enrolled in the first semester of the academic year 2014-2015 who had already experienced both airline check-in methods, selected through stratified probability sampling. The statistical treatments applied to interpret the data were mean, frequency, standard deviation, t-test, logistic regression, and the chi-square test. The final point of the study revealed a greater effect on passenger preference of the satisfaction experienced with common-use self-service kiosks in comparison with traditional counter check-in.

Keywords: traditional counter check-in, common-use self-service kiosks, airline check-in methods

Procedia PDF Downloads 398
713 Application of Functionalized Magnetic Particles as Demulsifier for Oil‐in‐Water Emulsions

Authors: Hamideh Hamedi, Nima Rezaei, Sohrab Zendehboudi

Abstract:

Separating emulsified oil contaminations from waste- or produced water is of interest to various industries. The application of magnetic particles (MPs) for separating dispersed and emulsified oil from wastewater is becoming more popular. Stabilization of MPs is required through developing a coating layer on their surfaces to prevent their agglomeration and enhance their dispersibility. In this research, we study the effects of coating material, size, and concentration of iron oxide MPs on oil separation efficiency, using oil adsorption capacity measurements. We functionalize both micro- and nanoparticles of Fe3O4 using sodium dodecyl sulfate (SDS) as an anionic surfactant, cetyltrimethylammonium bromide (CTAB) as a cationic surfactant, and stearic acid (SA). The chemical structures and morphologies of these particles are characterized using Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), and Energy Dispersive X-ray (EDX) spectroscopy. The oil-water separation results indicate that a low dosage of the magnetic nanoparticles coated with CTAB (0.5 g/L MNP-CTAB) results in the highest oil adsorption capacity (nearly 100%) for a 1000 ppm dodecane-in-water emulsion containing ultra-small droplets (250-300 nm), while the separation efficiency of the same dosage of bare MNPs is around 57.5%. Demulsification results for magnetic microparticles (MMPs) also reveal that functionalizing the particles with CTAB increases oil removal efficiency from 86.3% for bare MMPs to 92% for MMP-CTAB. Comparing the results of different coating materials implies that the major interaction is an electrostatic attraction between negatively charged oil droplets and positively charged MNP-CTAB and MMP-CTAB. Furthermore, the synthesized nanoparticles could be recycled and reused; after ten cycles the oil adsorption capacity decreases only slightly, to near 95%. In conclusion, functionalized magnetic particles with high oil separation efficiency could be used effectively in the treatment of oily wastewater. Finally, optimization of the adsorption process is required by considering the effective system variables and fluid properties.
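
The two performance figures quoted above follow from standard mass-balance definitions. The sketch below assumes those conventional definitions and uses hypothetical concentration values, not the study's measured data.

```python
# Illustrative calculation (assumed standard definitions, not the authors' exact protocol):
# separation efficiency from residual oil concentration, and adsorption capacity per gram of particles.

def separation_efficiency(c_initial_ppm: float, c_final_ppm: float) -> float:
    """Percent of emulsified oil removed from the water phase."""
    return 100.0 * (c_initial_ppm - c_final_ppm) / c_initial_ppm

def adsorption_capacity(c_initial_ppm: float, c_final_ppm: float,
                        volume_l: float, particle_dose_g: float) -> float:
    """Oil adsorbed (mg) per gram of magnetic particles."""
    return (c_initial_ppm - c_final_ppm) * volume_l / particle_dose_g

# Hypothetical example: 1000 ppm dodecane-in-water emulsion, 0.5 g/L particle dosage, 1 L sample.
print(separation_efficiency(1000, 5))          # ~99.5 % removal
print(adsorption_capacity(1000, 5, 1.0, 0.5))  # mg oil per g particles
```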

Keywords: oily wastewater treatment, emulsions, oil-water separation, adsorption, magnetic nanoparticles

Procedia PDF Downloads 93
712 By Removing High-Performance Aerobic Scope Phenotypes, Capture Fisheries May Reduce the Resilience of Fished Populations to Thermal Variability and Compromise Their Persistence into the Anthropocene

Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts

Abstract:

For the persistence of fished populations in the Anthropocene, it is critical for adaptive management to predict how fished populations will respond to the coupled threats of exploitation and climate change. The resilience of fished populations will depend on their capacity for physiological plasticity and acclimatization in response to environmental shifts. However, there is evidence for the selection of physiological traits by capture fisheries. Hence, fish populations may have a limited scope for the rapid expansion of their tolerance ranges or physiological adaptation under fishing pressures. To determine the physiological vulnerability of fished populations in the Anthropocene, metabolic performance was compared between a fished and a spatially protected Chrysoblephus laticeps population in response to thermal variability. Individual aerobic scope phenotypes were quantified using intermittent flow respirometry by comparing changes in the energy expenditure of each individual at ecologically relevant temperatures, mimicking the variability experienced as a result of upwelling and downwelling events. The proportions of high- and low-performance individuals were compared between the fished and the spatially protected population. The fished population had limited aerobic scope phenotype diversity and fewer high-performance phenotypes, resulting in a significantly lower aerobic scope curve across low (10 °C) and high (24 °C) thermal treatments. The performance of fished populations may be compromised with predicted future increases in cold upwelling events. This calls for the conservation of the physiologically fittest individuals in spatially protected areas, which can recruit into nearby fished areas, as a climate resilience tool.
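
Aerobic scope is conventionally the difference between maximum and standard metabolic rate at a given temperature. The sketch below assumes that conventional definition and simple min/max estimators; it is an illustration with made-up oxygen-uptake values, not the authors' respirometry analysis.

```python
# Minimal sketch (assumed conventional definitions): aerobic scope for one individual at one
# temperature as the difference between maximum and standard metabolic rate (oxygen uptake, MO2).

def aerobic_scope(mo2_measurements: list[float]) -> dict:
    """mo2_measurements: repeated oxygen-uptake estimates (e.g. mg O2 kg-1 h-1) for one fish."""
    smr = min(mo2_measurements)   # standard metabolic rate, here naively taken as the minimum
    mmr = max(mo2_measurements)   # maximum metabolic rate, e.g. after an exhaustive-chase protocol
    return {"SMR": smr, "MMR": mmr, "aerobic_scope": mmr - smr}

# Hypothetical individual measured at one thermal treatment
print(aerobic_scope([95.0, 102.0, 98.0, 240.0, 110.0]))
```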

Keywords: climate change, fish physiology, metabolic shifts, over-fishing, respirometry

Procedia PDF Downloads 115
711 Comparison of Fuel Properties from Species of Microalgae and Selected Second-Generation Oil Feedstocks

Authors: Andrew C. Eloka Eboka, Freddie L. Inambao

Abstract:

Comparative investigation and assessment of microalgal technology as a biodiesel production option was carried out alongside other second-generation feedstocks. This was done by comparing the fuel properties of the species Chlorella vulgaris, Dunaliella spp., Synechococcus spp. and Scenedesmus spp. with the feedstocks of Jatropha (ex-basirika variety), Hura crepitans, rubber and Natal mahogany seed oils. The microalgae were cultivated in an open pond and in a photobioreactor (New Brunswick BioFlo/CelliGen model BF-115, made in the US) with operating parameters: 14 L capacity, working volume of 7.5 L media including 10% inoculum, optical density of 3.144 at 540 nm, and light intensity of 200 lux, for 23 and 16 days respectively. The produced/accumulated biomasses were harvested by draining, flocculation, centrifugation and drying, and then subjected to lipid extraction processes. The oils extracted from the algae and feedstocks were characterised and used to produce biodiesel fuels by the transesterification method, using a modified optimization protocol. The final biodiesel products were evaluated for chemo-physical and fuel properties. Results revealed Chlorella vulgaris as the best strain for biomass cultivation, having the highest lipid productivity (5.2 mg L-1 h-1) and the highest rate of CO2 absorption (17.85 mg L-1 min-1); the average carbon sequestration in the form of CO2 was 76.6%. The highest biomass productivity was 35.1 mg L-1 h-1 (Chlorella), while Scenedesmus had the lowest output (3.75 mg L-1 h-1, 11.73 mg L-1 min-1). All species had good pH adaptation, ranging from 6.5 to 8.5. The fuel properties of the microalgal biodiesel, in comparison with Jatropha, rubber, Hura and Natal mahogany, were within ASTM specifications, with AGO used as the control. Fuel cultivation from microalgae is feasible and could revolutionise the biodiesel industry.

Keywords: biodiesel, fuel properties, microalgae, second generation, seed oils, feedstock, photo-bioreactor, open pond

Procedia PDF Downloads 354
710 In vitro Evaluation of Prebiotic Potential of Wheat Germ

Authors: Lígia Pimentel, Miguel Pereira, Manuela Pintado

Abstract:

Wheat germ is a by-product of wheat flour refining. Despite this by-product being a source of proteins, lipids, fibres and complex carbohydrates, and consequently a valuable ingredient to be used in the food industry, only a few applications have been studied. The main goal of this study was to assess the potential prebiotic effect of natural wheat germ. The prebiotic potential was evaluated by in vitro assays with individual microbial strains (Lactobacillus paracasei L26 and Lactobacillus casei L431). A simulated model of gastrointestinal digestion was also used, including the conditions present in the mouth (artificial saliva), oesophagus-stomach (artificial gastric juice), duodenum (artificial intestinal juice) and ileum. The effect of natural wheat germ and wheat germ after digestion on the growth of lactic acid bacteria was studied by growing those microorganisms in de Man, Rogosa and Sharpe (MRS) broth (with 2% wheat germ and 1% wheat germ after digestion) and incubating at 37 ºC for 48 h with stirring. A negative control consisting of MRS broth without glucose was used, and the substrate was also compared to a commercial prebiotic, fructooligosaccharides (FOS). Samples were taken at 0, 3, 6, 9, 12, 24 and 48 h for bacterial cell counts (CFU/mL) and pH measurement. The results obtained showed that wheat germ has a stimulatory effect on the bacteria tested, presenting results similar to (or even higher than) FOS when compared to the culture medium without glucose. This was demonstrated by the viable cell counts and also by the decrease in the medium pH. Both L. paracasei L26 and L. casei L431 could use these compounds as a substitute for glucose with an enhancement of growth. In conclusion, we have shown that wheat germ stimulates the growth of probiotic lactic acid bacteria. In order to understand whether the composition of gut bacteria is altered and whether wheat germ could be used as a potential prebiotic, further studies including faecal fermentations should be carried out. Nevertheless, wheat germ seems to have potential as a valuable compound to be used in the food industry, mainly in the bakery industry.

Keywords: by-products, functional ingredients, prebiotic potential, wheat germ

Procedia PDF Downloads 477
709 Mango (Mangifera indica L.) Lyophilization Using Vacuum-Induced Freezing

Authors: Natalia A. Salazar, Erika K. Méndez, Catalina Álvarez, Carlos E. Orrego

Abstract:

Lyophilization, also called freeze-drying, is an important dehydration technique mainly used for pharmaceuticals. The food industry also uses lyophilization when it is important to retain most of the nutritional quality, taste, shape and size of dried products and to extend their shelf life. Vacuum induction during the freezing cycle (VI) has been used in order to control ice nucleation and, consequently, to reduce the time of the primary drying cycle of pharmaceuticals while preserving the quality properties of the final product. This procedure has not been applied in the freeze drying of foods. The present work aims to investigate the effect of VI on the lyophilization drying time, final moisture content, density and reconstitutional properties of mango (Mangifera indica L.) slices (MS) and mango pulp-maltodextrin dispersions (MPM) (30% concentration of total solids). Control samples were run at each freezing rate without using induced vacuum. The lyophilization endpoint was the same for all treatments (a constant difference between capacitance and Pirani vacuum gauges). From the experimental results it can be concluded that the high freezing rate (0.4 °C/min) reduced the overall process time by up to 30% compared with the process time required for the control and for VI at the lower freezing rate (0.1 °C/min), without affecting the quality characteristics of the dried product, which yields a reduction in costs and energy consumption for MS and MPM freeze drying. Controls and samples treated with VI at a freezing rate of 0.4 °C/min in MS showed similar results for the moisture and density parameters. Furthermore, results for the MPM dispersion showed favorable values when VI was applied, because a dried product with low moisture content and low density was obtained in a shorter process time compared with the control. No significant differences were found between the reconstitutional properties (rehydration for MS and solubility for MPM) of freeze-dried mango resulting from the controls and the VI treatments.

Keywords: drying time, lyophilization, mango, vacuum induced freezing

Procedia PDF Downloads 400
708 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017

Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey

Abstract:

The Sustainable Development Goals (SDGs) and Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority. However, s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes often at community- and societal-levels and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets – typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing and mobile technologies mean that new sources of health determinants indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative and health system data. The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale - ideally free and publicly available, and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of difficulties in generating them. We include SDGs and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.

Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART

Procedia PDF Downloads 197
707 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption

Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský

Abstract:

Oil sludge, whose main characteristic is high acidity, is a waste product generated from the operation of refinery and petrochemical plants. A former refinery and petrochemical plant, Petrochema Dubová, is also present in Slovakia. Its activity was to process crude oil through sulfonation and adsorption technology for the production of lubricating and special oils, synthetic detergents and special white oils for cosmetic and medical purposes. Seventy years ago, in the period when this historical acid sludge burden was created, production took precedence over environmental awareness. That is the reason why, as in many countries, a historical environmental burden is present in Slovakia until now: 229,211 m3 of oil sludge in the middle of the Nízke Tatry mountain chain National Park. None of the treatment methods tried, biological or non-biological, proved suitable for processing or for recovery, for several reasons: strong aggressivity, and difficulty of handling because of its sludgy and liquid state, among others. As a potential solution, incineration was also tested, but it was not proven to be a suitable method, as the concentration of SO2 in the combustion gases was too high and it was not possible to decrease it below the acceptable value of 2000 mg.mn-3. That is the reason why the operation of the incineration plant was terminated, and the acid sludge landfills are present until nowadays. The objective of this paper is to present a new possibility of processing and valorization of acid sludgy waste. The processing of oil sludge was performed through effective separation by thermal desorption technology, through which it is possible to split the sludgy material into the matrix (soil, sediments) and organic contaminants. In order to boost the efficiency of processing acid sludge through thermal desorption, the work will present the possibility of applying an original technology, the Method of Blowing Decomposition, for recovering organic matter into technological lubricating oil.

Keywords: hazardous waste, oil sludge, remediation, thermal desorption

Procedia PDF Downloads 187
706 The Role of Social Media in the Rise of Islamic State in India: An Analytical Overview

Authors: Yasmeen Cheema, Parvinder Singh

Abstract:

The evolution of the Islamic State (acronym IS) has the ultimate goal of restoring the caliphate. The IS threat to global security is a main concern of the international community, but it has also raised a real concern for India about the regular radicalization of Indian youth with IS ideology. The incident of Arif Ejaz Majeed, an Indian, joining IS as a 'jihadist' set off a strident alarm in law enforcement agencies. On 07.03.2017, many people were injured in an Improvised Explosive Device (IED) blast on board the Bhopal-Ujjain Express. One perpetrator of this incident was killed in an encounter with police. The biggest shock, however, is that the conspiracy was pre-planned and that the assailants who carried out the blast were influenced by the ideology perpetrated by the Islamic State. This is the first time the name of IS has cropped up in a terror attack in India. It is a red indicator of the violent presence of IS in India, which is spreading through social media. IS has the capacity to influence the younger Muslim generation in India through its brutal and aggressive propaganda videos, social media apps and hatred speeches. It is a well-known fact that India is on the radar of IS, as well as on its 'Caliphate Map'. The Islamic State uses Twitter, Facebook and other social media platforms constantly. It has used enticing videos, graphics and articles on social media to try to convince people from India and globally that its jihad is worthy. According to perpetrators of IS arrested in different cases in India, most Indian youths are victims of the daydreams fondly shown by IS: the dreams that the Muslim empire as it was before 1920 can come back with all its power, and that the Caliph and his caliphate can be re-established. Indian Muslim youth get attracted to these euphemistic ideologies. The Islamic State has used social media for disseminating its poisonous ideology, for recruitment, for operational activities and for the future direction of attacks. Through social media, IS inspires its recruits and lone wolves to continue to rely on local networks to identify targets and access weaponry and explosives. Recently, a pro-IS media group on its Telegram platform showed the Taj Mahal as a target and suggested the mode of attack as a Vehicle-Borne Improvised Explosive Device (VBIED) attack. The Islamic State definitely has the potential to damage Indian national security and peace if timely steps are not taken. No doubt, IS has used social media as a critical mechanism for recruitment and for planning and executing terror attacks. This paper will therefore examine the specific characteristics of social media that have made it such a successful weapon for the Islamic State. The rise of IS in India should be viewed as a national crisis and handled at the central level with efficient use of modern technology.

Keywords: ideology, India, Islamic State, national security, recruitment, social media, terror attack

Procedia PDF Downloads 215
705 Techno-Economic Analysis of 1,3-Butadiene and ε-Caprolactam Production from C6 Sugars

Authors: Iris Vural Gursel, Jonathan Moncada, Ernst Worrell, Andrea Ramirez

Abstract:

In order to achieve the transition from a fossil to a bio-based economy, biomass needs to replace fossil resources in meeting the world's energy and chemical needs. This calls for the development of biorefinery systems allowing cost-efficient conversion of biomass to chemicals. In biorefinery systems, feedstock is converted to key intermediates called platforms, which are converted to a wide range of marketable products. The C6 sugars platform stands out due to its unique versatility as a precursor for multiple valuable products. Among the different potential routes from C6 sugars to bio-based chemicals, 1,3-butadiene and ε-caprolactam appear to be of great interest. Butadiene is an important chemical for the production of synthetic rubbers, while caprolactam is used in the production of nylon-6. In this study, the ex-ante techno-economic performance of the 1,3-butadiene and ε-caprolactam routes from C6 sugars was assessed. The aim is to provide insight from an early stage of development into the potential of these new technologies, and into the bottlenecks and key cost drivers. Two cases for each product line were analyzed to take into consideration the effect of possible changes on the overall performance of both butadiene and caprolactam production. Conceptual process designs were developed using Aspen Plus based on currently available data from laboratory experiments. Then, operating and capital costs were estimated and an economic assessment was carried out using Net Present Value (NPV) as the indicator. Finally, sensitivity analyses on processing capacity and prices were done to take into account possible variations. Results indicate that both processes perform similarly from an energy intensity point of view, ranging between 34 and 50 MJ per kg of main product. However, in terms of processing yield (kg of product per kg of C6 sugar), caprolactam shows a higher yield, by a factor of 1.6-3.6, compared to butadiene. For butadiene production, with the economic parameters used in this study, a negative NPV (-642 and -647 M€) was attained for both cases studied, indicating economic infeasibility. For caprolactam production, one of the cases also showed economic infeasibility (-229 M€), but the case with the higher caprolactam yield resulted in a positive NPV (67 M€). Sensitivity analysis indicated that the economic performance of caprolactam production can be improved with an increase in capacity (higher C6 sugars intake), reflecting the benefits of economies of scale. Furthermore, humins valorization for heat and power production was considered and found to have a positive effect. Butadiene production was found to be sensitive to the price of the feedstock C6 sugars and of the product butadiene. However, even at 100% variation of the two parameters, butadiene production remained economically infeasible. Overall, the caprolactam production line shows higher economic potential in comparison to that of butadiene. The results are useful in guiding experimental research and providing direction for further development of bio-based chemicals.
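
For readers unfamiliar with the NPV indicator used above, the sketch below shows the textbook definition (discounted annual cash flows minus capital investment) together with a simple price-sensitivity sweep. All figures are placeholders, not the study's cost data.

```python
# Minimal sketch (textbook NPV definition, placeholder numbers): NPV of a plant from capital
# cost and discounted annual cash flows, with a simple product-price sensitivity sweep.

def npv(capex: float, annual_cash_flows: list[float], discount_rate: float) -> float:
    return -capex + sum(cf / (1.0 + discount_rate) ** t
                        for t, cf in enumerate(annual_cash_flows, start=1))

capex = 400e6              # hypothetical capital investment (EUR)
base_cash_flow = 35e6      # hypothetical yearly operating margin (EUR)
lifetime_years = 20
rate = 0.10

for price_change in (-0.5, 0.0, 0.5):   # +/- 50 % variation in product price
    cash_flows = [base_cash_flow * (1.0 + price_change)] * lifetime_years
    print(f"price {price_change:+.0%}: NPV = {npv(capex, cash_flows, rate) / 1e6:,.0f} MEUR")
```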

Keywords: bio-based chemicals, biorefinery, C6 sugars, economic analysis, process modelling

Procedia PDF Downloads 141
704 Sustainable Development of Adsorption Solar Cooling Machine

Authors: N. Allouache, W. Elgahri, A. Gahfif, M. Belmedani

Abstract:

Solar radiation is by far the world's largest and most abundant, clean and permanent energy source. The amount of solar radiation intercepted by the Earth is much higher than annual global energy use; the energy available from the sun was about 5,200 times the world's global energy need in 2006. In recent years, many promising technologies have been developed to harness the sun's energy. These technologies help in environmental protection, economizing energy, and sustainable development, which are the major issues of the world in the 21st century. One of these important technologies is solar cooling systems, which make use of either absorption or adsorption technologies. Solar adsorption cooling systems are a good alternative since they operate with environmentally benign refrigerants that are natural and free from CFCs, and therefore they have a zero ozone depletion potential (ODP). A numerical analysis of the thermal and solar performances of an adsorption solar refrigerating system using different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol and activated carbon BPL/ammonia, is undertaken in this study. The modeling of the adsorption cooling machine requires the resolution of the equations describing the energy and mass transfer in the tubular adsorber, which is the most important component of the machine. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium is contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analysed and discussed. The performance of the system depends on the incident global irradiance during the whole day and on the weather conditions: the condenser temperature and the evaporator temperature. The AC35/methanol pair is the better pair compared to BPL/ammonia in terms of system performances.
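
The Dubinin-Astakhov equilibrium model mentioned above relates the adsorbed quantity to the adsorber temperature and pressure. The sketch below uses the standard form of that equation with placeholder coefficients (w0, E, n and the operating state are illustrative assumptions, not the study's fitted values).

```python
# Minimal sketch of the Dubinin-Astakhov adsorption equilibrium, with placeholder coefficients
# for an activated carbon/methanol pair: w = w0 * exp(-(A/E)^n), A = R*T*ln(Psat/P).
import math

R = 8.314  # J mol-1 K-1

def dubinin_astakhov(T: float, P: float, Psat: float,
                     w0: float = 0.33, E: float = 6000.0, n: float = 1.5) -> float:
    """Adsorbed quantity (kg adsorbate per kg adsorbent) at temperature T [K] and pressure P [Pa]."""
    A = R * T * math.log(Psat / P)          # molar adsorption potential, J/mol
    return w0 * math.exp(-((A / E) ** n))

# Hypothetical adsorber state: 350 K, with evaporator pressure P and saturation pressure Psat in Pa
print(dubinin_astakhov(T=350.0, P=8.0e3, Psat=40.0e3))
```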

Keywords: activated carbon-methanol pair, activated carbon-ammonia pair, adsorption, performance coefficients, numerical analysis, solar cooling system

Procedia PDF Downloads 60
703 Neuropsychological Deficits in Drug-Resistant Epilepsy

Authors: Timea Harmath-Tánczos

Abstract:

Drug-resistant epilepsy (DRE) is defined as the persistence of seizures despite at least two syndrome-adapted antiseizure drugs (ASDs) used at efficacious daily doses. About a third of patients with epilepsy suffer from drug resistance. Cognitive assessment has a crucial role in the diagnosis and clinical management of epilepsy. Previous studies have addressed the clinical targets and indications for measuring neuropsychological functions; to the best of our knowledge, no studies have examined this in a Hungarian therapy-resistant population. To fill this gap, we investigated the Hungarian diagnostic protocol in patients between 18 and 65 years of age. This study aimed to describe and analyze neuropsychological functions in patients with drug-resistant epilepsy and to identify factors associated with neuropsychological deficits. We performed a prospective case-control study comparing neuropsychological performance in 50 adult patients and 50 healthy individuals between March 2023 and July 2023. Neuropsychological functions were examined in both patients and controls using a full set of specific tests (general performance level, motor functions, attention, executive functions, verbal and visual memory, language, and visual-spatial functions). Potential risk factors for neuropsychological deficit were assessed in the patient group using a multivariate analysis. The two groups did not differ in age, sex, dominant hand or level of education. Compared with the control group, patients with drug-resistant epilepsy showed worse performance on motor functions, visuospatial memory, sustained attention, inhibition and verbal memory. Neuropsychological deficits could therefore be systematically detected in patients with drug-resistant epilepsy in order to provide neuropsychological therapy and improve quality of life. The analysis of the classical and complex indices of the specific neuropsychological tasks presented here can help in the investigation of normal and disrupted memory and executive functions in DRE.

Keywords: drug-resistant epilepsy, Hungarian diagnostic protocol, memory, executive functions, cognitive neuropsychology

Procedia PDF Downloads 60
702 Computation and Validation of the Stress Distribution around a Circular Hole in a Slab Undergoing Plastic Deformation

Authors: Sherif D. El Wakil, John Rice

Abstract:

The aim of the current work was to employ the finite element method to model a slab, with a small hole across its width, undergoing plastic plane strain deformation. The computational model had, however, to be validated by comparing its results with those obtained experimentally. Since they were in good agreement, the finite element method can therefore be considered a reliable tool that can help gain better understanding of the mechanism of ductile failure in structural members having stress raisers. The finite element software used was ANSYS, and the PLANE183 element was utilized. It is a higher order 2-D, 8-node or 6-node element with quadratic displacement behavior. A bilinear stress-strain relationship was used to define the material properties, with constants similar to those of the material used in the experimental study. The model was run for several tensile loads in order to observe the progression of the plastic deformation region, and the stress concentration factor was determined in each case. The experimental study involved employing the visioplasticity technique, where a circular mesh (each circle was 0.5 mm in diameter, with 0.05 mm line thickness) was initially printed on the side of an aluminum slab having a small hole across its width. Tensile loading was then applied to produce a small increment of plastic deformation. Circles in the plastic region became ellipses, where the directions of the principal strains and stresses coincided with the major and minor axes of the ellipses. Next, we were able to determine the directions of the maximum and minimum shear stresses at the center of each ellipse, and the slip-line field was then constructed. We were then able to determine the stress at any point in the plastic deformation zone, and hence the stress concentration factor. The experimental results were found to be in good agreement with the analytical ones.
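
As an elastic baseline for the stress concentration discussed above, the classical Kirsch solution for a circular hole in an infinite plate under remote uniaxial tension gives a stress concentration factor of 3 at the hole edge. The sketch below is only this elastic reference case with arbitrary input values, not the paper's elastic-plastic finite element or visioplasticity analysis.

```python
# Elastic baseline only (Kirsch solution): tangential stress around a circular hole of radius a
# in an infinite plate under remote tension S, for comparison with elastic-plastic FE results.
import math

def kirsch_hoop_stress(S: float, a: float, r: float, theta: float) -> float:
    """sigma_theta_theta at polar position (r, theta); theta measured from the loading axis."""
    return (S / 2.0) * (1.0 + (a / r) ** 2) \
         - (S / 2.0) * (1.0 + 3.0 * (a / r) ** 4) * math.cos(2.0 * theta)

S, a = 100.0, 1.0                                   # arbitrary remote stress (MPa) and hole radius
k_t = kirsch_hoop_stress(S, a, r=a, theta=math.pi / 2) / S
print(k_t)                                          # -> 3.0, the elastic stress concentration factor
```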

Keywords: finite element method to model a slab, slab undergoing plastic deformation, stress distribution around a circular hole, visioplasticity

Procedia PDF Downloads 311
701 Control of Oil Content of Fried Zucchini Slices by Partial Predrying and Process Optimization

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

The main concern about deep-fat-fried food materials is their high final oil content absorbed during the frying process and/or after the cooling period, since a diet including a high content of oil is considered unhealthy by consumers. Different methods have been evaluated to decrease the oil content of fried foodstuffs. One promising method is partial drying of the food material before frying. The present study aimed to control and decrease the final oil content of zucchini slices by means of partial drying and to optimize the process conditions. Conventional oven drying was used to decrease the moisture content of zucchini slices to a certain extent. Process performance in terms of oil uptake was evaluated by comparing the oil content of predried and then fried zucchini slices with that determined for directly fried ones. For the predrying and frying processes, the controlled variables were oven temperature and weight loss, and frying oil temperature and time, respectively. Zucchini slices were also directly fried for sensory evaluations revealing the preferred properties of the final product in terms of surface color, moisture content, texture and taste. The properties of the directly fried zucchini slices taking the highest score at the end of the sensory evaluation were determined and used as targets in the optimization procedure. Response surface methodology was used for process optimization. The properties determined after sensory evaluation were selected as targets, while oil content was to be minimized. Results indicated that the final oil content of zucchini slices could be reduced from 58% to 46% by controlling the conditions of the predrying and frying processes. As a result, it is suggested that predrying could be one choice for reducing the oil content of fried zucchini slices for a healthier diet. This project (113R015) has been supported by TUBITAK.
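
Response surface methodology, as used above, fits a second-order polynomial to the measured responses over the controlled variables and then locates the optimum on the fitted surface. The sketch below is a generic illustration of that idea with made-up design points and responses, not the study's experimental data or its statistical software.

```python
# Generic RSM-style sketch (made-up data): fit a quadratic response surface of oil content over
# predrying weight loss and frying time, then report the predicted minimum-oil condition.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# hypothetical design points: [weight loss %, frying time s] -> oil content %
X = np.array([[10, 120], [10, 180], [20, 120], [20, 180], [30, 120], [30, 180], [20, 150]])
y = np.array([57.0, 55.0, 52.0, 49.5, 50.0, 47.0, 50.5])

model = LinearRegression().fit(PolynomialFeatures(degree=2).fit_transform(X), y)

# evaluate the fitted surface on a grid of candidate conditions
grid = np.array([[wl, t] for wl in range(10, 31) for t in range(120, 181, 5)])
pred = model.predict(PolynomialFeatures(degree=2).fit_transform(grid))
best = grid[pred.argmin()]
print(f"predicted minimum oil content {pred.min():.1f}% "
      f"at weight loss {best[0]}%, frying time {best[1]} s")
```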

Keywords: health process, optimization, response surface methodology, oil uptake, conventional oven

Procedia PDF Downloads 360
700 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques

Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola

Abstract:

Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for navigation safety adaptation due to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help in quantifying the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by benefiting from the spectral response contrast of intertidal classes in visible, near- and mid-infrared bands. Tidal areas are highly reflective in visible bands mainly because of the presence of fine sand deposits. However, getting cloud-free optical data that coincide with low tides in intertidal zones in northern regions is very difficult. Alternatively, the all-weather capability and daylight independence of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with a high-frequency revisit over intertidal zones. Multi-polarization SAR parameters have been used successfully in mapping intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimeters, which can be detected and quantified with differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both the coherence and the backscatter. For instance, an increase in backscatter intensity associated with low coherence is an indicator of abrupt surface changes. In this research, we present preliminary results obtained from our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq on the south-west shore of Ungava Bay, Quebec. Using the repeat pass cycle of Radarsat-2, multiple seasonal fine quad (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low tide conditions were selected and used to build an interferometric stack of data. The observed displacements along the line of sight generated using HH and VV polarization are compared with the changes noticed using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. Results show the consistency of both approaches in their ability to monitor the changes in intertidal zones.
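
The coherence referred to above is conventionally estimated with a windowed normalized cross-correlation of two co-registered complex SAR images. The sketch below shows that standard estimator on synthetic data; it is illustrative only and not the processing chain used for the Radarsat-2 stack.

```python
# Standard windowed coherence estimator between two co-registered complex SAR images
# (illustrative, synthetic data): |E[s1 s2*]| / sqrt(E[|s1|^2] E[|s2|^2]) over win x win windows.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1: np.ndarray, slc2: np.ndarray, win: int = 5) -> np.ndarray:
    cross = slc1 * np.conj(slc2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win) * uniform_filter(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(0)
s1 = rng.normal(size=(100, 100)) + 1j * rng.normal(size=(100, 100))
s2 = 0.8 * s1 + 0.2 * (rng.normal(size=(100, 100)) + 1j * rng.normal(size=(100, 100)))  # correlated pair
print(coherence(s1, s2).mean())   # high mean coherence indicates little surface change
```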

Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2

Procedia PDF Downloads 130
699 Verification of a Simple Model for Rolling Isolation System Response

Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly

Abstract:

Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating the component's acceleration, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange's equations (LE) and experimentally validated. A new mathematical model was developed using Gauss's Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss's Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods in order to avoid the possibility that numerical integration methods cause the system to go unstable. The GPLC model allows the incorporation of more physical aspects related to the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform. The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
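
For context, Gauss's Principle of Least Constraint states that the constrained accelerations minimize (a - M^-1 F)^T M (a - M^-1 F) subject to an acceleration-level constraint A(q, q') a = b(q, q'), which covers non-holonomic rolling constraints. The sketch below implements the generic closed-form solution of that minimization for an arbitrary hypothetical system; it is not the authors' RIS model.

```python
# Generic sketch of Gauss's Principle of Least Constraint (not the authors' RIS equations):
# the constrained acceleration minimizes (a - a_free)^T M (a - a_free) subject to A a = b,
# whose closed form is a = a_free + M^-1 A^T (A M^-1 A^T)^-1 (b - A a_free).
import numpy as np

def gplc_acceleration(M: np.ndarray, F: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    a_free = np.linalg.solve(M, F)                       # unconstrained acceleration M^-1 F
    Minv_AT = np.linalg.solve(M, A.T)
    lam = np.linalg.solve(A @ Minv_AT, b - A @ a_free)   # constraint multipliers
    return a_free + Minv_AT @ lam

# Hypothetical 2-DOF system with a single rolling-type acceleration constraint a1 = 2*a2
M = np.diag([3.0, 1.0])
F = np.array([0.0, -9.81])
A = np.array([[1.0, -2.0]])
b = np.array([0.0])
print(gplc_acceleration(M, F, A, b))   # satisfies A @ a = b
```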

Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system

Procedia PDF Downloads 242
698 Analysis of Extreme Case of Urban Heat Island Effect and Correlation with Global Warming

Authors: Kartikey Gupta

Abstract:

Global warming and environmental degradation are at their peak today, with the years after 2000 A.D. accounting for 15 of the hottest years on record in terms of average temperatures. In India, much of the standard temperature-measuring equipment is located in 'developed' urban areas, hence showing an incomplete picture of the climate across many rural areas, which comprise most of the landmass. This study showcases data collected by the author over 3 years at Vatsalya's Children's Village, on the outskirts of Jaipur, Rajasthan, India, in the midst of semi-arid topography, where consistently large temperature differences of up to 15.8 degrees Celsius from local Jaipur weather only 30 kilometers away are stunning yet scary at the same time, encouraging analysis of where the natural climatic pattern is heading due to rapid, unrestricted urbanization. The record-breaking data presented in this project enforce the need to discuss causes and recovery techniques. This research further explores how, and to what extent, we are causing phenomenal disturbances in the natural meteorological pattern through urban growth. Detailed observations using a standardized ambient weather station at the study site, compared with the closest airport weather data to evaluate the patterns and differences, show striking differences in temperatures, wind patterns and even rainfall quantity, especially during high-pressure-zone days. Winter-time lows dip to 8 degrees below freezing with heavy frost and ice, while only 30 km away minimum figures barely touch single-digit temperatures. Human activity is having an unprecedented effect on climatic patterns in record-breaking trends, which is a warning of what may follow in the next 15-25 years for the next generation living in cities, and a serious exploration of possible solutions is a must.

Keywords: climate change, meteorology, urban heat island, urbanization

Procedia PDF Downloads 74
697 Lactate in Critically Ill Patients: An Outcome Marker with Time

Authors: Sherif Sabri, Suzy Fawzi, Sanaa Abdelshafy, Ayman Nagah

Abstract:

Introduction: Static derangements in lactate homeostasis during ICU stay have become established as a clinically useful marker of increased risk of hospital and ICU mortality. Lactate indices, or kinetic alterations of anaerobic metabolism, make it a potential parameter to evaluate disease severity and intervention adequacy. It is an inexpensive and simple clinical parameter that can be obtained by minimally invasive means. Aim of work: Comparing the predictive value of dynamic indices of hyperlactatemia in the first twenty-four hours of intensive care unit (ICU) admission with the static values that are more commonly used. Patients and Methods: This study included 40 critically ill patients above 18 years old of both sexes with hyperlactatemia (≥ 2 mmol/L). Patients were divided into a septic group (n=20) and a low oxygen transport group (n=20), which includes all causes of low O2 transport. Six lactate indices specifically relating to the first 24 hours of ICU admission were considered: three static indices and three dynamic indices. Results: There were no statistically significant differences between the two groups regarding age, most of the laboratory results including ABG, and the need for mechanical ventilation. Admission lactate was significantly higher in the low-oxygen transport group than in the septic group (37.5±11.4 versus 30.6±7.8, P = 0.034). Maximum lactate was also significantly higher in the low-oxygen transport group than in the septic group (P = 0.044). On the other hand, absolute lactate (mg) was higher in the septic group (P < 0.001). Percentage change of lactate was higher in the septic group (47.8±11.3) than in the low-oxygen transport group (26.1±12.6), with a highly significant P-value (< 0.001). Lastly, time-weighted lactate was higher in the low-oxygen transport group (1.72±0.81) than in the septic group (1.05±0.8), with a significant P-value (0.012). There were statistically significant differences in the lactate indices between survivors and non-survivors, whether in the septic or the low-oxygen transport group. Conclusion: In critically ill patients, time-weighted lactate and percent change in lactate over the first 24 hours can be independent predictive factors for ICU mortality. Also, a rising, as compared with a falling, blood lactate concentration over the first 24 hours can be associated with a significant increase in the risk of mortality.
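
The two dynamic indices highlighted in the conclusion are computable from serial lactate measurements. The sketch below assumes the commonly used definitions (percent change from admission, and a time-weighted mean via the area under the lactate-time curve) and uses hypothetical values, not patient data from the study.

```python
# Sketch of the dynamic 24-hour lactate indices (commonly used definitions assumed; illustrative values):
# percent change from admission, and a time-weighted mean via trapezoidal area under the curve.
import numpy as np

def lactate_indices(times_h: list[float], lactate_mmol: list[float]) -> dict:
    t = np.asarray(times_h, dtype=float)
    lac = np.asarray(lactate_mmol, dtype=float)
    percent_change = 100.0 * (lac[0] - lac[-1]) / lac[0]              # positive = falling lactate
    auc = np.sum((lac[1:] + lac[:-1]) / 2.0 * np.diff(t))             # trapezoidal area, mmol/L * h
    time_weighted = auc / (t[-1] - t[0])                              # mmol/L averaged over the interval
    return {"admission": lac[0], "maximum": lac.max(),
            "percent_change": percent_change, "time_weighted": time_weighted}

# Hypothetical patient: serial lactate (mmol/L) at 0, 6, 12 and 24 h after ICU admission
print(lactate_indices([0, 6, 12, 24], [4.2, 3.6, 2.9, 2.1]))
```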

Keywords: critically ill patients, lactate indices, mortality in intensive care, anaerobic metabolism

Procedia PDF Downloads 232
696 Screening and Optimization of Conditions for Pectinase Production by Aspergillus Flavus

Authors: Rumaisa Shahid, Saad Aziz Durrani, Shameel Pervez, Ibatsam Khokhar

Abstract:

Food waste is a prevalent issue in Pakistan, with over 40 percent of food discarded annually. Despite their decay, rotting fruits retain residual nutritional value consumed by microorganisms, notably fungi and bacteria. Fungi, preferred for their extracellular enzyme release, are gaining prominence, particularly for pectinase production. This enzyme offers several advantages, including clarifying juices by breaking down pectic compounds. In this study, three Aspergillus flavus isolates derived from decomposed fruits and manure were selected for pectinase production. The primary aim was to isolate fungi from diverse waste sources, identify the isolates and assess their capacity for pectinase production. Identification was done through morphological characteristics with the help of light microscopy and Scanning Electron Microscopy (SEM). Pectinolytic potential was screened using pectin minimal salt agar (PMSA) medium, comparing clear zone diameters among isolates. Substrate (lemon and orange peel powder) concentrations, pH, temperature, and incubation period were optimized to enhance pectinase yield. Spectrophotometry enabled quantitative analysis. The temperature was set at room temperature (28 ºC). The optimal conditions for Aspergillus flavus strain AF1 (isolated from mango) included a pH of 5, an incubation period of 120 hours, and substrate concentrations of 3.3% for orange peels and 6.6% for lemon peels. For AF2 and AF3 (both isolated from soil), the ideal pH and incubation period were the same as for AF1, i.e., pH 5 and 120 hours. However, their optimized substrate concentrations varied, with AF2 showing maximum activity at 3.3% for orange peels and 6.6% for lemon peels, while AF3 exhibited its peak activity at 6.6% for orange peels and 8.3% for lemon peels. Among the isolates, AF1 demonstrated superior performance under these conditions.

Keywords: pectinase, lemon peel, orange peel, aspergillus flavus

Procedia PDF Downloads 59
695 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor and performing surveys may disrupt normal operations. Given the variability of manual surveys, this method has shown inconsistencies in distress classification and measurement. This can potentially impact the planning for pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. A substantial effort has been devoted for the past 20 years to reduce the human intervention and the error associated with it by moving toward automated distress collection methods. The automated methods refer to the systems that identify, classify and quantify pavement distresses through processes that require no or very minimal human intervention. This principally involves the use of a digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for measurement and classification of pavement cracks captured using digital images is a challenge to developing a reliable automated system for distress assessment. Variations in types and severity of distresses, different pavement surface textures and colors and presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves collection, interpretation, and processing of the surface images to identify the type, quantity and severity of the surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and to predict pavement performance. This can be used by airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
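
One common way to express the crack density discussed above is total crack length per unit pavement area, or percent cracked area, derived from the binary crack map that an automated survey system outputs. The sketch below assumes that representation (a boolean crack mask plus a known pixel ground resolution); the mask and resolution here are hypothetical.

```python
# Illustrative crack-density calculation from an automated survey output (assumed representation:
# a binary crack mask per pavement section and the pixel ground resolution).
import numpy as np

def crack_density(crack_mask: np.ndarray, pixel_size_m: float) -> dict:
    """crack_mask: boolean array, True where a crack pixel was detected (e.g. a 1-pixel-wide skeleton)."""
    section_area_m2 = crack_mask.size * pixel_size_m ** 2
    crack_length_m = crack_mask.sum() * pixel_size_m          # skeleton pixels approximate crack length
    return {
        "crack_length_m_per_m2": crack_length_m / section_area_m2,
        "percent_cracked_area": 100.0 * crack_mask.mean(),
    }

# Hypothetical 2 m x 2 m section at 2 mm resolution with a single transverse crack
mask = np.zeros((1000, 1000), dtype=bool)
mask[500, :] = True
print(crack_density(mask, pixel_size_m=0.002))
```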

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 179
694 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV. They have also compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in or alternatives to the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters. After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time-domain parameters (SDNN, pNN50 and RMSSD) and frequency-domain parameters (very low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power (TP)). In addition, Poincaré plots were plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
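
The time-domain parameters and Poincaré descriptors named above have standard definitions that are computed identically whether the input series is ECG RR intervals or PPG pulse-to-pulse intervals. The sketch below uses those standard formulas on a hypothetical interval series; it is not the authors' processing pipeline.

```python
# Sketch of the standard time-domain and Poincare HRV parameters, computed the same way
# from ECG RR intervals or PPG pulse-to-pulse intervals (interval series in milliseconds).
import numpy as np

def hrv_time_domain(intervals_ms: np.ndarray) -> dict:
    diffs = np.diff(intervals_ms)
    sdnn = np.std(intervals_ms, ddof=1)              # overall variability
    rmssd = np.sqrt(np.mean(diffs ** 2))             # short-term (beat-to-beat) variability
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)    # % of successive differences > 50 ms
    sd1 = np.sqrt(0.5) * np.std(diffs, ddof=1)       # Poincare minor axis
    sd2 = np.sqrt(2.0 * sdnn ** 2 - sd1 ** 2)        # Poincare major axis
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50, "SD1/SD2": sd1 / sd2}

rr = np.array([812, 790, 805, 830, 795, 810, 842, 801], dtype=float)  # hypothetical RR series (ms)
print(hrv_time_domain(rr))
```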

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 312
693 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education

Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue

Abstract:

In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data was gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters for the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
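
A majority-vote combination of the base learners named above can be expressed with scikit-learn's VotingClassifier, as in the sketch below. The data here are synthetic and the base learners and hyperparameters are placeholders, not the institution's dataset or the paper's tuned configuration.

```python
# Sketch of a majority-vote ensemble over the base learners named above, using scikit-learn
# (synthetic data, placeholder hyperparameters; not the study's student records or tuned models).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4413, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=6, random_state=0)),
        ("svm", SVC(kernel="rbf", C=1.0)),
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="hard",   # hard voting = majority vote over predicted class labels
)
ensemble.fit(X_tr, y_tr)
print("majority-vote accuracy:", ensemble.score(X_te, y_te))
```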

Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education

Procedia PDF Downloads 95
692 On-Farm Research on Organic Fruits Production in the Eastern Thailand

Authors: Sali Chinsathit, Haruthai Kaenla

Abstract:

Organic agriculture has been a major policy theme for agricultural development in Thailand since October 2005. Organic farming is enlisted as an important national agenda to promote safe food and national exports, and many government authorities have initiated projects and activities centered on organic farming promotion. Currently, Thailand has a market share of about 32 million US$ a year from exporting organic products of rice, vegetables, tea, fruits, and a few medicinal herbs. There is high potential in organic crop production, as the tropical environment promotes crop growth and there are leading farmers experienced in organic farming. However, the organic sector is relatively small (0.2%) compared with the conventional agricultural area, since many factors affect farmers' adoption of and success in organic farming. The objective of this project was to obtain organic production technology for at least three organic crops. The treatments and methods complied with the Thai Organic Standard and were mainly concerned with increasing plant biodiversity and improving soil by using organic fertilizer and bio-extracts from fish, egg, plants, and fruits. Biological control, plant extracts, and cultural practices were used to control insect pests and diseases of three crops: mangosteen (Garcinia mangostana L.), longkong (Aglaia dookoo Griff.), and banana (Musa (AA group)). The experiments were carried out at research centers of the Department of Agriculture and on farmers' farms in Rayong and Chanthaburi provinces from 2009 to 2013. We found that, at both locations, increasing plant biodiversity by intercropping mangosteen or longkong with banana and improving the soil with composts and fish bio-extract increased yield and farmers' income by 6,835 US$/ha/year. Farmers gained knowledge from these technologies to produce organic crops. The organic products were sold both domestically and internationally. The organic production technologies were also environmentally friendly and can serve as an alternative for farmers in Thailand.

Keywords: banana, longkong, mangosteen, organic farming

Procedia PDF Downloads 349
691 The Evaluation of Superiority of Foot Local Anesthesia Method in Dairy Cows

Authors: Samaneh Yavari, Christiane Pferrer, Elisabeth Engelke, Alexander Starke, Juergen Rehage

Abstract:

Background: Bovine limb interventions, especially claw surgeries, require selection of the most appropriate local anesthesia technique for any superficial or deep intervention of the limbs. Currently, two local anesthesia methods are routinely applied: Intravenous Regional Anesthesia (IVRA) and nerve blocks. However, there is a noticeable lack of studies investigating the quality and duration as well as the onset of full (complete) local anesthesia. Therefore, the aim of our study was to compare the onset and quality of IVRA and our modified nerve block anesthesia (NBA) in the hind limb of dairy cows. In this abstract, only the onset of full local anesthesia is considered. Materials and Methods: Six healthy, non-pregnant, non-lactating Holstein Friesian cows were used in a cross-over study design. The cows were divided into two groups to receive IVRA and our modified four-point NBA. For IVRA, 20 ml of procaine without epinephrine was injected into the vena digitalis dorsalis communis III, and for our modified four-point NBA, 10-15 ml of procaine without epinephrine was injected perineurally to the superficial and deep peroneal nerves as well as the lateral and medial branches of the metatarsal nerves. For pain stimulation, a Grass S48 electrical stimulator was applied. Results: The electrical stimulation results revealed a significantly faster onset of full local anesthesia (p < 0.05) with our modified NBA, approximately 10 minutes earlier than with IVRA. Conclusion and discussion: Despite available references reporting a faster onset of foot local anesthesia with IVRA, our study demonstrated that our modified four-point NBA can not only serve as a standard foot local anesthesia method for desensitizing the hind limb of dairy cows, but its selection can also lead to a faster onset of complete desensitization of the distal hind limb, which is valuable in bovine limb interventions performed under time constraints.

Keywords: IVRA, four point NBA, dairy cow, hind limb, full onset

Procedia PDF Downloads 141
690 Patients' Out-Of-Pocket Expenses-Effectiveness Analysis of Presurgical Teledermatology

Authors: Felipa De Mello-Sampayo

Abstract:

Background: The aim of this study is to undertake, from a patient perspective, an economic analysis of presurgical teledermatology, comparing it with a conventional referral system. Store-and-forward teledermatology allows surgical planning, saving both time and the number of visits involving travel, thereby reducing patients' out-of-pocket expenses, i.e., the costs patients incur when traveling to and from health providers for treatment, visit fees, and the opportunity cost of time spent in visits. Method: The patients' out-of-pocket expenses-effectiveness of presurgical teledermatology was analyzed in the setting of a public hospital over two years. The mean delay to surgery was used to measure effectiveness. The teledermatology network covering the area served by the Hospital Garcia da Horta (HGO), Portugal, linked the primary care centers of 24 health districts with the hospital's dermatology department. The patients' opportunity cost of visits, travel costs, and visit fees for each presurgical modality (teledermatology and conventional referral), the cost ratio between the most and least expensive alternatives, and the incremental cost-effectiveness ratio were calculated from the initial primary care visit until surgical intervention. Two groups of patients, those with squamous cell carcinoma and those with basal cell carcinoma, were distinguished in order to compare effectiveness according to the dermatoses. Results: From a patient perspective, the conventional system was 2.15 times more expensive than presurgical teledermatology. Teledermatology had an incremental out-of-pocket expenses-effectiveness ratio of €1.22 per patient per day of delay avoided. This saving was greater in patients with squamous cell carcinoma than in patients with basal cell carcinoma. Conclusion: From a patient economic perspective, teledermatology used for presurgical planning and preparation is the dominant strategy in terms of out-of-pocket expenses-effectiveness compared with the conventional referral system, especially for patients with severe dermatoses.
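
The form of the calculation described above, a cost ratio plus an incremental ratio of the patient cost difference to the days of surgical delay avoided, can be written in a few lines; the figures below are hypothetical placeholders, not the study's data.

```python
# Hypothetical per-patient figures (EUR and days); only the form of the ratios follows the abstract.
cost_conventional = 120.0    # travel + visit fees + opportunity cost of time, conventional referral
cost_teledermatology = 56.0  # same cost components under store-and-forward teledermatology
delay_conventional = 90.0    # mean days from first primary-care visit to surgery
delay_teledermatology = 38.0

cost_ratio = cost_conventional / cost_teledermatology
saving_per_day_avoided = (cost_conventional - cost_teledermatology) / (
    delay_conventional - delay_teledermatology
)
print(f"cost ratio = {cost_ratio:.2f}; "
      f"out-of-pocket saving per day of delay avoided = {saving_per_day_avoided:.2f} EUR")
```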

Keywords: economic analysis, out-of-pocket expenses, opportunity cost, teledermatology, waiting time

Procedia PDF Downloads 128
689 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation

Authors: Min L. Stewart, Patrick Johnston

Abstract:

Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. Within this framework, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors, etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes in expected visual sequences in high- relative to low-AT individuals, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT individuals. Preliminary frequency-analysis data comparing high- and low-AT groups show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT participants for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
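
For readers unfamiliar with fast periodic visual stimulation, the core frequency-domain measurement can be sketched as follows: the amplitude of the EEG spectrum at the periodic-change frequency is expressed relative to neighbouring noise bins. The sampling rate, tagging frequency, and signal below are simulated assumptions for illustration, not the authors' recording pipeline.

```python
import numpy as np

fs = 512.0        # sampling rate (Hz), assumed
f_oddball = 1.2   # frequency of the periodic identity/category change (Hz), assumed
duration = 60.0   # seconds of stimulation, assumed

t = np.arange(0, duration, 1.0 / fs)
# Simulated occipitotemporal channel: a small tagged response buried in noise
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * f_oddball * t) + rng.normal(0.0, 2.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

target = np.argmin(np.abs(freqs - f_oddball))
# Signal-to-noise ratio: amplitude at the tagged bin over the mean of surrounding bins
neighbours = np.r_[spectrum[target - 12:target - 2], spectrum[target + 3:target + 13]]
snr = spectrum[target] / neighbours.mean()
print(f"SNR at {freqs[target]:.2f} Hz: {snr:.1f}")
```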

Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding

Procedia PDF Downloads 98
688 Determinants of Maternal Near-Miss among Women in Public Hospital Maternity Wards in Northern Ethiopia: A Facility Based Case-Control Study

Authors: Dejene Ermias Mekango, Mussie Alemayehu, Gebremedhin Berhe Gebregergs, Araya Abrha Medhanye, Gelila Goba

Abstract:

Background: Maternal near miss (MNM) can be used as a proxy indicator of the maternal mortality ratio. There is a huge gap in lifetime risk between Sub-Saharan Africa and developed countries. In Ethiopia, a significant number of women die each year from complications during pregnancy, childbirth, and the post-partum period. Moreover, few studies have been performed on MNM, and little is known regarding its determinant factors. This study aims to identify determinants of MNM among women in Tigray region, Northern Ethiopia. Methods: A case-control study was conducted in hospitals in Tigray region, Ethiopia, from January 30 to March 30, 2016. The sample included 103 cases and 205 controls recruited from women seeking obstetric care at six public hospitals. Clients with a life-threatening obstetric complication, including haemorrhage, hypertensive diseases of pregnancy, dystocia, infections, and anemia or clinical signs of severe anemia in women without haemorrhage, were taken as cases, and those with normal obstetric outcomes were considered controls. Cases were selected using proportional-to-size allocation, while systematic sampling was employed for controls. Data were analyzed using SPSS version 20.0. Binary and multiple-variable logistic regression analyses were performed, and odds ratios were calculated with 95% CIs. Results: The largest proportion of cases and controls was in the 20–29 year age group, accounting for 37.9% (39) of cases and 31.7% (65) of controls. Roughly 90% of cases and controls were married. About two-thirds of controls and 45.6% (47) of cases had a gestational age between 37 and 41 weeks. A history of chronic medical conditions was reported in 55.3% (57) of cases and 33.2% (68) of controls. Women with no formal education [AOR=3.2; 95% CI: 1.24, 8.12], being less than 16 years old at first pregnancy [AOR=2.5; 95% CI: 1.12, 5.63], induced labor [AOR=3; 95% CI: 1.44, 6.17], a history of Cesarean section (C-section) [AOR=4.6; 95% CI: 1.98, 7.61] or chronic medical disorder [AOR=3.5; 95% CI: 1.78, 6.93], and traveling more than 60 minutes before reaching the final place of care [AOR=2.8; 95% CI: 1.19, 6.35] all had higher odds of experiencing MNM. Conclusions: The Government of Ethiopia should continue its efforts to address the lack of road and health facility access as well as education, which will help reduce MNM. Work should also continue to educate women and providers about common predictors of MNM, such as a history of C-section, chronic illness, and teenage pregnancy. These efforts should be carried out at the facility, community, and individual levels. Targeted follow-up of women with a history of chronic disease and C-section could also be a practical way to reduce MNM.
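
The adjusted odds ratios reported above come from multiple-variable logistic regression; a minimal sketch of that kind of analysis, on a simulated case-control dataset with hypothetical predictor names rather than the study's data, is shown below using statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 308  # 103 cases + 205 controls, matching the study's sample size

# Simulated binary exposures (hypothetical stand-ins for the study's predictors)
df = pd.DataFrame({
    "no_formal_education": rng.integers(0, 2, n),
    "first_pregnancy_under_16": rng.integers(0, 2, n),
    "history_of_csection": rng.integers(0, 2, n),
    "chronic_medical_disorder": rng.integers(0, 2, n),
})
# Simulated outcome: 1 = maternal near-miss case, 0 = control
logit_p = -1.0 + 1.2 * df["history_of_csection"] + 1.1 * df["chronic_medical_disorder"]
df["near_miss"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(df.drop(columns="near_miss"))
model = sm.Logit(df["near_miss"], X).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```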

Keywords: maternal near miss, severe obstetric hemorrhage, hypertensive disorder, c-section, Tigray, Ethiopia

Procedia PDF Downloads 206
687 Fe3O4 Decorated ZnO Nanocomposite Particle System for Waste Water Remediation: An Absorptive-Photocatalytic Based Approach

Authors: Prateek Goyal, Archini Paruthi, Superb K. Misra

Abstract:

Contamination of water resources has been a major concern, which has drawn attention to the need to develop new material models for the treatment of effluents. Existing conventional wastewater treatment methods are sometimes ineffective and uneconomical in remediating contaminants such as heavy metal ions (mercury, arsenic, lead, cadmium, and chromium), organic matter (dyes, chlorinated solvents), and high salt concentrations, which make water unfit for consumption. We believe that a nanotechnology-based strategy, in which nanoparticles are used as a tool to remediate a class of pollutants, would prove effective owing to their high surface-area-to-volume ratio, higher selectivity, sensitivity, and affinity. In recent years, scientific advances have been made in applying photocatalytic (ZnO, TiO2, etc.) and magnetic nanomaterials to remediate contaminants (such as heavy metals and organic dyes) from water and wastewater. Our study focuses on the synthesis of ZnO, Fe3O4, and Fe3O4-coated ZnO nanoparticulate systems and on monitoring their remediation efficiency for the simultaneous removal of heavy metals and dyes. A multitude of ZnO nanostructures (spheres, rods, and flowers), synthesized via multiple routes (microwave and hydrothermal approaches), offers a wide range of light-active photocatalytic properties. The phase purity, morphology, size distribution, zeta potential, surface area, and porosity, in addition to the magnetic susceptibility of the particles, were characterized by XRD, TEM, CPS, DLS, BET, and VSM measurements, respectively. Furthermore, the introduction of crystalline defects into ZnO nanostructures can assist in light activation for improved dye degradation. The band gap of a material and its absorbance are concrete indicators of its photocatalytic activity. Due to their high surface area, high porosity, affinity towards metal ions, and availability of active surface sites, iron oxide nanoparticles show promising application in the adsorption of heavy metal ions. An additional advantage of a magnetic nanocomposite is that it offers magnetic-field-responsive separation and recovery of the catalyst. Therefore, we believe that the ZnO-linked Fe3O4 nanosystem would be efficient and reusable. Combining improved photocatalytic efficiency with adsorption for environmental remediation has been a long-standing challenge, and the nanocomposite system offers the best features that the two individual metal oxides provide for nanoremediation.
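
As an illustration of how photocatalytic dye-degradation performance of such systems is commonly quantified (a generic convention, not a method stated in the abstract), a pseudo-first-order rate constant can be fitted to dye concentration measured over irradiation time, ln(C0/C) = k·t; the concentration-versus-time values below are hypothetical.

```python
import numpy as np

# Hypothetical dye concentration (mg/L) versus irradiation time (min) under the photocatalyst
time_min = np.array([0, 15, 30, 45, 60, 90, 120], dtype=float)
conc = np.array([10.0, 8.1, 6.7, 5.4, 4.5, 3.0, 2.1])

# Pseudo-first-order model: ln(C0 / C) = k * t  ->  least-squares slope through the origin
y = np.log(conc[0] / conc)
k = np.sum(time_min * y) / np.sum(time_min ** 2)
removal = 100.0 * (1.0 - conc[-1] / conc[0])
print(f"apparent rate constant k = {k:.4f} min^-1; "
      f"dye removal after {time_min[-1]:.0f} min = {removal:.0f}%")
```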

Keywords: adsorption, nanocomposite, nanoremediation, photocatalysis

Procedia PDF Downloads 232
686 Goal-Setting in a Peer Leader HIV Prevention Intervention to Improve Preexposure Prophylaxis Access among Black Men Who Have Sex with Men

Authors: Tim J. Walsh, Lindsay E. Young, John A. Schneider

Abstract:

Background: The disproportionate rate of HIV infection among Black men who have sex with men (BMSM) in the United States suggests the importance of Preexposure Prophylaxis (PrEP) interventions for this population. As such, there is an urgent need for innovative outreach strategies that extend beyond the traditional patient-provider relationship to reach at-risk populations. Training members of the BMSM community as peer change agents (PCAs) is one such strategy. An important piece of this training is goal-setting. Goal-setting not only encourages PCAs to define the parameters of the intervention according to their lived experience but also helps them plan courses of action. Therefore, the aims of this mixed-methods study are to: (1) characterize the goals that BMSM set at the end of their PrEP training and (2) assess the relationship between goal types and PCA engagement. Methods: Between March 2016 and July 2016, preliminary data were collected from 68 BMSM, ages 18-33, in Chicago as part of an ongoing PrEP intervention. Once enrolled, PCAs participate in a half-day training in which they learn about PrEP, practice initiating conversations about PrEP, and identify strategies for supporting at-risk peers through the PrEP adoption process. Training culminates with a goal-setting exercise, whereby participants establish a goal related to their role as a PCA. Goals were coded for features that either emerged from the data themselves or existed in the extant goal-setting literature. The main outcomes were (1) the number of PrEP conversations PCAs self-reported during booster conversations two weeks following the intervention and (2) the number of peers PCAs recruited into the study who completed the PrEP workshop. Results: PCA goals (N=68) were characterized in terms of four features: specificity, target population, personalization, and whether a purpose was defined. To date, PCAs have reported a collective 52 PrEP conversations; 56%, 25%, and 6% of these occurred with friends, family, and sexual partners, respectively. PCAs with specific goals had more PrEP conversations with at-risk peers compared to those with vague goals (58% vs. 42%); PCAs with personalized goals had more PrEP conversations compared to those with de-personalized goals (60% vs. 53%); and PCAs with goals that defined a purpose had more PrEP conversations compared to those who did not define a purpose (75% vs. 52%). 100% of PCAs with goals that defined a purpose recruited peers into the study, compared to 45% of PCAs with goals that did not define a purpose. Conclusion: Our preliminary analysis demonstrates that BMSM are motivated to set and work toward a diverse set of goals to support peers in PrEP adoption. PCAs with goals involving a clearly defined purpose had more PrEP conversations and greater peer recruitment than those with goals lacking a defined purpose. This may indicate that PCAs who define their purpose at the outset of their participation will be more engaged in the study than those who do not. Goal-setting may be considered as a component of future HIV prevention interventions to advance intervention goals and as an indicator of PCAs' understanding of the intervention.

Keywords: HIV prevention, MSM, peer change agent, preexposure prophylaxis

Procedia PDF Downloads 190