Search results for: combined wavelet-artificial neural network


818 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components: minimizing the number of unassigned pairings and ensuring fairness to crew members. Fairness is measured by two quantities, the number of overnight duties and the total flying hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and flying hours are as close to the expected averages as possible. Deviations from the expected average are penalized in the objective function; since several small deviations are preferable to one large deviation, the penalty is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem in which exactly one roster is selected for each crew member such that the pairings are covered; the restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced cost and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found, or a stopping criterion is met, the procedure ends. The subproblem generates feasible rosters for each crew member: a separate acyclic weighted graph is constructed per crew member, and the subproblem is modeled as a resource-constrained shortest path problem in that graph, solved with a labeling algorithm. Since the penalty is quadratic, a method to handle the resulting non-additive shortest path problem within the labeling algorithm is proposed, and a corresponding domination condition is defined. The major contributions of our model are: 1) a method to deal with the non-additive shortest path problem; 2) an operation that allows relaxing some soft rules, which can improve the coverage rate; 3) multi-thread techniques to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
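
The quadratic fairness penalty makes the subproblem's objective non-additive, so the usual componentwise label-domination test of SPPRC labeling algorithms is no longer sound. The following is a minimal sketch of one conservative domination rule; the penalty form, the bounds, and all names are assumptions for illustration and do not reproduce the paper's actual condition:

```python
from dataclasses import dataclass

@dataclass
class Label:
    cost: float         # additive part of the reduced cost so far
    fly_hours: float    # accumulated flying time (resource)

TARGET = 80.0             # expected average fly-hours (hypothetical)
H_MIN, H_MAX = 0.0, 30.0  # bounds on fly-hours still addable on any extension

def penalty(h_total: float) -> float:
    return (h_total - TARGET) ** 2   # quadratic deviation penalty

def completion_range(lab: Label) -> tuple[float, float]:
    """Best- and worst-case final penalty over all feasible extensions."""
    lo, hi = lab.fly_hours + H_MIN, lab.fly_hours + H_MAX
    worst = max(penalty(lo), penalty(hi))  # quadratic: extremum at an endpoint
    best = 0.0 if lo <= TARGET <= hi else min(penalty(lo), penalty(hi))
    return best, worst

def dominates(a: Label, b: Label) -> bool:
    # Sound but conservative: a's worst completed cost beats b's best.
    # Plain componentwise resource dominance is NOT valid here, because a
    # quadratic penalty around a target is not monotone in the resource.
    a_best, a_worst = completion_range(a)
    b_best, b_worst = completion_range(b)
    return a.cost + a_worst <= b.cost + b_best
```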

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 146
817 Cicadas: A Clinician-assisted, Closed-loop Technology, Mobile App for Adolescents with Autism Spectrum Disorders

Authors: Bruno Biagianti, Angela Tseng, Kathy Wannaviroj, Allison Corlett, Megan DuBois, Kyu Lee, Suma Jacob

Abstract:

Background: ASD is characterized by pervasive Sensory Processing Abnormalities (SPA) and social cognitive deficits that persist throughout the course of the illness and have been linked to functional abnormalities in specific neural systems that underlie the perception, processing, and representation of sensory information. SPA and social cognitive deficits are associated with difficulties in interpersonal relationships, poor development of social skills, reduced social interactions and lower academic performance. Importantly, they can hamper the effects of established evidence-based psychological treatments, including PEERS (Program for the Education and Enrichment of Relationship Skills), a parent/caregiver-assisted, 16-week social skills intervention, which nonetheless requires a functional brain capable of assimilating and retaining information and skills. In practice, some adolescents benefit from PEERS more than others, calling for strategies to increase treatment response rates. Objective: We will present interim data on CICADAS (Care Improving Cognition for ADolescents on the Autism Spectrum), a clinician-assisted, closed-loop technology mobile application for adolescents with ASD. Via ten mobile assessments, CICADAS captures data on sensory processing abnormalities and associated cognitive deficits. These data populate a machine learning algorithm that tailors the delivery of ten neuroplasticity-based social cognitive training (NB-SCT) exercises targeting sensory processing abnormalities. Methods: In collaboration with the Autism Spectrum and Neurodevelopmental Disorders Clinic at the University of Minnesota, we conducted a fully remote, three-arm, randomized crossover trial with adolescents with ASD to document the acceptability of CICADAS and evaluate its potential as a stand-alone treatment or as a treatment enhancer of PEERS. Twenty-four adolescents with ASD (ages 11-18) were initially randomized to 16 weeks of PEERS + CICADAS (Arm A) vs. 16 weeks of PEERS + computer games (Arm B) vs. 16 weeks of CICADAS alone (Arm C). After 16 weeks, the full battery of assessments was remotely administered. Results: We evaluated the acceptability of CICADAS by examining adherence rates, engagement patterns, and exit survey data. We found that: 1) CICADAS serves as a treatment enhancer for PEERS, inducing greater improvements in sensory processing, cognition, symptom reduction, social skills and behaviors, as well as quality of life, compared to computer games; 2) the concurrent delivery of PEERS and CICADAS induces greater improvements in study outcomes compared to CICADAS alone. Conclusion: While preliminary, our results indicate that the individualized assessment and treatment approach designed in CICADAS is effective in inducing adaptive long-term learning about social-emotional events. CICADAS-induced enhancement of processing and cognition facilitates the application of PEERS skills in the environment of adolescents with ASD, thus improving their real-world functioning.

Keywords: ASD, social skills, cognitive training, mobile app

Procedia PDF Downloads 213
816 Multivariate Ecoregion Analysis of Nutrient Runoff From Agricultural Land Uses in North America

Authors: Austin P. Hopkins, R. Daren Harmel, Jim A Ippolito, P. J. A. Kleinman, D. Sahoo

Abstract:

Field-scale runoff and water quality data are critical to understanding the fate and transport of nutrients applied to agricultural lands and to minimizing their off-site transport, because it is at that scale that agricultural management decisions are typically made based on hydrologic, soil, and land use factors. However, regional influences such as precipitation, temperature, and prevailing cropping systems and land use patterns also affect nutrient runoff. In the present study, the recently updated MANAGE (Measured Annual Nutrient loads from Agricultural Environments) database was used to conduct an ecoregion-level analysis of nitrogen and phosphorus runoff from agricultural lands in North America. Annual N and P runoff loads for croplands and grasslands in North American Level II EPA ecoregions are presented, and the impacts of factors such as land use, tillage, and fertilizer timing and placement on N and P runoff are analyzed. Specifically, we compiled annual N and P runoff load data (i.e., dissolved, particulate, and total N and P, kg/ha/yr) for each Level II EPA ecoregion and for various agricultural management practices (i.e., land use, tillage, fertilizer timing, fertilizer placement) within each ecoregion to showcase the analyses possible with the data in MANAGE. Potential differences in N and P runoff loads were evaluated between and within ecoregions with statistical and graphical approaches. Because the data were not normally distributed, non-parametric analyses (mainly Mann-Whitney tests) were conducted in R on median values weighted by the site-years of data, and Dunn tests and box-and-whisker plots were used to evaluate significant differences visually and statistically. Of the 50 North American ecoregions, 11 had sufficient data and site-years to be included in the analysis. When examining ecoregions alone, ER 9.2 (Temperate Prairies) had a significantly higher total N, at 11.7 kg/ha/yr, than ER 9.4 (South Central Semi-Arid Prairies), with a total N of 2.4 kg/ha/yr. For total P, ER 8.5 (Mississippi Alluvial and Southeast USA Coastal Plains) had a higher load, at 3.0 kg/ha/yr, than ER 8.2 (Southeastern USA Plains), with a load of 0.25 kg/ha/yr. Tillage and land use had pronounced effects on nutrient loads. In ER 9.2 (Temperate Prairies), conventional tillage had a total N load of 36.0 kg/ha/yr, while conservation tillage had a total N load of 4.8 kg/ha/yr. In all relevant ecoregions, when corn was the predominant land use, total N levels were significantly higher than for grassland or other grains; in ER 8.4 (Ozark-Ouachita), corn had a total N of 22.1 kg/ha/yr, while grazed grassland had a total N of 2.9 kg/ha/yr. The interactions among agricultural management practices, their combination with ecological conditions, and their impacts on continental aquatic nutrient loads still need to be explored. This research provides a stepping stone toward further understanding of land and resource stewardship and best management practices.
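
Because the load data are not normally distributed, between-ecoregion differences are tested non-parametrically. A minimal sketch of such a comparison follows, in Python rather than the R used in the study; the numbers are invented for illustration, and the site-year weighting is omitted:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical annual total-N loads (kg/ha/yr) for two ecoregions; the real
# MANAGE records are weighted by site-years, which this sketch omits.
er_9_2 = np.array([10.5, 12.3, 11.9, 13.0, 9.8])
er_9_4 = np.array([2.1, 2.6, 2.2, 2.8, 2.4])

stat, p = mannwhitneyu(er_9_2, er_9_4, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # small p: median loads differ
```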

Keywords: water quality, ecoregions, nitrogen, phosphorus, agriculture, best management practices, land use

Procedia PDF Downloads 79
815 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method

Authors: Rui Wu

Abstract:

In the volatile modern manufacturing environment, new orders arrive randomly at any time, making pre-emptive (offline) scheduling methods infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, and each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms and simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, so they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule-selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of jobs, machines, work centres, and the flexible job shop are selected to describe the status of the problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-network is applied to select a suitable composite dispatching rule based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in the work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. Simulation results show that the proposed framework achieves reasonable performance and time efficiency.
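
A minimal sketch of the rule-selection step follows; the feature set, network shape, and rule names are illustrative assumptions, and the paper's exact double-layer architecture is not reproduced here:

```python
import torch
import torch.nn as nn

# Hypothetical feature vector describing one work centre at a decision point
# (e.g., queue length, mean processing time, machine utilisation, due-date slack).
N_FEATURES = 8
RULES = ["SPT+EDD", "FIFO+slack", "LPT+CR", "WSPT"]  # assumed composite rules

q_net = nn.Sequential(           # a stand-in for one layer of the double-layer
    nn.Linear(N_FEATURES, 64),   # design; the paper's architecture is not given
    nn.ReLU(),
    nn.Linear(64, len(RULES)),
)

def select_rule(state: torch.Tensor) -> str:
    """Greedy rule selection from Q-values (epsilon-exploration omitted)."""
    with torch.no_grad():
        q_values = q_net(state)
    return RULES[int(torch.argmax(q_values))]

print(select_rule(torch.randn(N_FEATURES)))
```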

Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning

Procedia PDF Downloads 108
814 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home

Authors: Laura M. F. Bertens

Abstract:

The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show; each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to gain a deeper understanding of the ways in which the home functions as, and feels like, a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection of Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. Secondly, works by Do Ho Suh will help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc. are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim have used the history and memories of the house as a starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories; each new family 'resets' the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and of the ways in which art can represent these.

Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace

Procedia PDF Downloads 159
813 Design of Nanoreinforced Polyacrylamide-Based Hybrid Hydrogels for Bone Tissue Engineering

Authors: Anuj Kumar, Kummara M. Rao, Sung S. Han

Abstract:

Bone tissue engineering has emerged as a potential alternative method for treating localized bone defects or diseases, congenital deformation, and surgical reconstruction. Designing and fabricating the ideal scaffold is a great challenge in restoring damaged bone tissues via cell attachment, proliferation, and differentiation in a three-dimensional (3D) biological micro-/nano-environment. A hydrogel system composed of a highly hydrophilic 3D polymeric network can mimic some of the functional physical and chemical properties of the extracellular matrix (ECM) and may provide a suitable 3D micro-/nano-environment resembling native bone tissues. Such a hydrogel system is highly permeable and facilitates the transport of nutrients and metabolites. However, the use of hydrogels in bone tissue engineering is limited by their low mechanical properties (toughness and stiffness), which continue to pose challenges in designing and fabricating tough and stiff hydrogels with improved bioactive properties. For this purpose, in our lab, polyacrylamide-based hybrid hydrogels were synthesized from sodium alginate, cellulose nanocrystals, and silica-based glass using one-step free-radical polymerization. The results showed good in vitro apatite-forming ability (biomineralization), improved mechanical properties (compressive strength and stiffness in both wet and dry conditions), and in vitro osteoblastic (MC3T3-E1 cells) cytocompatibility. For the in vitro cytocompatibility assessment, both qualitative (attachment and spreading of cells using FESEM) and quantitative (cell viability and proliferation using the MTT assay) analyses were performed. The obtained hybrid hydrogels may potentially be used in bone tissue engineering applications once in vivo characterization is established.

Keywords: bone tissue engineering, cellulose nanocrystals, hydrogels, polyacrylamide, sodium alginate

Procedia PDF Downloads 151
812 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition

Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi

Abstract:

Heat transfer in power law fluids across cylindrical surfaces has copious engineering applications, comprising areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer, and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The quantity of research on combined heat transfer and fluid flow across porous media under diverse conditions has increased considerably over the last few decades. The non-Newtonian fluids of most practical interest are highly viscous and are therefore often processed in the laminar flow regime. Several studies have investigated the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power law fluids along a wedge. In this study, a boundary layer analysis of radiation-mixed convection in power law fluids along a vertical wedge in a porous medium has been carried out using an implicit finite difference method (the Keller box method). Steady, two-dimensional laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq, and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone, and plate through a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented and plotted. The effects of the radiation parameter, the variable heat flux parameter, the wedge angle parameter 'm', and the mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids. The results are compared with the available data over the studied range of parameters and found to be in good agreement. Detailed results for the dimensionless local Nusselt number and the temperature and velocity fields are also presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or wedge angle increases, the Nusselt number decreases, whereas it increases with the heat flux parameter at a given value of the mixed convection parameter. It is also observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids transfer heat more effectively than Newtonian fluids, which in turn outperform dilatant fluids. All fluids behave identically in the pure forced convection domain.
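
For context, the Rosseland (diffusion) approximation invoked here expresses the radiative heat flux normal to the surface in its standard textbook form; this is not reproduced from the paper:

```latex
q_r = -\frac{4\sigma^*}{3k^*}\,\frac{\partial T^4}{\partial y},
\qquad
T^4 \approx 4T_\infty^3 T - 3T_\infty^4
```

where \sigma^* is the Stefan-Boltzmann constant, k^* is the mean absorption coefficient, and the linearization of T^4 about the free-stream temperature T_\infty is the usual step that makes the energy equation tractable.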

Keywords: porous medium, power law fluids, surface heat flux, vertical wedge

Procedia PDF Downloads 312
811 Antimicrobial and Anti-Biofilm Activity of Non-Thermal Plasma

Authors: Jan Masak, Eva Kvasnickova, Vladimir Scholtz, Olga Matatkova, Marketa Valkova, Alena Cejkova

Abstract:

Microbial colonization of medical instruments, catheters, implants, etc. is a serious problem in the spread of nosocomial infections. Biofilms exhibit enormous resistance to environmental stresses: the resistance of biofilm populations to antibiotics or biocides often increases by two to three orders of magnitude compared with suspension populations. Of interest are substances or physical processes that primarily destroy the biofilm, so that the released cells can be killed by existing antibiotics. In addition, agents without a strongly lethal effect do not exert the significant selection pressure that would further enhance resistance. Non-thermal plasma (NTP) is defined as a neutral, ionized gas composed of particles (photons, electrons, positive and negative ions, free radicals, and excited or non-excited molecules) in permanent interaction. In this work, the effect of NTP generated by a cometary corona with a metallic grid on the formation and stability of biofilm, and on the metabolic activity of cells in biofilm, was studied. NTP was applied to biofilm populations of Staphylococcus epidermidis DBM 3179, Pseudomonas aeruginosa DBM 3081, DBM 3777, ATCC 15442 and ATCC 10145, Escherichia coli DBM 3125, and Candida albicans DBM 2164 grown on solid media in Petri dishes and on the surface of the titanium alloy (Ti6Al4V) used for the production of joint replacements. Erythromycin (for S. epidermidis), polymyxin B (for E. coli and P. aeruginosa), amphotericin B (for C. albicans), and ceftazidime (for P. aeruginosa) were used to study the combined effect of NTP and antibiotics. Biofilms were quantified by the crystal violet assay. The metabolic activity of cells in biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) colorimetric test, based on the reduction of MTT to formazan by the dehydrogenase system of living cells. Fluorescence microscopy was applied to visualize the biofilm on the titanium alloy surface; SYTO 13 was used as a fluorescence probe to stain cells in the biofilm. Biofilm populations of all studied microorganisms proved very sensitive to the type of NTP used. The inhibition zone of biofilm recorded after 60 minutes of exposure to NTP exceeded 20 cm², except for P. aeruginosa DBM 3777 and ATCC 10145, where it was about 9 cm². The metabolic activity of cells in biofilm also differed among microbial strains. High sensitivity to NTP was observed in S. epidermidis, in which the metabolic activity of biofilm decreased to 15% after 30 minutes of NTP exposure and to 1% after 60 minutes. Conversely, the metabolic activity of C. albicans cells decreased to 53% after 30 minutes of NTP exposure; nevertheless, this result can be considered very good. Suitable combinations of NTP exposure time and antibiotic concentration achieved, in most cases, a remarkable synergistic reduction of the metabolic activity of the cells in the biofilm. For example, in the case of P. aeruginosa DBM 3777, a combination of 30 minutes of NTP with 1 mg/l of ceftazidime resulted in a decrease in metabolic activity to below 4%.

Keywords: anti-biofilm activity, antibiotic, non-thermal plasma, opportunistic pathogens

Procedia PDF Downloads 184
810 The Singapore Innovation Web and Facilitation of Knowledge Processes

Authors: Ola Jon Mork, Irina Emily Hansen

Abstract:

The European Growth Strategy Program calls for more efficient methods of knowledge creation and innovation. This study contributes new insights into the Singapore Innovation System, more precisely into how knowledge processes are facilitated. The research material was collected through visits to the different innovation locations in Singapore and in-depth interviews with key persons; the innovation actors' websites and brochures were studied, along with governmental reports and figures. The findings show that the facilitation of knowledge processes in the Singapore Innovation System has a basic structure with three processes: 1) idea capturing, 2) technology and business execution, and 3) idea realization. Dedicated innovation parks work with the most promising entrepreneurs, more precisely by finding the persons with the motivation to 'change the world'. The innovation park facilitates these entrepreneurs for 100 days, during which they are also connected to a global network of venture capital and have access to mentors from these venture companies. Research institute parks work on the development of world-leading technology. To facilitate knowledge development, they connect with the industrial companies that are the most promising appliers of their technology. Knowledge facilitation is the main purpose, but this cooperation and testing also serves as a platform for funding, and it is probably attractive to world-leading companies as well. Other dedicated innovation parks facilitate innovators of new applications and the perfection of products for the end-user. These parks can specialize in particular areas, such as health and life science products; another example is automotive companies issuing research calls to these parks to develop and innovate new products and services based on their technology. Common characteristics of knowledge facilitation in the Singapore Innovation System are a short trial period for promising actors, normally 100 days, and a strong focus on training the entrepreneurs. Presentations and the diffusion of knowledge are an important part of the facilitation, and funding is available for the most successful entrepreneurs and innovators.

Keywords: knowledge processes, facilitation, innovation, Singapore innovation web

Procedia PDF Downloads 297
809 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning (DL) model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, from which key markers of poverty and slums (roofing and road quality) are discernible. It is important to note, however, that the human readers did not receive any training before the ratings; had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship: eXplainable Artificial Intelligence through a collaborative rather than a comparative framework.
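
The reader-versus-model comparison rests on rank correlation against the survey's wealth quintiles. A minimal sketch of that computation, with invented ratings for ten hypothetical clusters:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical wealth-quintile ratings for 10 survey clusters (1 = poorest,
# 5 = richest): survey ground truth vs. ratings estimated from imagery.
ground_truth = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 1])
estimated    = np.array([1, 1, 3, 3, 2, 4, 5, 4, 5, 2])

rho, p = spearmanr(ground_truth, estimated)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```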

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 105
808 Use of Extended Conversation to Boost Vocabulary Knowledge and Soft Skills in English for Employment Classes

Authors: James G. Matthew, Seonmin Huh, Frank X. Bennett

Abstract:

English for Specific Purposes (ESP) aims to equip learners with the necessary English language skills. Many ESP programs address language skills for job performance, including reading job-related documents and oral proficiency. Within ESP is English for Occupational Purposes (EOP), which centers on developing communicative competence for the globalized workplace. Many ESP and EOP courses lack the content needed to help students progress at work, resulting in the need to create lexical compilations for different professions. It is important to teach communicative competence and soft skills for real job-related problem situations and to address the complexities of the real world, so as to help students succeed in their professions. ESP and EOP research therefore tries to balance profession-specific educational content with international, multi-disciplinary language skills for the globalized workforce. The current study builds upon this discussion by developing pedagogy to assist students in their careers through a strong practical command of relevant English vocabulary. Our research question focuses on the pedagogy two professors incorporated into their English for employment courses. The current study is a qualitative case study of the modes of teaching delivery for EOP in South Korea. Two foreign professors teaching at two different universities in South Korea volunteered to let the study explore their teaching practices. Both professors' curricula included employment-related concept vocabulary, business presentations, CV/resume and cover letter preparation, and job interview preparation. All pre-recorded video lectures, live online class sessions with students, the teachers' lesson plans and class materials, the students' assignments, and the midterm and final video conferences were collected for data analysis. The study then focused on unpacking representative patterns in the professors' teaching methods. The professors used their strengths as native speakers to extend class discussion from narrow, restricted conversations to broader opportunities for students to practice authentic English conversation. Their teaching followed three main steps to extend the conversation. Firstly, students were taught concept vocabulary. Secondly, the vocabulary was combined into speaking activities in which students had to solve scenarios and were required to expand on the given word forms and language expressions. Lastly, the students held conversations in English using the language learnt. The conversations observed in both classes were authentic, expanded English communication, and this way of expanding concept vocabulary lessons into extended conversation was one representative pedagogical approach both professors took. Extended English conversation, therefore, is crucial for EOP education.

Keywords: concept vocabulary, english as a foreign language, english for employment, extended conversation

Procedia PDF Downloads 92
807 Multimedia Container for Autonomous Car

Authors: Janusz Bobulski, Mariusz Kubanek

Abstract:

The main goal of this research is to develop a multimedia container structure holding three types of images: RGB, lidar, and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating, saving, and restoring this type of file. It is also necessary to develop a method for synchronizing data from the lidar, RGB, and infrared cameras. Such a file could be used in autonomous vehicles and would facilitate data processing by the intelligent autonomous vehicle management system. Autonomous cars are increasingly entering public awareness, and no one seems to doubt that self-driving cars are the future of motoring. Manufacturers promise that the first of them will reach showrooms within the next few years, and many experts believe that a network of communicating autonomous cars could eliminate accidents entirely. To make this possible, however, effective methods of detecting objects around a moving vehicle must be developed. In bad weather conditions, this task is difficult on the basis of the RGB (red, green, blue) image alone; in such situations, the system should be supported by information from other sources, such as lidar or infrared cameras. The problem is that each type of device returns data in a different format, and beyond these differences there are the problems of synchronizing and formatting the data. The goal of the project is therefore to develop a file structure that can contain these different types of data: a multimedia container. A multimedia container holds many data streams, which allows complete multimedia material to be stored in one file. The data streams in such a container should include streams of images, video, and sound, subtitles, and additional information, i.e., metadata. As preliminary studies show, combining RGB and infrared images with lidar data allows for easier data analysis; thanks to this, it will be possible to display the distance to an object in a color photo. Such information can be very useful for drivers and for the systems in autonomous cars.
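
A minimal sketch of what such a container could look like in code; all field names are assumptions for illustration, as the paper does not specify a layout. The key point is that the three calibrated streams are grouped under a common timestamp so they stay synchronized:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Frame:
    timestamp_ns: int         # common clock used for sensor synchronization
    rgb: np.ndarray           # HxWx3 uint8 color image
    infrared: np.ndarray      # HxW thermal image
    lidar_points: np.ndarray  # Nx4 array: x, y, z, intensity

@dataclass
class MultimediaContainer:
    calibration: dict         # extrinsics/intrinsics aligning the three sensors
    metadata: dict            # vehicle, route, sensor models, etc.
    frames: List[Frame] = field(default_factory=list)
```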

Keywords: an autonomous car, image processing, lidar, obstacle detection

Procedia PDF Downloads 226
806 Methodological Approach to the Elaboration and Implementation of the Spatial-Urban Plan for the Special Purpose Area: Case-Study of Infrastructure Corridor of Highway E-80, Section Nis-Merdare, Serbia

Authors: Nebojsa Stefanovic, Sasa Milijic, Natasa Danilovic Hristic

Abstract:

A spatial plan of a special purpose area constitutes a basic tool in the planning of a highway infrastructure corridor. The aim of the plan is to define the planning basis and provide spatial conditions for the construction and operation of the highway, as well as for developing other infrastructure systems in the corridor. This paper presents a methodology and approach to the preparation of the Spatial Plan for the special purpose area of the infrastructure corridor of highway E-80, Section Nis-Merdare, in Serbia. The applied methodological approach is based on the combined application of integrative and participatory methods in the decision-making process on the sustainable development of the highway corridor. It was found that the key problem in planning and managing an infrastructure corridor is the coordination of spatial and urban planning, strategic environmental assessment, and sectoral traffic planning and design. Throughout the development of the plan, special attention is given to increasing the accessibility of the local and regional surroundings, reducing adverse impacts on the development of settlements and the economy, protecting natural resources and natural and cultural heritage, and developing other infrastructure systems in the highway corridor. As a result of the applied methodology, this paper analyzes basic features such as coverage, the concept, protected zones, service facilities and objects, and the rules of development and construction. Special emphasis is placed on the methodology and results of the Strategic Environmental Assessment of the Spatial Plan, and on the importance of protection measures, particularly air and noise protection. For evaluation in the Strategic Environmental Assessment, a multicriteria expert evaluation (a semi-quantitative method) of the planned solutions was used in relation to the set of goals and relevant indicators, based on a basic set of sustainable development indicators. The evaluation of planned solutions encompassed the significance and size, spatial conditions, and probability of the impact of planned solutions on the environment, as well as the defined goals of the strategic assessment. The framework for implementing the Spatial Plan is presented, which provides for the simultaneous elaboration of planning solutions at two levels: the strategic level of the spatial plan and the detailed urban plan level. The relationship of the Spatial Plan to other planning documents applicable to the planning area is also analyzed. The effects of this methodological approach lie in enabling integrated planning of the sustainable development of the highway infrastructure corridor and its surrounding area, through the coordination of spatial, urban, and sectoral traffic planning and design, and through the participation of all key actors in the adoption and implementation of planning decisions. The conclusions point to directions for further research, particularly the harmonization of the methodology of planning documentation with the preparation of technical design documentation.

Keywords: corridor, environment, highway, impact, methodology, spatial plan, urban

Procedia PDF Downloads 212
805 Modeling Taxane-Induced Peripheral Neuropathy Ex Vivo Using Patient-Derived Neurons

Authors: G. Cunningham, E. Cantor, X. Wu, F. Shen, G. Jiang, S. Philips, C. Bales, Y. Xiao, T. R. Cummins, J. C. Fehrenbacher, B. P. Schneider

Abstract:

Background: Taxane-induced peripheral neuropathy (TIPN) is the most devastating survivorship issue for patients receiving taxane therapy. Dose reductions due to TIPN in the curative setting lead to inferior outcomes for African American patients, as prior research has shown that this group is more susceptible to developing severe neuropathy. The mechanistic underpinnings of TIPN, however, have not been entirely elucidated. While it would be appealing to use primary tissue to study the development of TIPN, procuring nerves from patients is not realistically feasible, as nerve biopsies are painful and may result in permanent damage. Therefore, our laboratory has investigated paclitaxel-induced neuronal morphological and molecular changes using an ex vivo model of human induced pluripotent stem cell (iPSC)-derived neurons. Methods: iPSCs are undifferentiated, endlessly dividing cells that can be generated from a patient's somatic cells, such as peripheral blood mononuclear cells (PBMCs). We successfully reprogrammed PBMCs into iPSCs using the Erythroid Progenitor Reprogramming Kit (STEMCell Technologies™); pluripotency was verified by flow cytometry analysis. iPSCs were then induced into neurons using a differentiation protocol that bypasses the neural progenitor stage and uses selected small-molecule modulators of key signaling pathways (SMAD, Notch, and FGFR1 inhibition, and Wnt activation). Results: Flow cytometry analysis revealed that the expression of the core pluripotency transcription factors Nanog, Oct3/4, and Sox2 in our iPSCs overlapped with that of the commercially purchased pluripotent cell line UCSD064i-20-2. Trilineage differentiation of the iPSCs was confirmed by immunofluorescent imaging with germ-layer-specific markers: Sox17 and FoxA2 for endoderm, Nestin and Pax6 for ectoderm, and Ncam and Brachyury for mesoderm. The sensory neuron markers β-III tubulin and peripherin were used to stain the cells and assess the maturity of the iPSC-derived neurons. Patch-clamp electrophysiology and calcitonin gene-related peptide (CGRP) release data supported the functionality of the induced neurons and provided insight into the timing at which downstream assays could be performed (week 4 post-induction). We also performed a cell viability assay and fluorescence-activated cell sorting (FACS) using four cell-surface markers (CD184, CD44, CD15, and CD24) to select a neuronal population; at least 70% of the cells in the isolated neuron population were viable. Conclusion: These iPSC-derived neurons recapitulate mature neuronal phenotypes and demonstrate functionality. They thus represent a patient-derived ex vivo neuronal model for investigating the molecular mechanisms of clinical TIPN.

Keywords: chemotherapy, iPSC-derived neurons, peripheral neuropathy, taxane, paclitaxel

Procedia PDF Downloads 122
804 Connectomic Correlates of Cerebral Microhemorrhages in Mild Traumatic Brain Injury Victims with Neural and Cognitive Deficits

Authors: Kenneth A. Rostowsky, Alexander S. Maher, Nahian F. Chowdhury, Andrei Irimia

Abstract:

The clinical significance of cerebral microbleeds (CMBs) due to mild traumatic brain injury (mTBI) remains unclear. Here we use magnetic resonance imaging (MRI), diffusion tensor imaging (DTI), and connectomic analysis to investigate the statistical association between mTBI-related CMBs, post-TBI changes to the human connectome, and neurological/cognitive deficits. This study was undertaken in agreement with US federal law (45 CFR 46) and was approved by the Institutional Review Board (IRB) of the University of Southern California (USC). Two groups, one consisting of 26 (13 female) mTBI victims and another comprising 26 (13 female) healthy control (HC) volunteers, were recruited through IRB-approved procedures. The acute Glasgow Coma Scale (GCS) score was available for each mTBI victim (mean µ = 13.2; standard deviation σ = 0.4). Each HC volunteer was assigned a GCS of 15 to indicate the absence of head trauma at the time of enrollment in our study. Volunteers in the HC and mTBI groups were matched by sex and age (HC: µ = 67.2 years, σ = 5.62 years; mTBI: µ = 66.8 years, σ = 5.93 years). MRI [including T1- and T2-weighted volumes and gradient recalled echo (GRE)/susceptibility weighted imaging (SWI)] and gradient echo (GE) DWI volumes were acquired using the same MRI scanner type (Trio TIM, Siemens Corp.). Skull-stripping and eddy current correction were implemented. DWI volumes were processed in TrackVis (http://trackvis.org) and 3D Slicer (http://www.slicer.org). Tensors were fit to the DWI data to perform DTI, and tractography streamlines were then reconstructed using deterministic tractography. A voxel classifier was used to identify image features as CMB candidates using Microbleed Anatomic Rating Scale (MARS) guidelines. For each peri-lesional DTI streamline bundle, the null hypothesis was that there is no neurological or cognitive deficit associated with between-scan differences in the mean fractional anisotropy (FA) of the DTI streamlines within the bundle. The statistical significance of each hypothesis test was calculated at the α = 0.05 level, subject to the family-wise error rate (FWER) correction for multiple comparisons. Results: In HC volunteers, the along-track analysis failed to identify statistically significant differences in the mean FA of DTI streamline bundles. In the mTBI group, significant differences in the mean FA of peri-lesional streamline bundles were found in 21 of the 26 volunteers. In the volunteers with significant differences, these differences were associated with an average of ~47% of all identified CMBs (σ = 21%). In 12 of the 21 volunteers exhibiting significant FA changes, cognitive functions (memory acquisition and retrieval, top-down control of attention, planning, judgment, and cognitive aspects of decision-making) were found to have deteriorated over the six months following injury (r = -0.32, p < 0.001). Our preliminary results suggest that acute post-TBI CMBs may be associated with cognitive decline in some mTBI patients. Future research should attempt to identify mTBI patients at high risk for cognitive sequelae.
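
The per-bundle tests are corrected for multiple comparisons at the family-wise level. A minimal sketch using the simplest FWER correction, Bonferroni; the paper does not state which procedure was applied, and the p-values below are invented:

```python
import numpy as np

# Hypothetical per-bundle p-values from tests on peri-lesional streamline
# bundles in one subject.
p_values = np.array([0.001, 0.008, 0.030, 0.20, 0.44])
alpha = 0.05
m = len(p_values)

significant = p_values < alpha / m  # reject H0 at the Bonferroni threshold
print(significant)                  # [ True  True False False False]
```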

Keywords: traumatic brain injury, magnetic resonance imaging, diffusion tensor imaging, connectomics

Procedia PDF Downloads 170
803 The City Narrated from the Hill, Evaluation of Natural Fabric in Urban Plans: A Case Study of Santiago de Chile

Authors: Monica Sanchez

Abstract:

What responsibility does urban planning bear for climate change? How does the territory offer us answers of resilience? Historically, urban plans have civilized territories: waters are channeled, grounds are sealed, foreign species are introduced, native ones are extinguished, and enclosed spaces are heated or cooled. Socially this facilitates coexistence, but it also brings negative environmental consequences. For the past fifty years, humankind has tried to redirect these consequences through different strategies, and research has produced strategies designed to alleviate climate change; exploring the nature of territories has been incorporated into urban planning to discover nature's response. The case studied here is Santiago, Chile, chosen for the combined impacts of climate change it faces and for the city's significant role in climate governance over recent decades. The warmest areas of Santiago coincide with high-density building, as in the commune of Recoleta, while the coolest are characterized by predominantly low residential densities, as in the commune of Providencia. These two communes are separated and complemented by an undulating body descending from the Andes called San Cristobal Hill. What if the hill had been taken into account when laying out roads, zoning, and buildings? Was it difficult to extend the hill's characteristics into the city in the urban plans and to resolve its intersection with other natural areas? Apparently it was, because the projected profile shows that the planned strategies correspond to the same operations used in the flat areas of Santiago. This research focuses on explaining the geographic relationships between city and hill; explaining the planning process around the hill through a morphological analysis; evaluating how the hill has been considered in the city plans intended to cushion environmental impacts; and studying what the hill and the city still lack to strengthen their integration. The research therefore works at different scales of understanding: the territorial scale, understanding vegetation, topography, and hydrology; the city scale, analyzing the urban plans through which Santiago has dealt with the environment; and the local scale, studying the integration, public spaces, and coverage norms of the adjacent communes. The expected outcome is to identify the deficits and capabilities of the current urban plans in the face of climate change. It appears that hill and valley are now trying to reconcile after a long separation, yet the rules of nature will never fully prevail over urban rules. The plans will still require pruning, irrigation, control of invasive alien species, and public safety standards, but they will reintroduce a dose of nature into the built environment; this will protect us better than in the times when we feared nature and knew little about it. Today we know a little more, enough to adapt to the process. Although nature often goes unperceived and ignored, it has a remarkable ability to respond.

Keywords: resilience, climate change, urban plans, land use, hills and cities, heat islands, morphology

Procedia PDF Downloads 367
802 Multi-Criterial Analysis: Potential Regions and Height of Wind Turbines, Rio de Janeiro, Brazil

Authors: Claudio L. M. Souza, Milton Erthal, Aldo Shimoya, Elias R. Goncalves, Igor C. Rangel, Allysson R. T. Tavares, Elias G. Figueira

Abstract:

The process of choosing a region for the implementation of wind farms involves factors such as the wind regime, economic viability, land value, topography, and accessibility. This work presents results obtained by multi-criteria decision analysis, establishing a hierarchy for the installation of wind farms among geopolitical regions in the state of Rio de Janeiro, Brazil: 'Regiao Norte-RN', 'Regiao dos Lagos-RL', and 'Regiao Serrana-RS'. The wind regime map indicates only these three possible regions with an average annual wind speed above 6.0 m/s. The method applied was the Analytic Hierarchy Process (AHP), designed to prioritize and rank the three regions based on four criteria: 1) site potential, with average wind speeds above 6.0 m/s; 2) average land value; 3) distribution and interconnection to the electric network, favoring the highest number of electricity stations; and 4) accessibility, in terms of proximity to and quality of highways and flat topography. Energy generation values were calculated for wind turbines 50, 75, and 100 meters high, considering site production (GWh/km²) and annual production (GWh). The weight of each criterion was attributed by six engineers and by analysis of the Road Map, the Map of the Electric System, the Map of the Wind Regime, and the Annual Land Value Report. The results indicated that in 'RS' the demand was estimated at 2,000 GWh, so a wind farm can operate efficiently with 50 m turbines; this region is mainly mountainous, with difficult access and lower land values. In 'RL', the wind turbines have to be installed at a height of 75 m to meet a demand of 6,300 GWh; this region is very flat, with easy access and low land values. Finally, 'RN' was evaluated as very flat but with expensive land; here, 100 m wind turbines can reach an annual production of 19,000 GWh, and the coastal area was classified as having the greatest logistic, productivity, and economic potential.
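
In AHP, the criterion weights are obtained from the principal eigenvector of a pairwise comparison matrix on Saaty's 1-9 scale. A minimal sketch of that step; the judgments below are invented for illustration, whereas the study's actual weights came from six engineers and the map analyses:

```python
import numpy as np

# Assumed pairwise judgments for the four criteria, in the order:
# wind potential, land value, grid interconnection, accessibility.
A = np.array([
    [1.0, 5.0, 3.0, 3.0],
    [1/5, 1.0, 1/3, 1/2],
    [1/3, 3.0, 1.0, 1.0],
    [1/3, 2.0, 1.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))         # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # criterion weights, summing to 1

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)  # consistency index
print("weights:", np.round(w, 3), "CI:", round(CI, 3))
```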

Keywords: AHP, renewable energy, wind energy

Procedia PDF Downloads 151
801 Verification of a Simple Model for Rolling Isolation System Response

Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly

Abstract:

Rolling Isolation Systems (RISs) are simple and effective means of mitigating earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components from floor accelerations, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner; steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange's equations (LE) and experimentally validated. A new mathematical model has now been developed using Gauss's Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models of the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss's Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, its equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to prevent numerical integration from driving the system unstable. The GPLC model also allows more physical aspects of the RIS to be incorporated, such as the contribution of the platform's vertical velocity to the kinetic energy and the mass of the balls. This mathematical model of the RIS is a tool to predict the motion of the isolation platform, and the ability to statistically quantify the expected responses of the RIS is critical to implementing earthquake hazard mitigation.
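
For context, Gauss's principle can be stated compactly in its standard textbook form (not reproduced from the paper): among all accelerations consistent with the constraints, the true acceleration minimizes the Gaussian

```latex
Z(\ddot{q}) \;=\; \tfrac{1}{2}\,\bigl(\ddot{q} - M^{-1}Q\bigr)^{\top} M \,\bigl(\ddot{q} - M^{-1}Q\bigr)
\quad \text{subject to} \quad A(q,\dot{q},t)\,\ddot{q} = b(q,\dot{q},t)
```

where M is the mass matrix, Q is the applied generalized force, and the rolling constraints enter through A and b. Because the constraint is enforced at the acceleration level, integration drift must be controlled by constraint stabilization, as noted above.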

Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system

Procedia PDF Downloads 250
800 Effect of Different Contaminants on Mineral Insulating Oil Characteristics

Authors: H. M. Wilhelm, P. O. Fernandes, L. P. Dill, C. Steffens, K. G. Moscon, S. M. Peres, V. Bender, T. Marchesan, J. B. Ferreira Neto

Abstract:

Deterioration of insulating oil is a natural process that occurs during transformer operation. However, this process can be accelerated by factors such as oxygen, high temperatures, metals, and moisture, which rapidly reduce the oil's insulating capacity and favor transformer faults. The construction materials of a transformer can degrade and yield soluble compounds and insoluble particles that shorten the equipment's life. Physicochemical tests, dissolved gas analysis (including propane, propylene, and butane), determination of volatile and furanic compounds, and quantitative and morphological analyses of particulates are proposed in this study in order to correlate the degradation of transformer construction materials with insulating oil characteristics. The investigation involves simulated medium-temperature overheating by means of an electric resistance wrapped with the following materials and immersed in mineral insulating oil: test I) copper, tin, lead, and paper (heated at 350-400 °C for 8 h); test II) only copper (at 250 °C for 11 h); and test III) only paper (at 250 °C for 8 h and at 350 °C for 8 h). A different experiment simulates an electric arc involving copper, using an electric welding machine at two distinct energy settings (low and high). The results showed that dielectric loss was highest in the test I sample, which also exhibited a higher neutralization index and higher values of hydrogen and hydrocarbons, including propane and butane. Test III oil presented a higher particle count; in addition, ferrographic analysis revealed contamination with fibers and carbonized paper. However, these particles had little influence on the oil's physicochemical parameters (dielectric loss and neutralization index) and on gas production, which was very low. Test II oil showed high levels of methane, ethane, and propylene, indicating the effect of metal on oil degradation. CO2 and CO were formed in the highest concentrations in test III, as expected. Regarding volatile compounds, acetone, benzene, and toluene, which are oil oxidation products, were detected in test I; in test III, methanol was identified due to cellulose degradation, as expected. The electric arc simulation showed the strongest oil oxidation in the presence of copper and at high temperature, since these samples had very high concentrations of hydrogen, ethylene, and acetylene. The particle count was also very high, showing the greatest release of copper under such conditions. Comparing the high and low energy settings, the former produced more hydrogen, ethylene, and acetylene, and its results were more similar to test I, suggesting that the generation of different particles can be the cause of faults such as electric arcs. Ferrography showed more evident copper and exfoliation particles than in the other samples. Therefore, by using different combined analytical techniques, it was possible to correlate insulating oil characteristics with possible contaminants, which can lead to transformer failure.

Keywords: ferrography, gas analysis, insulating mineral oil, particle contamination, transformer failures

Procedia PDF Downloads 225
799 LHCII Proteins Phosphorylation Changes Involved in the Dark-Chilling Response in Plant Species with Different Chilling Tolerance

Authors: Malgorzata Krysiak, Anna Wegrzyn, Maciej Garstka, Radoslaw Mazur

Abstract:

Under constantly fluctuating environmental conditions, the thylakoid membrane protein network evolved the ability to dynamically respond to changing biotic and abiotic factors. One of the most important protective mechanisms is the rearrangement of the chlorophyll-protein (CP) complexes, induced by protein phosphorylation. In a temperate climate, low temperature is one of the abiotic stresses that most heavily affect plant growth and productivity. The aim of this study was to determine the role of LHCII antenna complex phosphorylation in the dark-chilling response. The study used an experimental model based on dark-chilling at 4 °C of detached leaves of chilling-sensitive (CS) runner bean (Phaseolus coccineus L.) and chilling-tolerant (CT) garden pea (Pisum sativum L.). This model is well described in the literature and is used to analyze the impact of chilling without any additional effects caused by light. We examined changes in thylakoid membrane protein phosphorylation, interactions between phosphorylated LHCII (P-LHCII) and CP complexes, and their impact on the dynamics of photosystem II (PSII) under dark-chilling conditions. Our results showed that dark-chilling treatment of CS bean leaves induced a substantial increase in the phosphorylation of LHCII proteins, as well as changes in CP complex composition and their interaction with P-LHCII. The PSII photochemical efficiency measurements showed that in bean, PSII is overloaded with light energy, which is not compensated by CP complex rearrangements. On the contrary, no significant changes in PSII photochemical efficiency, phosphorylation pattern, or CP complex interactions were observed in CT pea. In conclusion, our results indicate that different responses of LHCII phosphorylation to chilling stress take place in CT and CS plants, and that the kinetics of LHCII phosphorylation and the interactions of P-LHCII with photosynthetic complexes may be crucial to the chilling stress response. Acknowledgments: the presented work was financed by the National Science Centre, Poland, grant No. 2016/23/D/NZ3/01276.

Keywords: LHCII, phosphorylation, chilling stress, pea, runner bean

Procedia PDF Downloads 140
798 Polymer Impregnated Sulfonated Carbon Composite as a Solid Acid Catalyst for the Dehydration of Xylose to Furfural

Authors: Praveen K. Khatri, Neha Karanwal, Savita Kaul, Suman L. Jain

Abstract:

Conversion of biomass through green chemical routes is of great industrial importance, as biomass is considered the most widely available inexpensive renewable resource that can be used as a raw material for the production of biofuel and value-added organic products. In this regard, acid-catalyzed dehydration of biomass-derived pentose sugar (mainly D-xylose) to furfural is a process of tremendous research interest in the current scenario due to the wide industrial applications of furfural. Furfural is an excellent organic solvent for the refinement of lubricants and the separation of butadiene from butene mixtures in synthetic rubber fabrication. It also serves as a promising solvent for many organic materials, such as resins and polymers, and as a building block for the synthesis of various valuable chemicals such as furfuryl alcohol, furan, pharmaceuticals, agrochemicals, and THF. Herein, a sulfonated polymer-impregnated carbon composite solid acid catalyst (P-C-SO3H) was prepared by the pyrolysis of a polymer matrix impregnated with glucose, followed by sulfonation, and used for the dehydration of xylose to furfural. The developed catalyst exhibited excellent activity and provided almost quantitative conversion of xylose with selective synthesis of furfural. The higher catalytic activity of P-C-SO3H may be due to the more even distribution of polycyclic aromatic hydrocarbons, generated from incomplete carbonization of glucose, along the polymer matrix network, leading to more sites available for sulfonation and thus a greater sulfonic acid density in P-C-SO3H compared to the sulfonated carbon catalyst (C-SO3H). In conclusion, we have demonstrated the sulfonated polymer-impregnated carbon composite (P-C-SO3H) as an efficient and selective solid acid catalyst for the dehydration of xylose to furfural. After completion of the reaction, the catalyst was easily recovered and reused for several runs without noticeable loss of activity or selectivity.

Keywords: solid acid, biomass conversion, xylose dehydration, heterogeneous catalyst

Procedia PDF Downloads 409
797 Integrated Mass Rapid Transit System for Smart City Project in Western India

Authors: Debasis Sarkar, Jatan Talati

Abstract:

This paper is an attempt to develop an Integrated Mass Rapid Transit System (MRTS) for a smart city project in Western India. Integrated transportation is one of the enablers of smart transportation, providing a seamless intercity as well as regional-level travel experience. At the city level, the success of a smart city project with respect to transportation lies in properly integrating the different mass rapid transit modes by integrating information, physical infrastructure, route networks, fares, etc. The methodology adopted for this study was primary data research through a questionnaire survey. Respondents gave their perceptions of the ways and means to improve public transport services in urban cities and were asked to identify the factors and attributes that might motivate more people to shift toward the public mode. Respondents were also questioned about the factors they feel might restrain the integration of the various MRTS modes. Furthermore, this study develops a utility equation for respondents with the help of multiple linear regression analysis and, from it, the probability of shifting to public transport for the factors listed in the questionnaire, as sketched in the example below. It was observed that, for a shift to public transport, the most important factors to consider were travel time saving and comfort rating. An Integrated MRTS can be obtained by combining metro rail with BRTS, metro rail with monorail, monorail with BRTS, and metro rail with Indian Railways. Providing a common smart card to transport users for accessing all the available modes would be a pragmatic step toward integration of the available MRTS modes.
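A minimal sketch of such a utility regression follows, assuming a made-up survey matrix; the four attributes, rating scales, and coefficients are placeholders for illustration, since the abstract does not publish its data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical survey matrix: rows are respondents; the four attribute
# ratings (travel time saving, comfort, fare level, access distance)
# and all coefficients below are illustrative, not the paper's data.
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(120, 4))
y = (1.5 * X[:, 0] + 1.1 * X[:, 1]           # time saving, comfort dominate
     - 0.4 * X[:, 2] - 0.3 * X[:, 3]
     + rng.normal(0, 0.5, 120))              # stated willingness to shift

model = sm.OLS(y, sm.add_constant(X)).fit()  # utility equation by MLR
print(model.params)                          # fitted attribute weights
print(model.rsquared)
```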

Keywords: mass rapid transit systems, smart city, metro rail, bus rapid transit system, multiple linear regression, smart card, automated fare collection system

Procedia PDF Downloads 271
796 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company

Authors: Rahma Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving that vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the geographical information system data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed, as done by the Asset Information & GIS department. This paper describes the GIS data submission process in detail and the journey to develop the current process. The methodology used to develop the process rests on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, thanks to this process, GIS is ready to be integrated with other systems and to serve as the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based on GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations also helped to reduce the missing attributes of GIS transmission data. A considerable difference in geodatabase (GDB) completeness percentage was observed between 2017 and 2021 (a simple way of computing such a figure is sketched below). Overall, it is concluded that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.
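As an illustration of the completeness metric mentioned above, the following sketch computes the share of filled mandatory attribute cells in a toy asset table; the field names are hypothetical, not OETC's actual schema.

```python
import pandas as pd

# Toy completeness metric for a geodatabase export: the share of
# mandatory attribute cells that are filled. Field names are
# hypothetical placeholders.
assets = pd.DataFrame({
    "asset_id": [101, 102, 103],
    "voltage_kv": [132, None, 220],
    "commission_date": ["2017-03-01", None, None],
    "substation": ["A", "B", None],
})
mandatory = ["voltage_kv", "commission_date", "substation"]
completeness = assets[mandatory].notna().to_numpy().mean() * 100
print(f"GDB completeness: {completeness:.1f}%")  # 55.6% for this toy table
```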

Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS

Procedia PDF Downloads 207
795 Inkjet Printed Silver Nanowire Network as Semi-Transparent Electrode for Organic Photovoltaic Devices

Authors: Donia Fredj, Marie Parmentier, Florence Archet, Olivier Margeat, Sadok Ben Dkhil, Jorg Ackerman

Abstract:

Transparent conductive electrodes (TCEs), or transparent electrodes (TEs), are a crucial part of many electronic and optoelectronic devices such as touch panels, liquid crystal displays (LCDs), organic light-emitting diodes (OLEDs), solar cells, and transparent heaters. The indium tin oxide (ITO) electrode is the most widely utilized transparent electrode due to its excellent optoelectrical properties. However, the drawbacks of ITO, such as its high cost, the scarcity of indium, and its fragile nature, limit its application in large-scale flexible electronic devices. Flexibility is becoming more and more attractive, since flexible electrodes have the potential to open new applications that require transparent electrodes to be flexible, cheap, and compatible with large-scale manufacturing methods. So far, several alternative materials to ITO have been developed, including metal nanowires, conjugated polymers, carbon nanotubes, graphene, etc., which have been extensively investigated for use as flexible and low-cost electrodes. Among them, silver nanowires (AgNW) are one of the most promising alternatives to ITO thanks to their excellent properties: high electrical conductivity as well as desirable light transmittance. In recent years, inkjet printing has become a promising technique for large-scale printed flexible and stretchable electronics. However, inkjet printing of AgNWs still presents many challenges. In this study, a synthesis of stable AgNWs that could compete with ITO was developed, and this material was printed by inkjet technology directly onto a flexible substrate. Additionally, we analyzed the surface microstructure and the optical and electrical properties of the printed AgNW layers. Our further research focused on the study of all-inkjet-printed organic modules with high efficiency.

Keywords: transparent electrodes, silver nanowires, inkjet printing, formulation of stable inks

Procedia PDF Downloads 222
794 A Novel Nanocomposite Membrane Designed for the Treatment of Oil/Gas Produced Water

Authors: Zhaoyang Liu, Detao Qin, Darren Delai Sun

Abstract:

The onshore production of oil and gas (for example, shale gas) generates large quantities of wastewater, referred to as ‘produced water’, which contains high contents of oils and salts. The direct discharge of produced water, if not appropriately treated, can be toxic to the environment and human health. Membrane filtration has been deemed an environmentally friendly and cost-effective technology for treating oily wastewater. However, conventional polymeric membranes have the drawbacks of either a low salt rejection rate or a high membrane fouling tendency when treating oily wastewater. In recent years, forward osmosis (FO) membrane filtration has emerged as a promising technology with its unique advantages of low operating pressure and lower membrane fouling tendency. However, until now there has been no report of FO membranes specially designed and fabricated for treating oily, salty produced water. In this study, a novel nanocomposite FO membrane was developed specifically for treating oil- and salt-polluted produced water. Leveraging recent advances in nanomaterials and nanotechnology, this nanocomposite FO membrane was designed with two layers: an underwater-oleophobic selective layer on top of a nanomaterial-infused polymeric support layer. Graphene oxide (GO) nanosheets were added to the polymeric support layer because they can optimize its pore structure, potentially leading to high water flux for FO membranes. In addition, polyvinyl alcohol (PVA) hydrogel was selected as the selective layer because hydrated and chemically crosslinked PVA hydrogel is capable of simultaneously rejecting oil and salt. After the nanocomposite FO membranes were fabricated, the membrane structures were systematically characterized using TEM, FESEM, XRD, ATR-FTIR, surface zeta potential, and contact angle (CA) measurements. The membrane performance for treating produced waters was tested using TOC, COD, and ion chromatography. The working mechanism of this new membrane was also analyzed. Very promising experimental results have been obtained. The incorporation of GO nanosheets reduces the internal concentration polarization (ICP) effect in the polymeric support layer. The structural parameter (S value) of the new FO membrane is reduced by 23%, from 265 ± 31 μm to 205 ± 23 μm. The membrane tortuosity (τ value) is decreased by 20%, from 2.55 ± 0.19 to 2.02 ± 0.13, which contributes to the decrease in the S value. Moreover, the highly hydrophilic and chemically cross-linked hydrogel selective layer presents a high antifouling property under saline oil/water emulsions. Compared with a commercial FO membrane, this new FO membrane possesses three times higher water flux, higher removal efficiencies for oil (>99.9%) and salts (>99.7% for multivalent ions), and a significantly lower membrane fouling tendency (<10%). To our knowledge, this is the first report of a nanocomposite FO membrane with the combined merits of high salt rejection, high oil repellency, and high water flux for treating onshore oil/gas produced waters. Due to its outstanding performance and ease of fabrication, this novel nanocomposite FO membrane possesses great application potential in the wastewater treatment industry.
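The percent reductions quoted above can be checked directly; the sketch below does so and notes the standard FO relation S = τ·t/ε (support thickness t, porosity ε), which is our assumption for why a lower tortuosity lowers S.

```python
# Only the before/after values come from the abstract; the relation
# S = tau * t / eps (t: support thickness, eps: porosity) is the
# standard FO structural-parameter definition, assumed here to show
# why lower tortuosity pulls S down.
S_old, S_new = 265.0, 205.0           # structural parameter, micrometers
tau_old, tau_new = 2.55, 2.02         # tortuosity, dimensionless

print(f"S reduced by {(S_old - S_new) / S_old:.1%}")          # ~22.6% -> "23%"
print(f"tau reduced by {(tau_old - tau_new) / tau_old:.1%}")  # ~20.8% -> "20%"
```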

Keywords: nanocomposite, membrane, polymer, graphene oxide

Procedia PDF Downloads 249
793 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a foundational medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI’s development, spurred by the confluence of factors including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of the AI as a primary step toward the definition of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or distinct nature of the AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the top legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 94
792 Blockchain Is Facilitating Intercultural Entrepreneurship: Memoir of a Persian Non-Fungible Tokens Collection

Authors: Mohammad Afkhami, Saeid Reza Ameli Ranani

Abstract:

Since the invention of Bitcoin in 2008, blockchain technology has given rise to many innovations, and pioneer networks such as Ethereum can now host decentralized collections of information containing pictures, audio, video, domains, etc., or even a versatile metaverse avatar. The transformation of tangible goods into virtual assets, known as the AR-utility of luxury products, and the intermixture of reality and virtuality have organized a worldwide, semi-regulated, and decentralized marketplace for digital goods. Non-fungible tokens (NFTs) are of great help to artists worldwide, who share diverse cultural outlooks through the potential for remote cross-cultural collaboration while, at the same time, the middleman's role is transformed and the necessity of having a SWIFT-connected bank account ceases. Under critical sanctions, a group of artists in Tehran did not let pass such an opportunity to show their artworks undisturbed, offering an introspective attitude and employing Iranian motifs while intermingling westernized symbols. The cryptocurrency market has already acquired allocation and interest in the global domain, paving the way for flourishing enthusiasm among entrepreneurs who were previously preoccupied with high-tech start-ups. Through a project founded by Iranian female artists, we decipher the ups and downs of the new cyberculture, the environment it provides for fairly promoting artwork, and the obstacles it puts in the way of interested entrepreneurs, as we go through the details of starting up an NFT collection. An in-depth interview and empirical encounters with diverse Social Network Sites (SNS), together with the strategies that other successful projects deploy to sell their artworks in an international and, at the same time, anonymous market, form the main focus and shape the paper's fieldwork perspective. In conclusion, we discuss strategies for promoting an NFT project.

Keywords: NFT, metaverse, intercultural, art, illustration, start-up, entrepreneurship

Procedia PDF Downloads 101
791 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs (a second culturing stage of another 24 h), in order to determine bacterial susceptibility. Other methods, such as genotyping, the E-test, and automated methods, were also developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, new bioinformatics analyses combined with IR spectroscopy form a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli bacterial samples, identified at the species level by MALDI-TOF and examined for susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, are promising and show that by using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
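A minimal sketch of such a multivariate classification pipeline follows; PCA plus an SVM is an illustrative stand-in, since the abstract does not name its exact algorithm, and the spectra below are random placeholders that only make the pipeline runnable.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 700 isolates x 900 wavenumber absorbances, with
# labels 0 = sensitive, 1 = resistant for one antibiotic. The real
# study used measured FTIR spectra.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(700, 900))
labels = rng.integers(0, 2, size=700)

# Dimensionality reduction (PCA) followed by a classifier (SVM) is one
# common multivariate pipeline for spectra; treat it as an illustrative
# choice rather than the authors' method.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
print(cross_val_score(clf, spectra, labels, cv=5).mean())
```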

Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 172
790 Patients' Out-Of-Pocket Expenses-Effectiveness Analysis of Presurgical Teledermatology

Authors: Felipa De Mello-Sampayo

Abstract:

Background: The aim of this study is to undertake, from a patient perspective, an economic analysis of presurgical teledermatology, comparing it with a conventional referral system. Store-and-forward teledermatology allows surgical planning, saving both time and the number of visits involving travel, thereby reducing patients’ out-of-pocket expenses, i.e., the costs patients incur when traveling to and from health providers for treatment, visit fees, and the opportunity cost of time spent in visits. Method: The out-of-pocket expenses-effectiveness of presurgical teledermatology was analyzed in the setting of a public hospital over two years, using the mean delay in surgery to measure effectiveness. The teledermatology network covering the area served by the Hospital Garcia da Horta (HGO), Portugal, linked the primary care centers of 24 health districts with the hospital’s dermatology department. The patients’ opportunity cost of visits, travel costs, and visit fees of each presurgical modality (teledermatology and conventional referral), the cost ratio between the most and least expensive alternatives, and the incremental cost-effectiveness ratio were calculated from the initial primary care visit until surgical intervention. Two groups of patients were distinguished, those with squamous cell carcinoma and those with basal cell carcinoma, in order to compare effectiveness according to the dermatoses. Results: From a patient perspective, the conventional system was 2.15 times more expensive than presurgical teledermatology. Teledermatology had an incremental out-of-pocket expenses-effectiveness ratio of €1.22 per patient per day of delay avoided. This saving was greater in patients with squamous cell carcinoma than in patients with basal cell carcinoma. Conclusion: From a patient economic perspective, teledermatology used for presurgical planning and preparation dominates the conventional referral system in terms of out-of-pocket expenses-effectiveness, especially for patients with severe dermatoses.
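The reported figures imply straightforward incremental cost-effectiveness arithmetic; the sketch below works through it with a hypothetical absolute cost, since only the 2.15 cost ratio and the €1.22 ratio per day of delay avoided come from the abstract.

```python
# General form: ICER = (C_conventional - C_tele) / (E_conventional - E_tele),
# with effectiveness E measured as mean delay to surgery in days.
# Only the 2.15 cost ratio and the EUR 1.22 ICER are from the abstract;
# the absolute cost is a hypothetical placeholder.
cost_tele = 40.00                      # EUR per patient, hypothetical
cost_conv = 2.15 * cost_tele           # conventional referral is 2.15x dearer
icer = 1.22                            # EUR per day of delay avoided
delay_avoided = (cost_conv - cost_tele) / icer
print(f"{cost_conv:.2f} EUR vs {cost_tele:.2f} EUR; "
      f"about {delay_avoided:.0f} days of delay avoided implied")
```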

Keywords: economic analysis, out-of-pocket expenses, opportunity cost, teledermatology, waiting time

Procedia PDF Downloads 140
789 Research on Evaluation of Renewable Energy Technology Innovation Strategy Based on PMC Index Model

Authors: Xue Wang, Liwei Fan

Abstract:

Renewable energy technology innovation is an important way to realize the energy transformation. Our government has issued a series of policies to guide and support the development of renewable energy, and the implementation of these policies affects the further development, utilization, and technological innovation of renewable energy. In this context, it is of great significance to systematically sort out and evaluate renewable energy technology innovation policy in order to improve the existing policy system. Taking the 190 renewable energy technology innovation policies issued during 2005-2021 as a sample, this study uses text mining and content analysis, from the perspectives of issuing departments and policy keywords, to analyze the current state of the policies, and conducts a semantic network analysis to identify the core issuing departments and core policy topic words. A PMC (Policy Modeling Consistency) index model is built to quantitatively evaluate the selected policies: the overall pros and cons of a policy are analyzed through its PMC index, while the secondary-index values reflect the performance of each dimension of the policies issued by the core departments and related to the core topic words (a sketch of the index computation is given below). The research results show that renewable energy technology innovation policies focus on synergy between multiple departments, while the distribution of issuers is uneven over time; policies on different topics have their own emphases in terms of policy types, fields, functions, and support measures, but still need improvement, for example in the lack of policy forecasting and supervision functions, the lack of attention to product promotion, and the relatively limited range of support measures. Finally, this research puts forward policy optimization suggestions: promoting joint policy release, strengthening policy coherence and timeliness, enhancing the comprehensiveness of policy functions, and enriching incentive measures for renewable energy technology innovation.
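A minimal sketch of the PMC index arithmetic follows, under the usual convention that secondary variables are scored 0/1, each primary index is the mean of its secondary variables, and the PMC index is the sum of the primary indices; the indicator names are illustrative, not the paper's actual system.

```python
import numpy as np

# Sketch of a PMC (Policy Modeling Consistency) index computation for
# one policy. Secondary variables are binary scores; primary indices
# are their means; the PMC index is the sum of the primary indices.
secondary = {
    "policy_nature":    [1, 1, 0, 1],   # e.g. forecast, supervision, guidance
    "policy_function":  [1, 0, 1],
    "support_measures": [1, 1, 0, 0, 1],
}
primary = {name: float(np.mean(scores)) for name, scores in secondary.items()}
pmc_index = sum(primary.values())
print(primary)
print(f"PMC index = {pmc_index:.2f}")   # higher -> more internally consistent
```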

Keywords: renewable energy technology innovation, content analysis, policy evaluation, PMC index model

Procedia PDF Downloads 64