Search results for: continuous mode changing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6285

375 Microbial Fuel Cells: Performance and Applications

Authors: Andrea Pietrelli, Vincenzo Ferrara, Bruno Allard, Francois Buret, Irene Bavasso, Nicola Lovecchio, Francesca Costantini, Firas Khaled

Abstract:

This paper aims to show some applications of microbial fuel cells (MFCs), an energy harvesting technique, as a clean power source to supply low-power devices for applications such as wireless sensor networks (WSNs) for environmental monitoring. Furthermore, an MFC can be used directly as a biosensor to analyse parameters such as pH and temperature, or arranged in clusters to serve as a small power plant. An MFC is a bioreactor that converts energy stored in the chemical bonds of organic matter into electrical energy through a series of reactions catalysed by microorganisms. We have developed a lab-scale terrestrial microbial fuel cell (TMFC), based on soil that acts as the source of bacteria and flow of nutrients, and a lab-scale wastewater microbial fuel cell (WWMFC), where wastewater provides both the flow of nutrients and the bacteria. We performed a large series of tests to assess their capability as biosensors. The pH value has a strong influence on the open circuit voltage (OCV) delivered by TMFCs. We analysed three conditions: tests A and B were filled with the same soil but with the pH changed from 6 to 6.63, while test C was prepared using a different soil with a pH of 6.3. Experimental results clearly show that a higher pH value produces a higher OCV; the reactors are influenced by pH, with the voltage increasing as pH rises until the optimal value of 7 is reached. The influence of pH on the OCV of lab-scale WWMFCs was analysed at pH values of 6.5, 7, 7.2, 7.5 and 8. WWMFCs are more strongly influenced by temperature than TMFCs. We tested the power performance of WWMFCs at four imposed ambient temperatures. Results show that power output increases with temperature, doubling between 20 °C and 40 °C. The best power produced by our lab-scale TMFC was 310 μW using peaty soil with a 1 kΩ load, corresponding to a current of 0.5 mA. A TMFC can supply adequate energy to the low-power devices of a WSN by means of a three-stage energy management system, which adapts the TMFC voltage level to that required by a WSN node, such as 3.3 V. Using a commercial DC/DC boost converter that requires an input voltage of 700 mV, the 0.5 mA current source charges a 6.8 mF capacitor until it has accumulated a voltage of 700 mV, in a time of about 10 s. The output stage includes a switch that closes the circuit after 10 s + 1.5 ms, because the converter can boost the voltage from 0.7 V to 3.3 V in 1.5 ms. Furthermore, we tested clusters of up to 20 WWMFCs connected in series and obtained a high output voltage of around 10 V, but a low current. MFCs can be considered a suitable clean energy source to supply low-power devices such as WSN nodes, or to be used directly as biosensors.
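As a quick sanity check of the timing quoted above, the charge time of the storage capacitor follows from t = C·ΔV/I, assuming an approximately constant 0.5 mA source current (a minimal sketch, not part of the original work):

```python
# Back-of-the-envelope check of the energy-management timing described above.
# Component values are taken from the abstract; the constant-current assumption is ours.

C = 6.8e-3      # storage capacitor, farads
V_in = 0.7      # boost-converter start-up voltage, volts
I = 0.5e-3      # TMFC output current, amperes (assumed roughly constant)

t_charge = C * V_in / I          # time to reach 700 mV: ~9.5 s, consistent with the ~10 s quoted
E_stored = 0.5 * C * V_in ** 2   # energy banked in the capacitor, joules (~1.7 mJ)

print(f"charge time ≈ {t_charge:.1f} s, stored energy ≈ {E_stored * 1e3:.2f} mJ")
```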

Keywords: energy harvesting, low power electronics, microbial fuel cell, terrestrial microbial fuel cell, waste-water microbial fuel cell, wireless sensor network

Procedia PDF Downloads 207
374 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is defined as a particular learning difficulty with continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, an approach that sits uneasily with the heterogeneous nature of DD and may obscure critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting-state recording, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (with both subtest scores at 90 or below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions between areas), revealing the architectural organization of the network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children. Results showed that three of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. In sum, the present study was able to distill differences in the architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current “brain norm” established for the 45 children is tentative, the results from this study provide insights not only for future work on a “developmental brain norm” with reliable brain indicators but also for the viability of single-case methodology, which could be used to detect differential brain indicators in DD children for early detection and intervention.
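For readers unfamiliar with the single-case approach mentioned above, the sketch below illustrates the idea with the classic Crawford–Howell t-test, which compares one case against a small control sample; the paper itself cites Crawford et al.'s 2010 (Bayesian) variant, and the data here are simulated, not the study's.

```python
# Illustrative single-case comparison in the spirit of Crawford's method:
# the classic Crawford-Howell t-test treats the control group as a sample
# rather than a population. Shown only to convey the idea; values are made up.
import numpy as np
from scipy import stats

def crawford_howell_t(case_score, control_scores):
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)   # two-tailed p-value
    return t, p

# Hypothetical example: one DD child's nodal efficiency vs. 45 TD controls
rng = np.random.default_rng(0)
td_nodal_efficiency = rng.normal(0.55, 0.05, size=45)   # simulated control values
t, p = crawford_howell_t(0.42, td_nodal_efficiency)
print(f"t({td_nodal_efficiency.size - 1}) = {t:.2f}, p = {p:.3f}")
```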

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 53
373 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder

Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada

Abstract:

From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of the speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., ‘tokibu’, ‘tipolu’), with no pauses between them (e.g., ‘tokibutipolugopilatokibu’) and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented (‘tokibu’) from new sequences never presented together during exposure (‘kipopi’), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase and that might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and chronological-age-matched typical language development (TLD) controls, who were exposed to an auditory stream containing eight three-syllable nonsense words, four with high TPs and four with low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP ‘words’. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
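To make the transitional-probability statistic concrete, the following minimal sketch computes TP(B|A) = freq(AB)/freq(A) over a toy syllable stream (the syllables are illustrative, not the experiment's actual stimuli):

```python
# Minimal sketch of how transitional probabilities (TPs) are computed from a
# syllable stream: TP(B|A) = freq(AB) / freq(A).
from collections import Counter

stream = ["to", "ki", "bu", "ti", "po", "lu", "to", "ki", "bu",
          "go", "pi", "la", "ti", "po", "lu"]

unigrams = Counter(stream[:-1])                    # counts of the first syllable of each pair
bigrams = Counter(zip(stream[:-1], stream[1:]))    # counts of adjacent syllable pairs

def transitional_probability(a, b):
    return bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0

print(transitional_probability("to", "ki"))   # within-'word' pair  -> high TP (1.0 here)
print(transitional_probability("bu", "ti"))   # across a 'word' boundary -> lower TP (0.5 here)
```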

Keywords: development language disorder, statistical learning, transitional probabilities, word segmentation

Procedia PDF Downloads 188
372 Achieving Sustainable Development through Transformative Pedagogies in Universities

Authors: Eugene Allevato

Abstract:

Developing a responsible personal worldview is central to sustainable development, but achieving quality education to promote transformative learning for sustainability is, thus far, poorly understood. Most programs involving education for sustainable development rely on changing behavior rather than attitudes. The emphasis is on the scientific and utilitarian aspects of sustainability, with negligible importance given to the intrinsic value of nature. Campus sustainability projects include building sustainable gardens and implementing energy-efficient upgrades, instead of focusing on educating for sustainable development through exploration of students’ values and beliefs. Even though green technology adoption may be the right thing to do, most schools are not targeting the root cause of the environmental crisis; they are just providing palliative measures. This study explores the under-examined factors that lead to pro-environmental behavior by investigating the environmental perceptions of both college business students and personnel of green organizations. A mixed research approach was developed, combining qualitative methods based on structured interviews with quantitative instruments, and included interviews with 30 college-level students and 40 green-organization staff members involved in sustainable activities. The interviews were tape-recorded and transcribed for analysis. Responses to the open-ended questions were categorized with the purpose of identifying the main types of factors influencing attitudes and correlating them with behaviors. Overall, the findings of this study indicated a lack of appreciation for nature and an inability to understand interconnectedness and apply critical thinking. The results of the survey conducted on undergraduate students indicated that the responses of business and liberal arts students differed significantly (independent t-test, p = 0.03). While liberal arts students showed an understanding of human interdependence with nature and its delicate balance, business students seemed to believe that humans were meant to rule over the rest of nature. This result is quite intriguing given that business students will be defining markets, influencing society, and controlling and managing businesses that, in the face of climate change, are supposed to implement sustainable activities. These alarming results led to the focus on green businesses in order to better understand their motivation to engage in sustainable activities. Additionally, a probit model revealed that childhood exposure to nature has a significantly positive impact on pro-environmental attitudes across most of the New Ecological Paradigm scales. Based on these findings, this paper draws on educators including Socrates, John Dewey, and Paulo Freire in discussing the implementation of eco-pedagogy and transformative learning, following a curriculum with an emphasis on critical and systems thinking, which are deemed key ingredients of quality education for sustainable development.
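For illustration of the kind of probit analysis reported above, the following sketch fits a binary-outcome probit model with statsmodels; the variable names and simulated data are hypothetical stand-ins for the survey items, which are not specified in the abstract:

```python
# Sketch of a probit model relating childhood exposure to nature to a binary
# pro-environmental attitude item. Column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 70
df = pd.DataFrame({
    "childhood_nature": rng.integers(0, 2, n),   # 1 = regular childhood exposure to nature (assumed coding)
    "business_major": rng.integers(0, 2, n),     # 1 = business student (assumed coding)
})
# Simulated binary outcome: agreement with a New Ecological Paradigm item
latent = 0.9 * df["childhood_nature"] - 0.4 * df["business_major"] + rng.normal(size=n)
df["nep_agree"] = (latent > 0).astype(int)

X = sm.add_constant(df[["childhood_nature", "business_major"]])
probit = sm.Probit(df["nep_agree"], X).fit(disp=False)
print(probit.summary())
```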

Keywords: eco-pedagogy, environmental behavior, quality education for sustainable development, transformative learning

Procedia PDF Downloads 312
371 Environmental Impact of a New-Build Educational Building in England: Life-Cycle Assessment as a Method to Calculate Whole Life Carbon Emissions

Authors: Monkiz Khasreen

Abstract:

In the context of the global trend towards reducing the carbon footprint of new buildings, the design team is required to make early decisions that have a major influence on embodied and operational carbon. Sustainability strategies should be clear during the early stages of the building design process, as changes made later can be extremely costly. Life-Cycle Assessment (LCA) could be used as the vehicle to carry other tools and processes towards achieving the requested improvement. Although LCA is the ‘gold standard’ for evaluating buildings from cradle to grave, the lack of detail available at the concept design stage makes LCA very difficult, if not impossible, to use as an estimation tool at early stages. Issues related to transparency and accessibility of information in the building industry affect the credibility of LCA studies. A verified database derived from LCA case studies needs to be accessible to researchers, design professionals, and decision makers in order to offer guidance on specific areas of significant impact. This database could be built up from data from multiple sources within a pool of research held in this context. One of the most important factors affecting the reliability of such data is the temporal factor, as building materials, components, and systems change rapidly with advancing technology, making production more efficient and less environmentally harmful. Recent LCA studies on different building functions, types, and structures are therefore always needed to update research-derived databases and to form case bases for comparison studies. There is also a need to make these studies transparent and accessible to designers. The work in this paper sets out to address this need. The paper presents a life-cycle case study of a new-build educational building in England. The building used very current construction methods and technologies and is rated BREEAM Excellent. Carbon emissions of different life-cycle stages and different building materials and components were modelled. Scenario and sensitivity analyses were used to estimate the future of new educational buildings in England. The study attempts to form an indicator for use during the early design stages of similar buildings. The carbon dioxide emissions of this case study building, when normalised by floor area, lie towards the lower end of the range of worldwide data reported in the literature. Sensitivity analysis shows that life-cycle assessment results are highly sensitive to assumptions made at the design stage, such as future changes in the electricity generation mix over time, refurbishment processes, and recycling. The analyses also prove that large savings in carbon dioxide emissions can result from very small changes at the design stage.
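As an illustration of the normalisation used above (whole-life carbon per square metre of floor area), the following minimal sketch aggregates emissions across life-cycle stages; all figures are hypothetical placeholders rather than results from the case-study building:

```python
# Toy aggregation of whole-life carbon, normalised by floor area.
# All numbers below are assumed placeholders, not the paper's data.
EMBODIED = {                      # kgCO2e, cradle-to-practical-completion (assumed)
    "product_stage": 450_000,
    "construction": 60_000,
}
OPERATIONAL_PER_YEAR = 35_000     # kgCO2e/year from operational energy use (assumed)
END_OF_LIFE = 25_000              # kgCO2e for demolition and disposal (assumed)
STUDY_PERIOD_YEARS = 60
FLOOR_AREA_M2 = 3_000

whole_life = (sum(EMBODIED.values())
              + OPERATIONAL_PER_YEAR * STUDY_PERIOD_YEARS
              + END_OF_LIFE)
print(f"whole-life carbon ≈ {whole_life / FLOOR_AREA_M2:.0f} kgCO2e per m2 of floor area")
```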

Keywords: architecture, building, carbon dioxide, construction, educational buildings, England, environmental impact, life-cycle assessment

Procedia PDF Downloads 113
370 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion

Authors: Ali Kadir, O. Anwar Beg

Abstract:

Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures, rocket chambers, etc. Many different materials are available for such coatings, of which ceramics generally perform the best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a 3-dimensional, 3-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the main inner layer, titanium (Ti) alloy for the middle layer and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width), 10 mm (height) and three 1 mm thick layers. ANSYS software is employed to conduct three types of analysis: static structural analysis, thermal stress analysis, and computational fluid dynamics erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to corrosion test samples, is discretized using a body-sizing meshing approach comprising mainly tetrahedral cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study is conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios, static loads of up to 1000 N and thermal environment conditions of up to 1000 K are imposed in the stress analysis simulations. The default solver was used to set the controls for the simulation, with one side of the model set as a fixed support while the opposite side was subjected to a tabular force of 500 and 1000 N. Equivalent elastic strain, total deformation, equivalent stress and strain energy were computed for all cases. Each analysis was repeated with one of the layers removed at a time, to allow testing of the static and thermal effects with each of the coatings. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to better represent the thermal physics of heat transfer between the layers of the structure. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows the injection of a continuous stream of uniform air particles onto the model, thereby enabling calculation of the corrosion factor caused by hot air injection (particles prescribed a velocity of 5 m/s and a temperature of 1273.15 K). Extensive visualization of results is provided. The simulations reveal interesting features associated with coating response to realistic gas turbine loading conditions, including significantly different stress concentrations with different coatings.
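As a rough companion to the FE results described above, the classic constrained-layer approximation σ = EαΔT/(1 − ν) gives order-of-magnitude thermal stresses for each coating material; the sketch below uses typical handbook property values, which are assumptions rather than the paper's inputs:

```python
# Back-of-the-envelope thermal stress in a fully constrained layer,
# sigma = E * alpha * dT / (1 - nu). A textbook estimate, not the FE model
# used in the paper; material properties are assumed handbook values.
layers = {
    # name: (Young's modulus [Pa], CTE [1/K], Poisson's ratio) -- assumed values
    "structural steel": (200e9, 12e-6, 0.30),
    "Ti alloy":         (110e9,  9e-6, 0.34),
    "SiC":              (410e9,  4e-6, 0.14),
}
dT = 1000.0 - 293.0   # temperature rise from ambient to ~1000 K

for name, (E, alpha, nu) in layers.items():
    sigma = E * alpha * dT / (1.0 - nu)
    print(f"{name:16s}: ~{sigma / 1e6:.0f} MPa")
```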

Keywords: thermal coating, corrosion, ANSYS FEA, CFD

Procedia PDF Downloads 135
369 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
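To convey the baseline idea behind radio-map fingerprinting, the sketch below trains a k-nearest-neighbour regressor on synthetic received-signal-strength fingerprints; it does not reproduce the paper's S-DCGAN radio-map construction or t-SNE feature extraction, and all data are simulated:

```python
# Minimal fingerprint-localization baseline: map RSS vectors to 2-D positions
# with k-nearest neighbours. Synthetic data generated from a simple
# log-distance path-loss model with noise (assumed, for illustration only).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)
n_ref, n_aps = 200, 6
positions = rng.uniform(0, 50, size=(n_ref, 2))     # survey points, metres
ap_xy = rng.uniform(0, 50, size=(n_aps, 2))         # access-point locations
dists = np.linalg.norm(positions[:, None, :] - ap_xy[None], axis=2)
rss = -30 - 30 * np.log10(dists + 1) + rng.normal(0, 2, dists.shape)

model = KNeighborsRegressor(n_neighbors=3).fit(rss, positions)

test_pos = np.array([[12.0, 34.0]])
test_rss = -30 - 30 * np.log10(np.linalg.norm(test_pos[:, None] - ap_xy[None], axis=2) + 1)
print("estimated position:", model.predict(test_rss)[0])   # should land near (12, 34)
```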

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 42
368 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding

Authors: Ines Oliveira, Ana Reis

Abstract:

Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed and low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous when compared to other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology and structure of the weld interface in MPW of the Al/Cu dissimilar pair were investigated. The effects of process parameters, namely gap, standoff distance and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in the weld of the Al/Cu dissimilar pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and the composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It is also found that lower pulse energies are preferred. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating is generated. In addition, more energy means higher flyer velocity; the air in the gap between the parts to be welded is expelled, and the aerodynamic drag (fluid friction), which is proportional to the square of the velocity, further contributes to the generation of heat. As the kinetic energy also increases with the square of the velocity, the dissipation of this energy through plastic work and jet generation also contributes to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. Bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel’s surface conditions. No correlation was clearly identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.
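The velocity argument above rests on two textbook scaling relations (written out here for reference; they are standard forms, not equations taken from the paper):

```latex
\begin{align}
  F_{d} &= \tfrac{1}{2}\,\rho\,C_{d}\,A\,v^{2} \\
  E_{k} &= \tfrac{1}{2}\,m\,v^{2}
\end{align}
```

Since both the drag force on the flyer and the kinetic energy dissipated at impact grow with the square of the flyer velocity, a modest increase in pulse energy simultaneously raises drag heating and the energy released through plastic work and jetting, which is consistent with the preference for lower pulse energies.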

Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation

Procedia PDF Downloads 211
367 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity

Authors: Panagiotis Roupas, Yota Passia

Abstract:

This paper aims at establishing an index of design mechanisms - immanent in spatial objects - based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded as systems composed of interacting parts, within the premises of assemblage theory, their ability to affect and to be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. As spatial structures are theorized as assemblages - composed of heterogeneous elements that enter into relations with one another - and since all assemblages are parts of larger assemblages, their components' ability to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map an architectural assemblage's affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter the assemblage. Material components refer to those material elements that an assemblage requires in order to exist, while expressive components include non-linguistic elements (sense impressions) as well as linguistic ones (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, which are the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, effects, drives, and emotions. A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, an affective mechanisms index has been created, where each a-sign is connected with the list of effects it triggers, which thoroughly defines it. And vice versa, the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected on the basis of the general categories of form, structure and surface. Thus, the degrees of contingency of different parts are evaluated and measured, and finally, a-signs are introduced as material information that is immanent in the spatial object while conferring no meaning; they only convey information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while at the same time establishing the mechanism to measure its continuous transformation.

Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual

Procedia PDF Downloads 127
366 Impact of Blended Learning in Interior Architecture Programs in Academia: A Case Study of Arcora Garage Academy from Turkey

Authors: Arzu Firlarer, Duygu Gocmen, Gokhan Uysal

Abstract:

There is currently a growing trend among universities towards blended learning. Blended learning is becoming increasingly important in higher education, with the aims of better accomplishing course learning objectives, meeting students' changing needs and promoting effective learning in both the theoretical and practical dimensions of a discipline such as interior architecture. However, the practical dimension of the discipline cannot be fully supported in the university environment. During the undergraduate program, the practical training, which two different internship programs attempt to support, cannot fully meet the requirements of blended learning. The shortcoming of the education program most frequently expressed by our graduates and employers concerns the practical knowledge and skills dimension of the profession. After a series of curriculum meetings, interviews with professional chambers, and meetings with interior architects, the gap between theoretical and practical training modules is seen as a problem in all interior architecture departments. It is thought that this gap can be closed by a new education model formed through university-industry cooperation within the concept of blended learning. In this context, it is considered that the accumulation of theoretical and applied knowledge can be provided by creating industry-supported educational environments at the university. In the application process of the interior architecture discipline, the use of materials and technical competence will only be possible with the cooperation of industry and the participation of students in production/manufacturing processes as observers and practitioners. Wood manufacturing is an important part of interior architecture applications. Wood production is a sustainable structural process where production details, material knowledge, and process details can be observed in the most effective way. From this point of view, after theoretical training about wooden materials, wood applications and production processes is given to the students, practical training in production/manufacturing planning is supported by active participation in and observation of the processes. With this blended model, we aimed to develop a training model in which theoretical and practical knowledge related to the production of wood works is conveyed in a meaningful, lasting way by means of university-industry cooperation. The project is carried out in Ankara with Arcora Architecture and Furniture Company and Başkent University Department of Interior Design, where university-industry cooperation is realized. Within the scope of the project, each week's lecture is video-recorded and prepared for dissemination through digital media such as Udemy. In this sense, the program is developed not only for the project participants but also for other institutions and people who are trained and practise in the field of design. Both academics from the university and craftsmen with at least 15 years of experience in the wood, metal and dye sectors are preparing new training reference documents for interior architecture undergraduate programs. These reference documents will be a model for the interior architecture departments of other universities and will be used to create an online education module.

Keywords: blended learning, interior design, sustainable training, effective learning

Procedia PDF Downloads 136
365 An Introspective Look into Hotel Employees' Career Satisfaction

Authors: Anastasios Zopiatis, Antonis L. Theocharous

Abstract:

In the midst of a fierce war for talent, the hospitality industry is seeking new and innovative ways to enrich its image as an employer of choice rather than of necessity. Historically, the industry's professions have been portrayed as ‘unattractive’ due to their repetitious nature, long and unsocial working schedules, below-average remuneration, and the mental and physical demands of the job. In alignment with the industry, hospitality and tourism scholars have embarked on a journey to investigate pertinent topics with the aim of enhancing our conceptual understanding of the elements that influence employees in the hospitality world of work. Topics such as job involvement, commitment, job and career satisfaction, and turnover intentions became the focal points of a multitude of relevant empirical and conceptual investigations. Nevertheless, gaps or inconsistencies in existing theories, resulting both from the volatile complexity of the relationships governing human behavior in the hospitality workplace and from the academic community's unopposed acceptance of theoretical frameworks mainly propounded in the United States and the United Kingdom years ago, necessitate our continuous vigilance. Thus, in an effort to enhance and enrich the discourse, we set out to investigate the relationship between intrinsic and extrinsic job satisfaction traits and the individual's career satisfaction and subsequent intention to remain in the hospitality industry. Reflecting on the existing literature, a quantitative survey was developed and administered, face-to-face, to 650 full-time employees in 4- and 5-star hotel establishments in Cyprus, and a multivariate statistical analysis method, namely Structural Equation Modeling (SEM), was utilized to determine whether relationships existed between constructs as a means to either accept or reject the hypothesized theory. The findings, of interest to both industry stakeholders and academic scholars, suggest that the individual's future intention to remain within the industry is primarily associated with extrinsic job traits. Our findings revealed that positive associations exist between extrinsic job traits and both career satisfaction and future intention. In contrast, when investigating the relationship of intrinsic traits, a positive association was revealed only with career satisfaction. Apparently, the local industry's environmental factors of seasonality, excessive turnover, and overdependence on seasonal and part-time migrant workers prevent industry stakeholders from effectively investing the time and resources needed for the development and professional growth of their employees. Consequently, intrinsic job satisfaction factors such as advancement, growth, and achievement take a back seat to the more materialistic extrinsic factors. Findings from the subsequent mediation analysis support the notion that intrinsic traits can positively influence future intentions only indirectly, through career satisfaction, whereas extrinsic traits can positively impact both career satisfaction and future intention both directly and indirectly.
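The hypothesized structure described above can be written compactly in lavaan-style SEM syntax; the sketch below uses the Python package semopy with hypothetical indicator names and simulated data, since the actual questionnaire items are not given in the abstract:

```python
# Sketch of the SEM structure (intrinsic/extrinsic traits -> career satisfaction
# -> intention to remain). Indicator names and data are invented for illustration.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(3)
n = 650
intrinsic = rng.normal(size=n)
extrinsic = rng.normal(size=n)
career = 0.3 * intrinsic + 0.6 * extrinsic + rng.normal(scale=0.5, size=n)
intention = 0.5 * extrinsic + 0.4 * career + rng.normal(scale=0.5, size=n)

def indicators(latent, prefix, k=3):
    # three noisy observed indicators per latent construct (assumed structure)
    return {f"{prefix}{i + 1}": latent + rng.normal(scale=0.4, size=n) for i in range(k)}

df = pd.DataFrame({**indicators(intrinsic, "int"), **indicators(extrinsic, "ext"),
                   **indicators(career, "cs"), "intention": intention})

spec = """
intrinsic =~ int1 + int2 + int3
extrinsic =~ ext1 + ext2 + ext3
career_sat =~ cs1 + cs2 + cs3
career_sat ~ intrinsic + extrinsic
intention ~ career_sat + extrinsic
"""
model = Model(spec)
model.fit(df)
print(model.inspect())
```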

Keywords: career satisfaction, Cyprus, hotel employees, structural equation modeling, SEM

Procedia PDF Downloads 287
364 Iron Oxide Reduction Using Solar Concentration and Carbon-Free Reducers

Authors: Bastien Sanglard, Simon Cayez, Guillaume Viau, Thomas Blon, Julian Carrey, Sébastien Lachaize

Abstract:

The need to develop clean production processes is a key challenge for any industry. The steel and iron industries are particularly concerned, since they emit 6.8% of global anthropogenic greenhouse gas emissions. One key step of the process is the high-temperature reduction of iron ore using coke, leading to large amounts of CO2 emissions. One route to decrease these impacts is to eliminate fossil fuels by changing both the heat source and the reducer. The present work aims at investigating experimentally the possibility of using concentrated solar energy and carbon-free reducing agents. Two sets of experiments were carried out. First, in situ X-ray diffraction was performed on pure and industrial hematite powders to study the phase evolution as a function of temperature during reduction under hydrogen and ammonia. Second, experiments were performed on industrial iron ore pellets, which were reduced by NH3 or H2 in a “solar furnace” composed of a controllable 1600 W xenon lamp, simulating and controlling the concentrated solar irradiation of a glass reactor, and a diaphragm to control the light flux. Temperature and pressure were recorded during each experiment via thermocouples and pressure sensors. The percentage of iron oxide converted to iron (hereafter called the “reduction ratio”) was determined through Rietveld refinement. The power of the light source and the reduction time were varied. Results obtained in the diffractometer reaction chamber show that iron begins to form at 300°C with pure Fe2O3 powder and at 400°C with industrial iron ore when maintained at these temperatures for 60 minutes and 80 minutes, respectively. Magnetite and wuestite are detected in both powders during reduction under hydrogen; under ammonia, iron nitride is also detected at temperatures between 400°C and 600°C. All the iron oxide was converted to iron after a reaction of 60 min at 500°C, whereas a conversion ratio of 96% was reached with the industrial powder after a reaction of 240 min at 600°C under hydrogen. Under ammonia, full conversion was also reached after 240 min of reduction at 600°C. For the solar furnace experiments with iron ore pellets, the lamp power and the shutter opening were varied. An 83.2% conversion ratio was obtained with a light power of 67 W/cm2 without turning over the pellets. Under the same conditions, turning over the pellets in the middle of the experiment allowed a conversion ratio of 86.4% to be reached. A reduction ratio of 95% was reached with an exposure of 16 min, turning the pellets over at half time, with a flux of 169 W/cm2. Similar or slightly better results were obtained under an ammonia reducing atmosphere. Under the same flux, the highest reduction yield of 97.3% was obtained under ammonia after 28 minutes of exposure. The chemical reaction itself, including the solar heat source, does not produce any greenhouse gases, so solar metallurgy represents a serious way to reduce the greenhouse gas emissions of the metallurgical industry. Nevertheless, the ecological impact of the reducers must be investigated, which will be done in future work.
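The overall reactions implied by the two carbon-free reducers are the standard ones written below (textbook stoichiometry, included here for reference rather than quoted from the paper):

```latex
\begin{align}
  \mathrm{Fe_2O_3} + 3\,\mathrm{H_2}  &\rightarrow 2\,\mathrm{Fe} + 3\,\mathrm{H_2O} \\
  \mathrm{Fe_2O_3} + 2\,\mathrm{NH_3} &\rightarrow 2\,\mathrm{Fe} + 3\,\mathrm{H_2O} + \mathrm{N_2}
\end{align}
```

The magnetite and wuestite detected by in situ XRD correspond to the usual stepwise reduction path Fe2O3 → Fe3O4 → FeO → Fe, while the iron nitride observed under ammonia between 400°C and 600°C points to a parallel nitriding reaction.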

Keywords: solar concentration, metallurgy, ammonia, hydrogen, sustainability

Procedia PDF Downloads 138
361 Installation of an Inflatable Bladder and Sill Walls for Riverbank Erosion Protection and Improved Water Intake Zone, Smoky Hill River – Salina, Kansas

Authors: Jeffrey A. Humenik

Abstract:

Environmental, Limited Liability Corporation (EMR) provided civil construction services to the U.S. Army Corps of Engineers, Kansas City District, for the placement of a protective riprap blanket on the west bank of the Smoky Hill River, the construction of two shore abutments, and the construction of a 140-foot-long sill wall spanning the Smoky Hill River in Salina, Kansas. The purpose of the project was to protect the riverbank from erosion and to hold back water to a specified elevation, creating a pool to ensure adequate water intake for the municipal water supply. Geotextile matting and riprap were installed for streambank erosion protection. An inflatable bladder (AquaDam®) was designed to the specific river dimensions and installed to divert the river and allow for dewatering during the construction of the sill walls and cofferdam. The AquaDam® consists of water-filled polyethylene tubes that create water barriers to divert flow or prevent flooding. A challenge of the project was the fact that 100% of the sill wall was constructed within an active river channel. The threat of flooding of the work area, damage to the AquaDam® by debris, and the potential difficulty of water removal presented a unique set of challenges to the construction team. Upon completion of the west sill wall, floating debris punctured the AquaDam®. The manufacture and delivery of a new AquaDam® would have delayed project completion by at least 6 weeks. To keep the project ahead of schedule, the decision was made to construct an earthen cofferdam reinforced with riprap for the construction of the east abutment and east sill wall section. During construction of the west sill wall section, a deep scour hole was encountered in the wall alignment that prevented EMR from using the natural rock formation as a concrete form for the lower section of the sill wall. A formwork system was constructed that allowed the west sill wall section to be placed in two horizontal lifts of concrete poured on separate occasions. The first lift was poured to fill in the scour hole and act as a footing for the second lift. Concrete wall forms were set on the first lift and anchored to the surrounding riverbed in such a manner that the second lift could be poured in a similar fashion to a basement wall. EMR's timely decisions to keep the project moving toward completion in the face of changing conditions enabled project completion two (2) months ahead of schedule. The use of inflatable bladders is an effective and cost-efficient technology for diverting river flow during construction. However, a secondary plan should be part of the project design in the event that debris transported by the river punctures or damages the bladders.

Keywords: abutment, AquaDam®, riverbed, scour

Procedia PDF Downloads 155
362 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

A large amount of space debris nowadays constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help clear the Earth's orbit after each small satellite's mission. After 4 years of development, a motorless, low-energy-consumption and low-weight system has been created. During a series of tests, the system has shown highly reliable performance. PW-Sat2's deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests, and is placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power and is based on a thermal knife that burns through the Dyneema wire which holds the system before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which unfold the sail surface during release. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge about the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low-pressure conditions at the Bremen Drop Tower, Germany. The results of those tests will provide ultimate and wide knowledge about deployment in the space environment to which the system will be exposed during its mission. The outcomes of the numerical model and the tests will be compared afterwards and will help the team in building a reliable and correct model of the very complex phenomenon of the deployment of four C-shaped flat springs with a surface attached. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far enlargement can go when creating systems for bigger satellites.

Keywords: cubesat, deorbitation, sail, space debris

Procedia PDF Downloads 291
361 Experiences of Youth in Learning About Healthy Intimate Relationships: An Institutional Ethnography

Authors: Anum Rafiq

Abstract:

Adolescence is a vulnerable period for youth across the world. It is a period of new learning, with opportunities to understand and develop perspectives on health and well-being. With youth beginning to engage in intimate relationships at an earlier age in the 21st century, concentrating on the learning opportunity they have in school is paramount. The nature of what has been deemed important to teach in schools has changed throughout history, and the focus has shifted from home/family skills to teaching youth how to be competitive in the job market. Amidst this emphasis, opportunities exist for them to learn about building healthy intimate relationships, one of the foundational elements of most people's lives. Using an Institutional Ethnography (IE), the lived experiences of youth, in how they understand intimate relationships and how their learning experience is organized through the high school Health and Physical Education (H&PE) course, are explored. An empirical inquiry is provided into how the actual work of teachers and youth is socially organized by a biomedical, employment-related, and efficiency-based discourse. Thirty-two qualitative interviews with teachers and youth reveal the control that ruling relations such as institutional accountability circuits, performance reports, and timetabling exert over the experience of teachers and youth. One facet of the institutional accountability circuit is the framing of the teaching and learning of healthy intimate relationships through a biomedical discourse. In addition, a hyper-focus on performance and evaluation is found to be paramount in situating healthy-intimacy discussions as inferior to neoliberally charged productivity measures such as employment skills. Lastly, due to the nature of institutional policies such as regulatory guidelines, teachers are largely influenced to avoid diving into discussions deemed risky or taboo by society, such as healthy intimacy in adolescence. The findings show how texts such as the H&PE curriculum, the Ontario College of Teachers (OCT) guidelines, Ministry of Education performance reports, and the timetable organize the day-to-day activities of teachers and students and reproduce different disjunctures for youth. These disjunctures include the subordination of some of their experiences, difficulty relating to the curriculum, and the experience of healthy-living discussions being skimmed over across sites. The findings detail that the experience of youth in learning about healthy intimate relationships is not akin to the espoused vision outlined in policy documents such as the H&PE (2015) curriculum policy. These findings have implications for policymakers, activists, and school administrations alike, and call for an investigation into who holds power when it comes to youth's learning needs, as a pivotal period in which youth can be equipped with life-changing knowledge is largely underutilized. A restructuring of existing institutional practices is required, one that allows for the social and institutional flexibility needed to broach the topic of healthy intimacy in a comprehensive manner.

Keywords: health policy, intimate relationships, youth, education, ruling relations, sexual education, violence prevention

Procedia PDF Downloads 71
360 The Role of Emotional Intelligence in the Manager's Psychophysiological Activity during a Performance-Review Discussion

Authors: Mikko Salminen, Niklas Ravaja

Abstract:

Emotional intelligence (EI) consists of skills for monitoring one's own emotions and the emotions of others, skills for discriminating between different emotions, and skills for using this information in thinking and action. EI enhances, for example, work outcomes and organizational climate. We suggest that the role and manifestations of EI should also be studied in real leadership situations, especially during emotional, social interaction. Leadership is essentially a process of influencing others to reach a certain goal. This influencing happens through managerial processes and computer-mediated communication (e.g., e-mail), but also face-to-face, where facial expressions have a significant role in conveying emotional information. Persons with high EI are typically perceived more positively, and they have better social skills. We hypothesize that, during social interaction, high EI enhances the ability to detect others' emotional states and to control one's own emotional expressions. We suggest that emotionally intelligent leaders experience less stress during social leadership situations, since they have better skills for dealing with the related emotional work. Thus, high-EI leaders would be more able to enjoy these situations, but also more efficient in choosing appropriate expressions for building constructive dialogue. We suggest that emotionally intelligent leaders show more positive emotional expressions than low-EI leaders. To study these hypotheses, we observed the performance review discussions of 40 leaders (24 female) with 78 (45 female) of their followers. Each leader held a discussion with two followers. Psychophysiological methods were chosen because they provide objective and continuous data for the whole duration of the discussions. We recorded sweating of the hands (electrodermal activation) with electrodes placed on the fingers of the non-dominant hand to assess the stress-related physiological arousal of the leaders. In addition, facial electromyography was recorded from the cheek (zygomaticus major, activated during e.g. smiling) and periocular (orbicularis oculi, activated during smiling) muscles using electrode pairs placed on the left side of the face. Leaders' trait EI was measured with a 360-degree questionnaire, filled in by each leader's followers, peers, and managers, and by the leaders themselves. High-EI leaders had less sweating of the hands (p = .007) than low-EI leaders. It is thus suggested that the high-EI leaders experienced less physiological stress during the discussions. Also, high scores on the factor 'Using of emotions' were related to more facial muscle activation indicating positive emotional expressions (cheek muscle: p = .048; periocular muscle: p = .076, almost statistically significant). The results imply that emotionally intelligent managers are positively relaxed during social leadership situations such as a performance review discussion. The current study also highlights the importance of EI in face-to-face social interaction, given the central role facial expressions have in interaction situations. The study also offers new insight into the biological basis of trait EI. It is suggested that identifying, forming, and intelligently using facial expressions are skills that could be trained during leadership development courses.

Keywords: emotional intelligence, leadership, performance review discussion, psychophysiology, social interaction

Procedia PDF Downloads 245
359 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
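As one concrete illustration of the fraud-detection use case described above, the sketch below flags anomalous transactions with an Isolation Forest; the features and data are hypothetical, and a production system would combine such scores with richer features and human review:

```python
# Toy transaction anomaly detection with an Isolation Forest.
# Features and data are invented for illustration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
n = 1000
transactions = pd.DataFrame({
    "amount": rng.lognormal(mean=4.0, sigma=0.6, size=n),   # typical invoice amounts
    "hour_of_day": rng.integers(8, 18, size=n),             # mostly business hours
    "days_to_due_date": rng.integers(0, 60, size=n),
})
# Inject a few suspicious records: very large amounts posted at odd hours
transactions.loc[:4, ["amount", "hour_of_day"]] = [[25_000, 3]] * 5

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
transactions["flagged"] = model.predict(transactions) == -1   # -1 marks anomalies
print(transactions[transactions["flagged"]].head())
```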

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 71
358 Digital Subsistence of Cultural Heritage: Digital Media as a New Dimension of Cultural Ecology

Authors: Dan Luo

Abstract:

As climate change exacerbates the exposure of cultural heritage to climatic stressors, scholars pin their hopes on digital technology to help sites avoid irreversible surprises. The virtual museum has been regarded as a highly effective technology that offers an enjoyable visiting experience and immersive information about cultural heritage. The technology faithfully reproduces images of tangible cultural heritage, and the aesthetic experience created by new media helps audiences escape from a real environment full of uncertainty. A new cultural anchor has thus appeared outside the cultural sites. This article synthesizes the international literature on the virtual museum using CiteSpace diagrams, focusing on tangible cultural heritage and on the alarming situation that has emerged in the effort to address climate change: (1) digital collections constitute a distinct class of cultural assets for the public; (2) the media ecology changes the ways people think about and encounter cultural heritage; (3) cultural heritage may live on indefinitely in the digital world. The article draws on a representative case of managing cultural heritage in a changing climate: the Dunhuang Mogao Grottoes in the far northwest of China, a World Cultural Heritage site listed by UNESCO and famous for its remarkable and sumptuous murals. The monument is a synthesis of art comprising 735 Buddhist cave temples. The caves contain extraordinary examples of Buddhist art spanning a period of 1,000 years; the architectural forms, the sculptures in the caves, and the murals on the walls together constitute a wonderful aesthetic experience. Unfortunately, this magnificent treasure has been threatened by increasingly frequent dust storms and precipitation. The Dunhuang Academy has been using digital technology since the last century to preserve this immovable heritage, especially the murals in the caves. Dunhuang culture has since become a new media culture, introduced to audiences worldwide through exhibitions, VR, video, and other channels. The paper adopts a qualitative research method, using NVivo software to code the collected material. The author conducted fieldwork in Dunhuang City and participated in 10 exhibitions and 20 Dunhuang-themed online salons. In addition, 308 visitors (aged 6-75) who are fans of the art and have experienced Dunhuang culture online were interviewed. These interviewees have been exposed to Dunhuang culture through different media, and they are acutely aware of the threat to this cultural heritage. The conclusion is that the unique aura of the cultural heritage is consistently emphasized, and that digital media breeds digital twins of cultural heritage. In addition, digital media make it possible for cultural heritage to be reintegrated into the daily life of the masses. Visitors gain the opportunity to imitate the mural figures through enlarged or emphasized images, but they also lose the perspective needed to understand the whole of the cultural life behind them. New media construct a new aesthetics of everyday life apart from the authorized heritage discourse.

Keywords: cultural ecology, digital twins, life aesthetics, media

Procedia PDF Downloads 81
357 Succinct Perspective on the Implications of Intellectual Property Rights and 3rd Generation Partnership Project in the Rapidly Evolving Telecommunication Industry

Authors: Arnesh Vijay

Abstract:

Ever since its introduction in the late 1980s, the mobile industry has been evolving rapidly with each passing year. The development witnessed lies not only in its ability to support diverse applications but also in its extension into diverse technological means to access and offer various services to users. Amongst the various technologies present, radio systems have clearly emerged as a strong contender, due to their fine attributes of accessibility, reachability, interactivity, and cost efficiency. These advancements have no doubt brought unprecedented ease, utility and sophistication to cell phone users, but they have also caused uncertainty, because the interdependence of various systems makes it extremely complicated to map concepts exactly onto 3GPP (3rd Generation Partnership Project) standards. Although the close interrelation and interdependence of intellectual property rights and mobile standard specifications have been widely acknowledged by the technical and legal communities, there is, however, a need for a clear distinction between the scope and future-proofing of inventions intended to influence standards and their marketplace adoptability. For this, collaborative work is required between intellectual property professionals, researchers, standardization specialists and country-specific legal experts. With the evolution toward next-generation mobile technology, i.e., 5G systems, the need for further work in this field is felt now more than ever before. Along these lines, this poster will briefly describe the importance of intellectual property rights in the European market. More specifically, it will analyse the role played by intellectual property in various standardization institutes, such as 3GPP (3rd Generation Partnership Project) and the ITU (International Telecommunication Union). The main intention is to ensure that the scope and purpose are well defined and that the concerned parties on all sides are well informed of the significance of good proposals: those which not only bring economic revenue to the company but are also capable of improving the technology and offering better services to mankind. The poster comprises different sections. The first segment begins with a background on the rapidly evolving mobile technology, with a brief insight into the industrial impact of standards and their relation to intellectual property rights. Next, the second section succinctly outlines the interplay between patents and standards, explicitly discussing the ever-changing and rapidly evolving relationship between the two sectors. The remaining sections then examine the ITU and the role it plays in international standards development, touching upon the various standardization processes and the common patent policies and related guidelines. Finally, the poster proposes ways to improve the collaboration amongst the various sectors for a more evolved and sophisticated next-generation mobile telecommunication system. The sole purpose here is to discuss methods to reduce the gap and enhance the exchange of information between the two sectors in order to offer advanced technologies and services to mankind.

Keywords: mobile technology, mobile standards, intellectual property rights, 3GPP

Procedia PDF Downloads 127
356 TRAC: A Software Based New Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is worthwhile to develop another software-based track circuit system which would fit secondary railway lines, with an easy-to-implement design and a low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits.' To be implemented internationally, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the components used in Germany called 'Counting Axles' (axle counters), in French 'compteur d’essieux.' This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with a space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal. The set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus, the failure probability of track section separation is precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails. This is the reason why we developed a very powerful algorithm to reject noise and jamming in order to obtain an SNR compatible with the precision required for the track circuit and with the SIL 4 level. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: (i) train localization in space is precisely defined by a calibration system; the operation bypasses the GSM-R radio system of the ERTMS, and the track circuit is naturally protected against radio-type jammers; after the calibration operation, the track circuit is autonomous; (ii) a mathematical topology adapted to train localization in space, following the train through a linear time filtering of the received signal; track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter. Sensitivity analyses of the rail-ground and rail-wheel impedances gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under an SNCF contract, and required five years to obtain these results. This track circuit is already at Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. The traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
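To make the detection principle concrete, below is a minimal numerical sketch (not the authors' implementation): the receiver correlates the rail signal against the orthogonal carrier assigned to each track section and selects the strongest projection. The sampling rate, carrier frequencies and noise level are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: identify which track section is emitting by projecting
# the received rail signal onto the orthogonal sinusoids assigned to the
# sections. Frequencies, sampling rate and noise level are made-up values.
FS = 10_000.0                                  # sampling rate, Hz (assumed)
SECTION_FREQS = [500.0, 625.0, 750.0, 875.0]   # one carrier per section (assumed)
DURATION = 0.2                                 # observation window, s

def detect_section(received, fs=FS, freqs=SECTION_FREQS):
    """Return (index of the most likely section, per-section correlation scores)."""
    t = np.arange(len(received)) / fs
    scores = []
    for f in freqs:
        # Project onto sine and cosine so the score is phase-independent.
        i = np.dot(received, np.cos(2 * np.pi * f * t))
        q = np.dot(received, np.sin(2 * np.pi * f * t))
        scores.append(np.hypot(i, q))
    scores = np.asarray(scores)
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(int(FS * DURATION)) / FS
    # Simulated received signal: section 2 active, buried in traction-current noise.
    rx = np.sin(2 * np.pi * SECTION_FREQS[2] * t + 0.3) + rng.normal(0, 2.0, t.size)
    idx, scores = detect_section(rx)
    print("detected section:", idx, "scores:", np.round(scores, 1))
```

Because each assumed carrier completes an integer number of cycles over the observation window, the projections are mutually orthogonal, and the margin between the winning score and the others plays the role of the SNR discussed above.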

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 332
355 From the Classroom to Digital Learning Environments: An Action Research on Pedagogical Practices in Higher Education

Authors: Marie Alexandre, Jean Bernatchez

Abstract:

This paper focuses on the complexity of the transition process from face-to-face to distance learning. Our action research aims to support the process of transition from classroom to distance learning for teachers in higher education with regard to pedagogical practices that can meet the various needs of students using digital learning environments. In Quebec and elsewhere in the world, the advent of digital education is helping to transform teaching, which is significantly changing the role of teachers. While distance education (DE) implies a dissociation of teaching and learning, to a variable degree, in space and time, it is increasingly becoming a preferred option for maintaining the delivery of certain programs and providing access to quality activities throughout Quebec. Given the impact of teaching practices on educational success, this paper reports on the results of three research objectives: 1) to document teachers' knowledge of teaching in distance education through the design, experimentation and production of a repertoire of the determinants of pedagogical practices in response to students' needs; 2) to explain, according to a gendered logic, the fit between the pedagogical practices implemented in distance learning and the response to the profiles and needs expressed by students using digital learning environments; and 3) to produce a model of a support approach for the process of transition from classroom to distance learning at the college level. A mixed methodology, i.e., a quantitative component (questionnaire survey) and a qualitative component (explanatory interviews and a living lab), was used in cycles that were part of an ongoing validation process. The intervention includes the establishment of a professional collaboration group, training webinars for the participating teachers on the didactic issue of knowledge-teaching in distance education, the didactic use of technologies, and differentiated socialization models of educational success in college education. All of the tools developed will be used by partners in the target environment as well as by teacher educators, students in initial teacher training, practicing teachers, and the general public. The results show that access to training leading to qualifications and commitment to educational success reflects the existing links between the people in the educational community. The relational stakes of presence in distance education take on multiple configurations, and different dimensions of learning testify to needs and realities that are sometimes distinct depending on the stage of life. This project will be of interest to partners in the targeted field as well as to teacher trainers, students in initial teacher training, practicing college teachers, and university professors. The entire educational community will benefit from digital resources in education. The scientific knowledge resulting from this action research will benefit researchers in the fields of pedagogy, didactics, teacher training and pedagogy in higher education in a digital context.

Keywords: action research, didactics, digital learning environment, distance learning, higher education, technological pedagogy, pedagogical content knowledge

Procedia PDF Downloads 87
354 Investigation of the Cognition Factors of Fire Response Performances Based on Survey

Authors: Jingjing Yan, Gengen He, Anahid Basiri

Abstract:

The design of an indoor navigation system for fire evacuation support requires not only physical feasibility but also a relatively thorough consideration of human factors. This study used a survey to investigate the fire response performance (FRP) of indoor occupants in their 20s, set virtually in an environment of their routine life, focusing on the aspects of indoor familiarity (spatial cognition), psychological stress and decision making. For indoor familiarity, three factors are of interest: familiarity with exits, familiarity with risky places, and the degree of satisfaction with the current installation of indoor signs. According to the results, males have a higher average familiarity with the indoor exits, while both genders have a relatively low level of awareness of risky places. These two factors are positively correlated with the degree of satisfaction with the current installation of the indoor signs, and this correlation is more evident for exit familiarity. Integrating the height factor with the other two indoor familiarity factors can improve the degree of satisfaction with the indoor signs. For psychological stress, this study concentrates on the situated cognition of moving difficulty, nervousness, and speed reduction when using a bending posture during fire evacuation to avoid smoke inhalation. The results show that both genders report a similar, mid-level sensation of moving difficulty. The females have a higher average level of nervousness, while the males have a higher average level of perceived speed reduction. This study assumed that growing indoor spatial cognition can help ease the psychological difficulty and nervousness; however, this only appears to hold after a certain level is reached. When the effects of indoor familiarity and the other two psychological factors are integrated, the correlation with the sensation of speed change is strengthened, based on a stronger positive correlation with the integrated factors. This study also investigated the participants' attitude toward navigation support during evacuation, and the majority of the participants showed positive attitudes. For following the guidance in some extreme cases, i.e., switching to a longer path or to an alternative exit, the majority of the participants expressed confidence in continuing to trust the guidance service. These decisions are affected by the combined influences of indoor familiarity, psychological stress, and attitude toward using the navigation service. Regarding the decision time for the selected extreme cases, deciding to use a longer route took more time on average than deciding to use an alternative exit, and this was more evident for the female participants. This requires further consideration when designing a personalized smartphone-based navigation app. This study also investigated the calming factors for people trapped during evacuation. The top consideration is the distance to the nearest firefighters, and the following considerations are the current fire conditions in the surrounding environment and the locations of all firefighters. According to the results, the ranking of the latter two considerations is strongly gender-dependent.
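As a concrete illustration of the kind of correlation analysis reported above, the short sketch below computes a Pearson correlation between an exit-familiarity item and a sign-satisfaction item; the sample size, Likert scale and data are invented for this example and are not the survey data.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative sketch of the correlation analysis described above; the survey
# items, scale (1-5 Likert) and data below are invented for the example.
rng = np.random.default_rng(42)
n = 120                                    # number of respondents (assumed)
exit_familiarity = rng.integers(1, 6, n)   # familiarity with indoor exits
# Satisfaction loosely tied to exit familiarity, plus noise, clipped to 1-5.
sign_satisfaction = np.clip(
    exit_familiarity + rng.normal(0, 1.2, n), 1, 5
).round()

r, p = pearsonr(exit_familiarity, sign_satisfaction)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```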

Keywords: fire response performances, indoor spatial cognition, situated cognition, survey analysis

Procedia PDF Downloads 143
353 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, since GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alerting, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and Long-Term Evolution (LTE) fingerprints. The proposed scheme reduces the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improves positioning accuracy. The results show that the average positioning error of the proposed scheme is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, it significantly improves positioning performance and reduces radio map construction costs compared to traditional methods.
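The following is a minimal sketch of the fingerprint feature-extraction step mentioned above, reducing hybrid received-signal-strength fingerprints to low-dimensional features with t-SNE; the number of reference points, the number of transmitters and the synthetic RSS values are assumptions for illustration, not the study's dataset.

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative sketch of the fingerprint feature-extraction step described
# above: embedding hybrid WLAN/LTE RSS fingerprints with t-SNE. The number of
# reference points, transmitters and the synthetic RSS values are assumptions
# made only for this example.
rng = np.random.default_rng(7)
n_ref_points, n_sources = 200, 30            # reference points x (WLAN + LTE) sources
# Synthetic fingerprints: per-location mean RSS (dBm) plus measurement noise.
base = rng.uniform(-90, -40, size=(n_ref_points, n_sources))
fingerprints = base + rng.normal(0, 3.0, size=base.shape)

# Project the high-dimensional fingerprints to 2-D features; t-SNE keeps
# locations with similar RSS patterns close together while suppressing noise.
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
features = tsne.fit_transform(fingerprints)
print(features.shape)   # (200, 2) low-dimensional fingerprint features
```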

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 47
352 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process

Authors: Marek Vondra, Petr Bobák

Abstract:

Vacuum evaporation is a reliable and well-proven technology with a wide application range, frequently used in the food, chemical and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate the utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field, and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e., the merging of different types of desalination technologies. Some studies are concerned with the basic principles of the static flash phenomenon, but only a few scientists have lately focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system that provides real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g., seawater, waste water), a sophisticated system for the acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy), and integrated transparent features which enable direct visual monitoring of selected physical mechanisms (water evaporation in the chambers, the water level right before the brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at the desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. The first experimental tests confirm the correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
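As an illustration of how a statistically designed test campaign on such a rig might be enumerated, the sketch below builds a simple full-factorial run plan; the factors and levels are invented placeholders, not the rig's actual operating envelope.

```python
from itertools import product

# Minimal sketch of a full-factorial experiment plan; the factors and levels
# below are illustrative assumptions, not the test unit's real operating range.
factors = {
    "feed_temperature_C": [60, 75, 90],
    "feed_flow_rate_lph": [100, 200],
    "first_stage_pressure_kPa": [10, 20],
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, start=1):
    print(f"run {i:02d}: {run}")
print(f"total runs: {len(runs)}")   # 3 * 2 * 2 = 12 factorial combinations
```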

Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation

Procedia PDF Downloads 387
351 University Curriculum Policy Processes in Chile: A Case Study

Authors: Victoria C. Valdebenito

Abstract:

Located within the context of accelerating globalization in the 21st-century knowledge society, this paper focuses on one selected university in Chile at which radical curriculum policy changes have been taking place, diverging from the traditional undergraduate curriculum in Chile, as part of a larger investigation. Using a 'policy trajectory' framework, and guided by an interpretivist approach to research, interview transcripts and institutional documents were analyzed in relation to the meso (university administration) and the micro (academics) levels. Within the case study, participants from the university administration and academic levels were selected via both snowball and purposive sampling; they therefore had different levels of seniority, with some having participated actively in the curriculum reform processes. Documents and interview transcripts were analyzed to reveal the major themes emerging from the data. A further 'bigger picture' analysis guided by critical theory was then undertaken, involving interrogation of the underlying ideologies and of how political and economic interests influence the cultural production of policy. The case-study university was selected because it represents a traditional, long-established university setting in the country that is undergoing curriculum changes based on international trends such as the competency model and the liberal arts, and because it is representative of a particular socioeconomic sector of the country. Access to the university was gained through email contact. Qualitative research methods were used, namely interviews and the analysis of institutional documents. In all, 18 people were interviewed; the number was determined by when the saturation criterion was met. Semi-structured interview schedules were based on the four research questions about influences, policy texts, policy enactment and longer-term outcomes. Triangulation of information was used for the analysis. While there was no intention to generalize the specific findings of the case study, the results of the research were used as a focus for engagement with broader themes that are often evident in global higher education policy developments. The research results were organized around major themes in three of the four contexts of the 'policy trajectory'. Regarding the context of influences and the context of policy text production, themes relate to the hegemony exercised by first-world countries' universities in the higher education field, its associated neoliberal ideology, with accountability and the discourse of continuous improvement, the local responses to those pressures, and the value of interdisciplinarity. Finally, regarding the context of policy practices and effects (enactment), themes emerged around the impacts of the curriculum changes on university staff and students, and around resistance amongst academics. The research concluded with a few recommendations that potentially provide 'food for thought' beyond the localized settings of this study, as well as possibilities for further research.

Keywords: curriculum, global-local dynamics, higher education, policy, sociology of education

Procedia PDF Downloads 78
350 Hydrogen Production from Auto-Thermal Reforming of Ethanol Catalyzed by Tri-Metallic Catalyst

Authors: Patrizia Frontera, Anastasia Macario, Sebastiano Candamano, Fortunato Crea, Pierluigi Antonucci

Abstract:

The increase in world energy demand makes biomass an attractive energy source today, with a view to minimizing CO2 emissions and reducing global warming. Recently, COP-21, the international meeting on global climate change, defined the roadmap for sustainable worldwide development based on low-carbon fuels. Hydrogen is an energy vector able to substitute for conventional petroleum-derived fuels. Ethanol for hydrogen production represents a valid alternative to fossil sources due to its low toxicity, low production costs, high biodegradability, high H2 content and renewability. Ethanol conversion to generate hydrogen by a combination of partial oxidation and steam reforming reactions is generally called auto-thermal reforming (ATR). The ATR process is advantageous due to its low energy requirements and the reduced formation of carbonaceous deposits. The catalyst plays a pivotal role in the ATR process, especially regarding the process selectivity and the formation of carbonaceous deposits. Bimetallic or trimetallic catalysts, as well as catalysts with promoter-doped supports, may exhibit higher activity, selectivity and deactivation resistance with respect to the corresponding monometallic ones. In this work, NiMoCo/GDC, NiMoCu/GDC and NiMoRe/GDC (where GDC is the gadolinia-doped ceria support and the metal composition is 60:30:10 for all catalysts) have been prepared by the impregnation method. The support, gadolinia (0.2) doped ceria (0.8), was impregnated with metal precursors solubilized in an aqueous ethanol solution (50%) at room temperature for 6 hours. After this, the catalysts were dried at 100°C for 8 hours and subsequently calcined at 600°C in order to obtain the metal oxides. Finally, the active catalysts were obtained by a reduction procedure (H2 atmosphere at 500°C for 6 hours). All samples were characterized by different analytical techniques (XRD, SEM-EDX, XPS, CHNS, H2-TPR and Raman spectroscopy). Catalytic experiments (auto-thermal reforming of ethanol) were carried out in the temperature range 500-800°C under atmospheric pressure, using a continuous fixed-bed microreactor. The effluent gases from the reactor were analyzed by two Varian CP4900 chromatographs with a TCD detector. The analytical investigation focused on preventing coke deposition, metal sintering and sulfur poisoning. Hydrogen productivity, ethanol conversion and product distribution were measured and analyzed. At 600°C, all tri-metallic catalysts show their best performance, with H2 + CO reaching almost 77 vol.% in the final gases. The NiMoCo/GDC catalyst shows the best selectivity to hydrogen with respect to the other tri-metallic catalysts (41 vol.% at 600°C). On the other hand, NiMoCu/GDC and NiMoRe/GDC demonstrated high resistance to sulfur poisoning (up to 200 cc/min) with respect to the NiMoCo/GDC catalyst. The correlation between the catalytic results and the surface properties of the catalysts will be discussed.
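For orientation, the idealized overall stoichiometry behind the ATR route described above can be written as a weighted combination of ethanol steam reforming and partial oxidation; the oxygen-to-ethanol ratio x below is a generic design parameter, not a value reported in this work.

```latex
% Idealized (complete-conversion) stoichiometry; x is the O2/ethanol molar ratio.
\begin{align}
  \text{Steam reforming:} \quad & \mathrm{C_2H_5OH + 3\,H_2O \rightarrow 2\,CO_2 + 6\,H_2} \\
  \text{Partial oxidation:} \quad & \mathrm{C_2H_5OH + \tfrac{3}{2}\,O_2 \rightarrow 2\,CO_2 + 3\,H_2} \\
  \text{Auto-thermal reforming:} \quad & \mathrm{C_2H_5OH + x\,O_2 + (3-2x)\,H_2O \rightarrow 2\,CO_2 + (6-2x)\,H_2},
  \quad 0 \le x \le \tfrac{3}{2}
\end{align}
```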

Keywords: catalysts, ceria, ethanol, gadolinia, hydrogen, nickel

Procedia PDF Downloads 155
349 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and to correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the maximum error of the engine fan speed prediction relative to the FCOM was reduced from 5.0% to 0.2% after only ten flights.
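The sketch below illustrates the general idea of an adaptive lookup table of the kind described, though it is not the authors' algorithm: a fuel-flow table indexed by altitude and Mach is read by bilinear interpolation, and the surrounding grid points are nudged toward each in-flight measurement. The grid ranges, learning rate and synthetic 'truth' model are assumptions for the demo.

```python
import numpy as np

# Minimal sketch of an adaptive lookup table: a fuel-flow table indexed by
# altitude and Mach, read by bilinear interpolation and corrected toward
# in-flight measurements. All numbers below are assumptions for the demo.
alt_grid = np.linspace(30_000, 45_000, 7)       # ft
mach_grid = np.linspace(0.70, 0.90, 5)
table = np.full((alt_grid.size, mach_grid.size), 1200.0)  # initial guess, kg/h

def _bracket(grid, x):
    """Lower grid index and interpolation weight for coordinate x."""
    i = np.clip(np.searchsorted(grid, x) - 1, 0, grid.size - 2)
    w = (x - grid[i]) / (grid[i + 1] - grid[i])
    return i, np.clip(w, 0.0, 1.0)

def predict(alt, mach):
    """Bilinear interpolation of the table at (alt, mach)."""
    i, u = _bracket(alt_grid, alt)
    j, v = _bracket(mach_grid, mach)
    return ((1 - u) * (1 - v) * table[i, j] + u * (1 - v) * table[i + 1, j]
            + (1 - u) * v * table[i, j + 1] + u * v * table[i + 1, j + 1])

def update(alt, mach, measured, lr=0.3):
    """Move the four surrounding grid points toward the measurement,
    weighted by their interpolation weights (an LMS-style correction)."""
    i, u = _bracket(alt_grid, alt)
    j, v = _bracket(mach_grid, mach)
    err = measured - predict(alt, mach)
    for di, dj, w in ((0, 0, (1 - u) * (1 - v)), (1, 0, u * (1 - v)),
                      (0, 1, (1 - u) * v), (1, 1, u * v)):
        table[i + di, j + dj] += lr * w * err

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = lambda a, m: 900 + 0.01 * (45_000 - a) + 800 * (m - 0.7)  # fake truth
    for _ in range(2000):                       # simulated cruise samples
        a = rng.uniform(30_000, 45_000); m = rng.uniform(0.70, 0.90)
        update(a, m, truth(a, m) + rng.normal(0, 5))
    print(f"error at test point: {predict(37_000, 0.8) - truth(37_000, 0.8):+.1f} kg/h")
```

Weighting the correction by the interpolation weights means each measurement mainly adjusts the table cells that actually produced the prediction, which is what lets such a model track slow performance degradation over successive flights.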

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 264
348 Evaluation of Cryoablation Procedures in Treatment of Atrial Fibrillation from 3 Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, B. Nasseri, S. Klotz, H. H. Sievers, S. Mohamed

Abstract:

Cryoablation is increasingly applied for the interventional treatment of paroxysmal (PAAF) or persistent atrial fibrillation (PEAF). In cardiac surgery, this procedure is often combined with coronary artery bypass grafting (CABG) and valve operations. Three different methods, differing in extent and mechanism, are performed in our heart center: lone left atrial cryoablation, Cox-Maze IV and Cox-Maze III. 415 patients (68 ± 0.8 years, 68.2% male) with pre-existing atrial fibrillation who initially required either coronary or valve operations were enrolled and divided into 3 matched groups according to the deployed procedure: CryoLA group (cryoablation of the lone left atrium, n=94), Cox-Maze IV group (n=93) and Cox-Maze III group (n=8). All patients additionally received closure of the left atrial appendage (LAA) and regularly underwent three years of ambulatory follow-up assessments (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation was assessed directly by means of a cardiac monitor (Reveal XT, Medtronic) or a 3-day Holter electrocardiogram. The frequencies of AF attacks and their circadian patterns were systematically analyzed. Furthermore, anticoagulants and regular rate-/rhythm-controlling medications were evaluated and listed in terms of rate-control and rhythm-control regimens. Concerning PAAF treatment, the Cox-Maze IV procedure provided a therapeutically acceptable effect, as did lone left atrial (LA) cryoablation (5.25 ± 5.25% vs. 10.39 ± 9.96% AF burden, p > 0.05). Interestingly, the Cox-Maze III method presented a better short-term effect in PEAF therapy in comparison to lone LA cryoablation and Cox-Maze IV (0.25 ± 0.23% vs. 15.31 ± 5.99% and 9.10 ± 3.73% AF burden within the first year, p < 0.05). However, this therapeutic advantage was lost during the ongoing follow-ups (26.65 ± 24.50% vs. 8.33 ± 8.06% and 15.73 ± 5.88% in the third follow-up year). In this way, lone LA cryoablation established its antiarrhythmic efficacy, and 69.5% of patients were released from vitamin K antagonists, while Cox-Maze IV freed 67.2% of patients from continuous anticoagulant medication. For all three procedures, recurrent AF attacks mostly lasted less than 60 minutes (p > 0.05). Regarding the circadian distribution of the recurrent attacks, weighted over the ongoing follow-ups, lone LA cryoablation achieved and stabilized its antiarrhythmic effect over time, which was especially observed in the treatment of PEAF, while the antiarrhythmic effects of Cox-Maze IV and III weakened progressively. A similar pattern was observed for the circadian rhythm of recurring AF attacks. Furthermore, a rate-control strategy was applied much more often than a rhythm-control strategy to support and maintain the therapeutic successes obtained. Based on the experience in our heart center, lone LA cryoablation presented effects in the treatment of AF equivalent to those of the Cox-Maze IV and III procedures. These therapeutic successes were especially evident in patients suffering from persistent AF (PEAF). Additional supportive strategies such as a rate-control regimen should be initiated and implemented according to appropriate criteria to improve the therapeutic effects of cryoablation.

Keywords: AF burden, atrial fibrillation, cardiac monitor, Cox-Maze, cryoablation, Holter, LAA

Procedia PDF Downloads 204
347 The Digital Desert in Global Business: Digital Analytics as an Oasis of Hope for Sub-Saharan Africa

Authors: David Amoah Oduro

Abstract:

In the ever-evolving terrain of international business, a profound revolution is underway, guided by the swift integration and advancement of disruptive technologies like digital analytics. In today's international business landscape, where competition is fierce, and decisions are data-driven, the essence of this paper lies in offering a tangible roadmap for practitioners. It is a guide that bridges the chasm between theory and actionable insights, helping businesses, investors, and entrepreneurs navigate the complexities of international expansion into sub-Saharan Africa. This practitioner paper distils essential insights, methodologies, and actionable recommendations for businesses seeking to leverage digital analytics in their pursuit of market entry and expansion across the African continent. What sets this paper apart is its unwavering focus on a region ripe with potential: sub-Saharan Africa. The adoption and adaptation of digital analytics are not mere luxuries but essential strategic tools for evaluating countries and entering markets within this dynamic region. With the spotlight firmly fixed on sub-Saharan Africa, the aim is to provide a compelling resource to guide practitioners in their quest to unearth the vast opportunities hidden within sub-Saharan Africa's digital desert. The paper illuminates the pivotal role of digital analytics in providing a data-driven foundation for market entry decisions. It highlights the ability to uncover market trends, consumer behavior, and competitive landscapes. By understanding Africa's incredible diversity, the paper underscores the importance of tailoring market entry strategies to account for unique cultural, economic, and regulatory factors. For practitioners, this paper offers a set of actionable recommendations, including the creation of cross-functional teams, the integration of local expertise, and the cultivation of long-term partnerships to ensure sustainable market entry success. It advocates for a commitment to continuous learning and flexibility in adapting strategies as the African market evolves. This paper represents an invaluable resource for businesses, investors, and entrepreneurs who are keen on unlocking the potential of digital analytics for informed market entry in Africa. It serves as a guiding light, equipping practitioners with the essential tools and insights needed to thrive in this dynamic and diverse continent. With these key insights, methodologies, and recommendations, this paper is a roadmap to prosperous and sustainable market entry in Africa. It is vital for anyone looking to harness the transformational potential of digital analytics to create prosperous and sustainable ventures in a region brimming with promise. In the ever-advancing digital age, this practitioner paper becomes a lodestar, guiding businesses and visionaries toward success amidst the unique challenges and rewards of sub-Saharan Africa's international business landscape.

Keywords: global analytics, digital analytics, sub-Saharan Africa, data analytics

Procedia PDF Downloads 72
346 Forced Migrants in Israel and Their Impact on the Urban Structure of Southern Neighborhoods of Tel Aviv

Authors: Arnon Medzini, Lilach Lev Ari

Abstract:

Migration, the driving force behind increased urbanization, has made cities much more diverse places to live in. Nearly one-fifth of all migrants live in the world’s 20 largest cities, and in many of these global cities migrants constitute over a third of the population. Many contemporary migrants are in fact ‘forced migrants,’ pushed from their countries of origin by political or ethnic violence, persecution, or natural disasters. During the past decade, massive numbers of labor migrants and asylum seekers have migrated from African countries to Israel via Egypt. Their motives for leaving their countries of origin include ongoing and bloody wars on the African continent as well as corruption, severe poverty and hunger, and economic and political disintegration. Most of the African migrants came to Israel from Eritrea and Sudan, as they saw Israel as the closest natural geographic asylum to Africa; they soon found their way to the metropolitan Tel Aviv area. There they concentrated in poor neighborhoods in the southern part of the city, where they live under conditions of crowding, poverty, and poor sanitation. Today around 45,000 African migrants reside in these neighborhoods, and yet there is no legal option for expelling them due to the dangers they might face upon returning to their native lands. Migration of such magnitude to the weakened neighborhoods of south Tel Aviv can lead to the destruction of physical, social and human infrastructures. The character of the neighborhoods is changing, and the local population is the main victim. The local residents must bear the brunt of the failure of both the authorities and the government to handle the illegal inhabitants. The extremely crowded living conditions place a heavy burden on the dilapidated infrastructures in the weakened areas where the refugees live and increase the distress of the veteran residents of the neighborhoods. Some problems are economic, some stem from damage to the services the residents are entitled to, and others from a drastic decline in their standard of living. Even the public parks no longer serve the purpose for which they were originally established, namely the well-being of the public and the neighborhood residents; they have become the main gathering place for the infiltrators and a center of crime and violence. Based on secondary data analysis (for example, data from Israel’s Population, Immigration and Border Authority and from the hotline for refugees and migrants), the objective of this presentation is to discuss the effects of forced migration to Tel Aviv on the following tensions: between the local population and the immigrants, between the local population and the state authorities, and between human rights groups and nationalist local organizations. We will also describe the changes that have taken place in the urban infrastructure of the city of Tel Aviv and discuss the efficacy of various Israeli strategic trajectories for handling the human problems arising in the marginal urban regions where the forced migrant population is concentrated.

Keywords: African asylum seekers, forced migrants, marginal urban regions, urban infrastructure

Procedia PDF Downloads 252