Search results for: Solve Elmstahl
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1700

80 Use of Extended Conversation to Boost Vocabulary Knowledge and Soft Skills in English for Employment Classes

Authors: James G. Matthew, Seonmin Huh, Frank X. Bennett

Abstract:

English for Specific Purposes (ESP) aims to equip learners with the necessary English language skills. Many ESP programs address language skills for job performance, including reading job-related documents and oral proficiency. Within ESP is English for Occupational Purposes (EOP), which centers on developing communicative competence for the globalized workplace. Many ESP and EOP courses lack the content needed to help students progress at work, resulting in the need to create lexical compilations for different professions. It is important to teach communicative competence and soft skills for real job-related problem situations and to address the complexities of the real world in order to help students succeed in their professions. ESP and EOP research is therefore trying to balance profession-specific educational content with international, multi-disciplinary language skills for the globalized workforce. The current study builds upon the existing discussion by developing pedagogy that assists students in their careers through a strong practical command of relevant English vocabulary. Our research question focuses on the pedagogy two professors incorporated into their English for employment courses. The current study is a qualitative case study on the modes of teaching delivery for EOP in South Korea. Two foreign professors teaching at two different universities in South Korea volunteered for the study to explore their teaching practices. Both professors’ curricula included employment-related concept vocabulary, business presentations, CV/resume and cover letter preparation, and job interview preparation. All pre-recorded video lectures, live online class sessions with students, teachers’ lesson plans, teachers’ class materials, students’ assignments, and midterm and final video conferences were collected for data analysis. The study then focused on unpacking representative patterns in the professors’ teaching methods. The professors used their strengths as native speakers to extend class discussion from narrow, restricted exchanges to broader opportunities for students to practice authentic English conversation. The teaching methods used three main steps to extend the conversation. Firstly, students were taught concept vocabulary. Secondly, the vocabulary was combined in speaking activities in which students had to solve scenarios and were required to expand on the given word forms and language expressions. Lastly, the students had conversations in English using the language learnt. The conversations observed in both classes were authentic, expanded English communication, and this way of developing concept vocabulary lessons into extended conversation is one representative pedagogical approach that both professors took. Extended English conversation, therefore, is crucial for EOP education.

Keywords: concept vocabulary, English as a foreign language, English for employment, extended conversation

Procedia PDF Downloads 74
79 Mapping and Measuring the Vulnerability Level of the Belawan District Community in Encountering the Rob Flood Disaster

Authors: Dessy Pinem, Rahmadian Sembiring, Adanil Bushra

Abstract:

Medan Belawan is one of the 21 subdistricts of Medan and is directly adjacent to the Malacca Strait in the north. Because of this direct border with the strait, the subdistrict has faced a problem that has persisted for many years: tidal (rob) flooding. In 2015, rob floods inundated Sicanang, Belawan I, Belawan Bahagia and Bagan Deli urban villages. The inundation caused by the rob flood of September 2015 reached 540,938 ha. A rob flood is a phenomenon in which sea water overflows onto the mainland; it can also be described as a pool of water on coastal land that forms at high tide, inundating parts of the coastal plain that lie below the high-tide sea level. Rob flooding is a daily disaster faced by residents of the Medan Belawan district. Rob floods can happen every month and last for a week. The floods soak not only the residents' houses but also the main road to Belawan Port, reaching depths of 50 cm. To deal with the problems caused by the floods and to prepare coastal communities for the character of coastal areas, it is necessary to know the vulnerability of the people who are repeatedly the victims of rob flooding. Are the people of Medan Belawan subdistrict, especially in the flood-affected villages, able to cope with the consequences of the floods? To answer this question, it is necessary to assess the vulnerability of the Belawan District community in the face of the flood disaster. This research is descriptive, qualitative and quantitative. Data were collected by observation, interviews and questionnaires in four urban villages often affected by rob flooding. The vulnerabilities measured are physical, economic, social, environmental, organizational and motivational. For physical vulnerability, the data collected were building spacing, floor area ratio, drainage and building materials. For economic vulnerability, the data collected were income, employment, building ownership and insurance ownership. For social vulnerability, the data collected were education, number of family members, children, the elderly, gender, disaster training and waste disposal practices. For organizational vulnerability, the data collected were the existence of organizations that advocate for the victims, their policies and the laws governing the handling of tidal flooding. Motivational vulnerability was assessed from the presence of an information centre for questions and answers about rob flooding and the existence of an evacuation plan or route to avoid disaster or reduce casualties. The results of this study indicate that most people in Medan Belawan subdistrict have a high level of vulnerability in the physical, economic, social, environmental, organizational and motivational fields. They have no access to economic empowerment, no insurance and no motivation to solve problems, relying only on the government; they lack organizations that support and defend them, and their buildings are easily destroyed by rob floods.

Keywords: disaster, rob flood, Medan Belawan, vulnerability

Procedia PDF Downloads 107
78 Intended Use of Genetically Modified Organisms, Advantages and Disadvantages

Authors: Pakize Ozlem Kurt Polat

Abstract:

A GMO (genetically modified organism) is the result of a laboratory process in which genes from the DNA of one species are extracted and artificially inserted into the genes of an unrelated plant or animal. The technology includes nucleic acid hybridization, recombinant DNA and RNA techniques, PCR, cell culture and gene cloning. Studies can be divided into three groups according to the properties transferred to the transgenic plant: about 59% concern herbicide resistance, 28% concern resistance to insects and viruses, and 13% relate to quality characteristics. Not every transgenic crop has reached commercial production; the main commercial crops are soybean, maize, canola and cotton. The steadily growing interest in GMOs can be grouped as follows: use in the health area (organ transplantation, gene therapy, vaccines and drugs); use in the industrial area (vitamins, monoclonal antibodies, vaccines, anti-cancer compounds, antioxidants, plastics, fibres, polyethers, human blood proteins and carotenoids, as well as emulsifiers, sweeteners, enzymes and food preservatives used as flavour enhancers or colour modifiers); and use in agriculture (herbicide resistance; resistance to insects, viruses, bacteria and fungal diseases; extended shelf life; improved quality; tolerance of drought, salinity and extreme conditions such as frost; and improved nutritional value and quality). We explain all of these applications step by step in this research. GMOs have advantages and disadvantages, which we explain clearly in the full text; because of this topic, researchers worldwide are divided into two camps. Some researchers consider that GMOs have many disadvantages and should not be used, while others hold the opposite view. Looking at national legislation on GMOs requires knowing the biosafety law of each country and union. For biosafety reasons, and to minimize the problems caused by transgenic plants, 130 countries, including Turkey, signed the United Nations Biosafety Protocol on 24 May 2000. This protocol was prepared in connection with the Cartagena Protocol on Biosafety, which entered into force on 11 September 2003. It addresses the risks that GMOs in general use pose to human health and biodiversity, and covers the prevention, transit and transboundary movement of all GMOs that may have such effects. Under this protocol, one has to know the US regulations on GMOs, the European Union regulations on GMOs and the Turkish regulations on GMOs; these three frameworks have different applications and rules. The world population is increasing day by day and agricultural land is shrinking; for this reason, in order to feed humans and animals, we should improve agricultural product yield and quality. Scientists are trying to solve this problem, and one solution is molecular biotechnology, which includes GMO methods. Before deciding to support or oppose GMOs, one should know the GMO protocols and their effects.

Keywords: biotechnology, GMO (genetically modified organism), molecular marker

Procedia PDF Downloads 219
77 The Psycho-Linguistic Aspect of Translation Gaps in Teaching English for Specific Purposes

Authors: Elizaveta Startseva, Elena Notina, Irina Bykova, Valentina Ulyumdzhieva, Natallia Zhabo

Abstract:

With the various existing models of intercultural communication that contain a vast number of stages of foreign language acquisition, there is a need for conscious perception of the foreign culture. Such a process is associated with the emergence of linguistic conflict and with students’ consistent desire to solve the problem of language differences along with cultural discrepancies. The aim of this study is to present modern ways and methods of removing psycholinguistic conflict through skills development in professional translation and intercultural communication. The study was conducted among groups of first- to fourth-year students of the Medical Institute and the Agro-Technological Institute of RUDN University. In the course of training, students gained knowledge in such disciplines as basic grammar and vocabulary of the English language, phonetics, lexicology, introduction to linguistics, theory of translation, and annotating and referencing media texts and specialty texts. The students learned to present their research work and participated in university and external conferences with their reports and presentations. Common strategies for removing linguistic and cultural conflict can be attributed to the development of such abilities of a language personality as a commitment to communication and cooperation, the formation of cultural awareness and empathy towards other cultures, realistic self-esteem, emotional stability, tolerance, etc. The process of mastering a foreign language and the culture of the target language leads to a reduplication of linguistic identity, which in turn leads to the successive formation of the so-called 'secondary linguistic personality.' In our study, we tried to approach the problem comprehensively, focusing on translation gaps in technical and non-technical language, which still lack a typology that could classify all lacunae on a single principle. When obtaining background knowledge, students learn to overcome the difficulties posed by the nation-specific and linguistic differences of the cultures in contact, i.e., to eliminate the gaps (to fill them in and compensate for them). Compensation of gaps is a means of fixing them and the initial phase of elimination; in some cases, but not in others, it is followed by the filling of semantic voids (plenus). The concept of plenus occurs in most cases of translation gaps, for example in transcription and transliteration (of interculturalisms and exoticisms) and in replication (reproduction of the morphemic structure of words or idioms). In all the above cases, the task of the translator is to ensure an identical response from the receptors of the original and translated texts, since any statement is created with the goal of obtaining a communicative effect, and hence pragmatic potential is the most important part of its content. The practical value of our work lies in improving the methodology of teaching English for specific purposes on the basis of the psycholinguistic concept of the secondary language personality.

Keywords: lacuna, language barrier, plenus, secondary language personality

Procedia PDF Downloads 265
76 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings by the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
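The two-stage approach described above, first forecasting the outdoor temperature and then the thermal demand, can be illustrated with a short sketch using a standard LSTM implementation. This is not the authors' code: the array names (`temperature`, `demand`), the synthetic placeholder data, the 48-hour lookback and all hyperparameters are assumptions, and the coupling of the stage-1 temperature forecast into stage 2 is simplified to using temperature history as a second input channel.

```python
# Minimal sketch (not the authors' code): two LSTM forecasters built with Keras,
# one for the next-24h outdoor temperature and one for thermal demand.
import numpy as np
import tensorflow as tf

LOOKBACK, HORIZON = 48, 24  # hours of history used, hours forecast ahead

# Synthetic hourly placeholder series standing in for the real measurements.
rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
temperature = 10 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
demand = 60 - 1.5 * temperature + rng.normal(0, 2, hours.size)

def windows(values, lookback=LOOKBACK, horizon=HORIZON):
    """Turn a (time, features) array into supervised pairs; the target is column 0."""
    X, y = [], []
    for i in range(len(values) - lookback - horizon + 1):
        X.append(values[i:i + lookback])
        y.append(values[i + lookback:i + lookback + horizon, 0])
    return np.asarray(X), np.asarray(y)

def lstm_forecaster(n_features):
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(LOOKBACK, n_features)),
        tf.keras.layers.Dense(HORIZON),   # one output per forecast hour
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Stage 1: forecast outdoor temperature from its own history.
temp_X, temp_y = windows(temperature[:, None])
temp_model = lstm_forecaster(n_features=1)
temp_model.fit(temp_X, temp_y, epochs=5, batch_size=32, validation_split=0.2)

# Stage 2: forecast thermal demand from past demand plus temperature
# (in the full approach, the stage-1 temperature forecast would also be fed in).
dem_X, dem_y = windows(np.stack([demand, temperature], axis=1))
demand_model = lstm_forecaster(n_features=2)
demand_model.fit(dem_X, dem_y, epochs=5, batch_size=32, validation_split=0.2)
```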

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 116
75 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem

Authors: Markandeya Kunchi

Abstract:

Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes primary, as new products, features, and services are launched very frequently, and there are losses to the company if an unusable and inefficient product is put on the market and rejected by customers. This paper tries to answer why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, technology constraints, etc. of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to find out the structure, project planning, product management method and role of the design team in a mid-level company. The paper addresses the usual apprehensions of a company about inculcating UT within the team, and it stresses factors such as limited monetary resources, the lack of a usability expert, narrow timelines, and a lack of understanding from higher management as some primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has severe repercussions, such as very little team involvement, huge cost, misinterpretation of the findings, elongated timelines, and a lack of empathy towards the customer. The shortfalls of not having a UT process in place within the team, or of conducting UT through vendors, are bad user experiences for customers interacting with the product and badly designed products that are neither useful nor utilitarian. As a result, companies see dipping conversion rates in apps and websites, huge bounce rates and increased uninstall rates. Thus, there was a need for a leaner UT system in place which could solve all these issues for the company. This paper focuses on optimizing the UT process with a collaborative method; the degree of optimization and the structure of the collaborative method are the highlights of this paper. The collaborative method of UT is one in which the centralised design team of the company takes responsibility for conducting and analysing the UT. The UT is usually formative, where designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting smooth UT with users in-house. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across the different lines of business, including improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make this method leaner by reducing the number of days needed to complete the entire project, from planning between teams to publishing the UT report.

Keywords: collaborative method, e-commerce, product management method, usability testing

Procedia PDF Downloads 98
74 An Evaluation of a First Year Introductory Statistics Course at a University in Jamaica

Authors: Ayesha M. Facey

Abstract:

The evaluation sought to determine the factors associated with the high failure rate among students taking a first-year introductory statistics course. Utilizing Tyler’s objective-based model, the main objectives were: to assess the effectiveness of the lecturer’s teaching strategies; to determine the proportion of students who attend lectures and tutorials frequently and the impact of infrequent attendance on performance; to determine how the assigned activities assisted students’ understanding of the course content; to ascertain the possible issues faced by students in understanding the course material and obtain possible solutions to these challenges; and to determine whether the learning outcomes had been achieved, based on an assessment of the second in-course examination. A quantitative survey research strategy was employed, and the study population was students enrolled in semester one of the academic year 2015/2016. A convenience sampling approach was employed, resulting in a sample of 98 students. Primary data were collected using self-administered questionnaires over a one-week period. Secondary data were obtained from the results of the second in-course examination. Data were entered and analyzed in SPSS version 22, and both univariate and bivariate analyses were conducted on the information obtained from the questionnaires. Univariate analyses provided a description of the sample through means, standard deviations and percentages, while bivariate analyses were done using Spearman’s rho correlation coefficient and chi-square tests. For the secondary data, an item analysis was performed to obtain the reliability of the examination questions, the difficulty index and the discrimination index. The examination results also provided information on the weak areas of the students and highlighted the learning outcomes that were not achieved. Findings revealed that students were more likely to participate in lectures than tutorials and that attendance was high for both lectures and tutorials. There was a significant relationship between participation in lectures and performance on the examination. However, a high proportion of students had been absent from three or more tutorials as well as lectures. A higher proportion of students indicated that they only sometimes completed the assignments given in lectures, while they rarely completed tutorial worksheets. Students who were more likely to complete their assignments were significantly more likely to perform well on their examination. Additionally, students faced a number of challenges in understanding the course content, and the topics of probability, the binomial distribution and the normal distribution were the most challenging. The item analysis also highlighted these topics as problem areas. Problems with mathematics and with application and analysis were the major challenges faced by students, and most students indicated that some of the challenges could be alleviated if additional examples were worked in lectures and they were given more time to solve questions. Analysis of the examination results showed that a number of learning outcomes were not achieved for a number of topics. Based on the findings, recommendations were made suggesting adjustments to grade allocations, the delivery of lectures and the methods of assessment.
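For readers unfamiliar with the item-analysis terms used here, the sketch below shows how the difficulty index, the discrimination index and Cronbach's alpha could be computed from dichotomously scored exam data. It is purely illustrative: the array name `responses`, the 27% upper/lower groups and the use of Python rather than the SPSS workflow described above are assumptions.

```python
# Illustrative item analysis (not the author's SPSS procedure).
# `responses` is assumed to be a 0/1 score matrix of shape (n_students, n_items).
import numpy as np

def item_analysis(responses, tail=0.27):
    n_students, n_items = responses.shape
    total = responses.sum(axis=1)

    # Difficulty index: proportion of students answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination index: item facility in the top `tail` of students
    # (ranked by total score) minus item facility in the bottom `tail`.
    k = max(1, int(round(tail * n_students)))
    order = np.argsort(total)
    discrimination = (responses[order[-k:]].mean(axis=0)
                      - responses[order[:k]].mean(axis=0))

    # Cronbach's alpha: internal-consistency reliability of the whole paper.
    alpha = n_items / (n_items - 1) * (1 - responses.var(axis=0, ddof=1).sum()
                                       / total.var(ddof=1))
    return difficulty, discrimination, alpha
```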

Keywords: evaluation, item analysis, Tyler’s objective based model, university statistics

Procedia PDF Downloads 173
73 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes’ theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer’s past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer’s previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that attracts particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modeling of variables because: 1. Engineers already draw an estimation of material properties from the experience collected during the assessment of similar structures, or from similar cases collected in the literature or in databases; 2. Material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer’s qualitative judgments. Automated modeling of variables can help in spreading the probabilistic reliability assessment of existing buildings in common engineering practice, and in targeting the best intervention and further tests on the structure; CBR represents a technique which may help to achieve this.
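As a concrete illustration of the Bayesian updating step that each stored case would rely on, the sketch below performs a conjugate update of a normally distributed material parameter (for example, a concrete compressive strength in MPa). The prior values, the test results and the assumption of known measurement noise are purely illustrative and do not reproduce the author's analysis.

```python
# Illustrative conjugate Bayesian update of a Normal material parameter
# (prior from design-stage information and judgment, data from material tests).
import numpy as np

def update_normal_prior(prior_mean, prior_std, measurements, meas_std):
    """Posterior of a Normal mean with known measurement noise."""
    n = len(measurements)
    prior_prec = 1.0 / prior_std**2            # precision = 1 / variance
    data_prec = n / meas_std**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean
                            + data_prec * np.mean(measurements))
    return post_mean, np.sqrt(post_var)

# Placeholder numbers: prior guess 30 +/- 5 MPa, three core tests, 3 MPa test scatter.
post_mean, post_std = update_normal_prior(30.0, 5.0, [27.5, 29.0, 31.2], 3.0)
```

In a CBR setting, the posterior obtained in this way for a tested building would be stored together with the qualitative description of its material and retrieved as a prior for similar, untested cases.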

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 317
72 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation

Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez

Abstract:

The European Union aims to strongly reduce its CO₂ emissions from the energy and industrial sectors by 2030. The energy sector contributes more than two-thirds of the CO₂ emission share derived from anthropogenic activities. Although efforts are mainly focused on the use of renewables by the energy production sector, carbon capture and storage (CCS) remains a frontline option to reduce CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using a higher oxygen content in the oxidizer, which increases the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. The production of a more CO₂-concentrated flue gas enhances CO₂ absorption into the solvent, leading to further reductions in solvent flow rate, equipment size and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, an economic evaluation methodology was developed in detail to determine the main economic parameters for CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and solve the energy and mass balances. Capital and investment costs were determined from the purchased cost of equipment, together with engineering costs and project and process contingencies. The annual capital cost and the operating and maintenance costs were then obtained. A complete energy balance was performed to determine the net power produced in each case. The baseline case consists of a supercritical 500 MWe coal-fired power plant using anthracite as fuel, without any CO₂ capture system. Four cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40% v/v and 75% v/v). A CO₂ chemical absorption process using monoethanolamine (MEA) was used as the CO₂ separation process, whereas the O₂ requirement was met using a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a 15% reduction in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also reduced the investment costs required for the ASU by almost half in comparison with the oxy-combustion cases. Partial oxy-combustion has a significant impact on the performance of both CO₂ separation and O₂ production technologies, and it can lead to further energy reductions using new developments in both CO₂ and O₂ separation processes.
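The cost indicators mentioned here (LCOE and the CO₂ captured and avoided costs) follow standard definitions; the sketch below shows one way they can be computed. The discount rate, plant life and all inputs are placeholders, not the values derived from the ASPEN Plus™ models and equipment costing in the paper.

```python
# Hedged sketch of standard CCS cost metrics; all inputs are placeholders.

def annuity_factor(rate=0.08, years=25):
    """Capital recovery factor used to annualize the total capital requirement."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, fixed_om, variable_om_per_mwh, fuel_per_mwh, net_mwh,
         rate=0.08, years=25):
    """Levelized cost of electricity in currency units per MWh."""
    annual_capital = capex * annuity_factor(rate, years)
    return (annual_capital + fixed_om) / net_mwh + variable_om_per_mwh + fuel_per_mwh

def co2_captured_cost(lcoe_capture, lcoe_ref, captured_t_per_mwh):
    return (lcoe_capture - lcoe_ref) / captured_t_per_mwh            # per tCO2 captured

def co2_avoided_cost(lcoe_capture, lcoe_ref, emitted_ref, emitted_capture):
    """emitted_* are specific emissions in tCO2 per net MWh."""
    return (lcoe_capture - lcoe_ref) / (emitted_ref - emitted_capture)  # per tCO2 avoided
```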

Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion

Procedia PDF Downloads 126
71 The Connection between Qom Seminaries and Interpretation of Sacred Sources in Ja‘farī Jurisprudence

Authors: Sumeyra Yakar, Emine Enise Yakar

Abstract:

Iran presents itself as Islamic, first and foremost, and thus it can be said that sharī’a is the political and social centre of the state. However, actual practice reveals distinct interpretations and understandings of the sharī’a. The research can be categorised within the framework of logic in Islamic law and theology. The first task of this paper will be to identify how the sharī’a is understood in Iran by mapping out how judges apply the law in their respective jurisdictions. The attention will then move from a simple description of the diversity of sharī’a understandings to the question of how that diversity relates to social concepts and cultures. This, of course, necessitates a brief exploration of Iran’s historical background, which will also allow for an understanding of sectarian influences and the significance of certain events. The main purpose is to reach an understanding of the process of applying sources to formulate solutions which are in accordance with sharī’a, and of how religious education is pursued by those who seek to become official judges. Ultimately, this essay will explore these attempts to gain an understanding by linking the practices to the secondary sources of Islamic law. It is important to emphasise that the cultural components of Islamic law must be compatible with the aims of Islamic law and its fundamental sources. The sharī’a consists of more than just legal doctrines (fiqh) and interpretive activities (ijtihād). Its contextual and theoretical framework reveals a close relationship with the cultural and historical elements of society. This has meant that its traditional reproduction over time has relied on being embedded into a highly particular form of life. Thus, as acknowledged by pre-modern jurists, the sharī’a encompasses a comprehensive approach to the requirements of justice in legal, historical and political contexts. In theological and legal areas that have the specific authority of tradition, Iran adheres to Shīa’ doctrine, and this explains why the Shīa’ religious establishment maintains a dominant position in matters relating to law and the interpretation of sharī’a. The statements and interpretations of this tradition are distinctly different from Sunnī interpretations, and so the use of different sources can be understood as the main reason for the discrepancies in the application of sharī’a between Iran and other Muslim countries. The sharī’a has often accommodated prevailing customs; moreover, it has developed legal mechanisms to allow for its adaptation to particular needs and circumstances in society. While jurists may operate within the realm of governance and politics, the moral authority of the sharī’a ensures that these actors legitimate their actions with reference to God’s commands. The Iranian regime enshrines the principle of vilāyāt-i faqīh (guardianship of the jurist), which enables jurists to solve the conflict between law as an ideal system in theory and law in practice. The paper aims to show how the religious educational system works in harmony with the governmental authorities through the concept of vilāyāt-i faqīh in Iran and contributes to the creation of religious custom in society.

Keywords: guardianship of the jurist (vilāyāt-i faqīh), imitation (taqlīd), seminaries (hawza), Shi’i jurisprudence

Procedia PDF Downloads 201
70 The Future Control Rooms for Sustainable Power Systems: Current Landscape and Operational Challenges

Authors: Signe Svensson, Remy Rey, Anna-Lisa Osvalder, Henrik Artman, Lars Nordström

Abstract:

The electric power system is undergoing significant changes. As a result, operation and control are becoming partly modified, more multifaceted and automated, and supplementary operator skills might be required. This paper discusses developing operational challenges in future power system control rooms, posed by the evolving landscape of sustainable power systems, driven in turn by the shift towards electrification and renewable energy sources. Based on a literature review, followed by interviews and a comparison with other related domains with similar characteristics, a descriptive analysis was performed from a human factors perspective. The analysis is meant to identify trends, relationships, and challenges. A power control domain taxonomy includes a temporal domain (planning and real-time operation) and three operational domains within the power system (generation, switching and balancing). Within each operational domain, there are different control actions, either in the planning stage or in real-time operation, that affect the overall operation of the power system. In addition to the temporal dimension, the control domains are divided in space between a multitude of different actors distributed across many different locations. A control room is a central location where different types of information are monitored and controlled, alarms are responded to, and deviations are handled by the control room operators. The operators’ competencies, teamwork skills and team shift patterns, as well as control system designs, are all important factors in ensuring efficient and safe electricity grid management. As the power system evolves with sustainable energy technologies, challenges arise. Questions are raised regarding whether the operators’ tacit knowledge, experience and operation skills of today are sufficient to make constructive decisions to solve modified and new control tasks, especially during disturbed operations or abnormalities. Which new skills need to be developed in planning and real-time operation to provide efficient generation and delivery of energy through the system? How should the user interfaces be developed to assist operators in processing the increasing amount of information? Are some skills at risk of being lost when the systems change? How should the physical environment and collaborations between different stakeholders within and outside the control room develop to support operator control? To conclude, the system change will provide many benefits related to electrification and renewable energy sources, but it is important to address the challenges operators face with increasing complexity. The control tasks will be modified, and additional operator skills are needed to perform efficient and safe operations. Also, the whole human-technology-organization system needs to be considered, including the physical environment, the technical aids and the information systems, the operators’ physical and mental well-being, as well as the social and organizational systems.

Keywords: operator, process control, energy system, sustainability, future control room, skill

Procedia PDF Downloads 60
69 Urban Security through Urban Transformation: Case of Saraycik District

Authors: Emir Sunguroglu, Merve Sunguroglu, Yesim Aliefendioglu, Harun Tanrivermis

Abstract:

Basic human needs range from physiological needs, such as food, water and shelter, to safety needs, such as security and protection from natural disasters and even urban terrorism, which remain unfulfilled even in urban areas where people live civilly in large communities. When these basic needs go unmet in urban life, they lead to a distinct set of crimes defined as urban crimes. Urban crimes mostly result from differences in socioeconomic conditions in society, and income inequality increases the tendency towards urban crime. Especially in slum areas and suburbs, urban crimes not only threaten public security but also affect the delivery of public services. It is highlighted that building urban security against the problems caused by urban crime is not achieved solely by involving urban security in the security of the community; it also requires juridical development and, concurrently, staying above a certain level of legal standards. The idea of urban transformation emerged as intervention through the demolition and rebuilding of the built environment to solve the unhealthy urban environment, inadequate infrastructure and socioeconomic problems that arose during the industrialization process. Considering the probability that the urbanization process drives citizens to commit crimes, the United Nations Commission on Human Security’s focus on this theme is considered a proper approach. In this study, the analysis of, and change in, security before, during and after urban transformation, which is one of the tools of the urbanization process, is discussed through the case of Sincan County Saraycik District. The study also aims to suggest improvements to current legislation on public safety, urban resilience, and urban transformation. Although Saraycik District lies in a developing county in Ankara, Turkey, the District compares negatively with both the county and the country from an urbanization perspective as well as in socioeconomic and demographic indicators. Compared with the county as a whole, the rates of reported intentional harm, burglary, libel and threat offenses and narcotic crimes are higher, and the District is defined as a 'crime hotspot'. Interviews with residents of Saraycik indicate that the greatest issue in the neighborhood is public order and security (82.44%). The District stands out for its negative aspects, especially the presence of unlicensed constructions, the occurrence of serious social issues such as crime and insecurity, and the complicated lives of inhabitants marked by poverty and low living standards. Additionally, the social structure, demographic properties, and crime and insecurity of the field have been addressed in this study. Consequently, it is claimed that urban crime rates are related to the level of education, employment and household income, the poverty trap, the physical condition of housing and structuration, the accessibility of public services, security, migration, and safety in terms of disasters, and it is emphasized that urban transformation is one of the most important tools for providing urban security.

Keywords: urban security, urban crimes, urban transformation, Saraycik district

Procedia PDF Downloads 278
68 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart’s constant physiological changes during EVHP, such as changes in coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable-Logic-Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system: it controls PF by providing and maintaining the desired centrifugal pump speed entered through the HMI, with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in a random order. The system’s PID performance in pressure control was assessed during 10 EVHP experiments using porcine hearts. Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system demonstrated significantly faster response than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and greater precision in reaching the desired RPM (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system had a median precision of ±1 mmHg error, and the median stabilizing times for MAP changes of 15 and 20 mmHg were 15 and 19.5 seconds, respectively. The novel PLC-based control system was 3 times faster with 60% less error than the EO for RPM-flow control, and in pressure control mode it demonstrated high precision and fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
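For illustration, the closed-loop pressure mode can be pictured as a standard discrete PID loop like the one sketched below. The gains, the sample time and the `read_pressure` / pump-speed interface are hypothetical stand-ins for the actual PLC implementation and tuning.

```python
# Minimal discrete PID sketch for a MAP set-point (illustrative, not the PLC code).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_map, measured_map):
        """Return a pump-speed correction (rpm) from the MAP error (mmHg)."""
        error = target_map - measured_map
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=8.0, ki=1.5, kd=0.2, dt=0.1)      # placeholder gains and 100 ms cycle
# Inside the control loop (pseudocode for the I/O the PLC would actually provide):
# new_rpm = current_rpm + pid.step(target_map=65.0, measured_map=read_pressure())
```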

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 148
67 Psychodiagnostic Tool Development for Measurement of Social Responsibility in Ukrainian Organizations

Authors: Olena Kovalchuk

Abstract:

How Ukrainian companies understand social responsibility issues is a controversial question. Thus, one practical application of social responsibility is the development of a diagnostic tool for educational, business or scientific purposes. The purpose of this research is therefore to develop a tool for the measurement of social responsibility in organizations. Methodology: A 21-item questionnaire, the “Organization Social Responsibility Scale”, was developed. This tool was adapted for the Ukrainian sample and based on the questionnaire “Perceived Role of Ethics and Social Responsibility”, which connects ethical and socially responsible behavior to different aspects of organizational effectiveness. After surveying the respondents, factor analysis was performed using the principal components method with orthogonal VARIMAX rotation. On the basis of the obtained results, the 21-item questionnaire was finalized (Cronbach’s alpha – 0.768; inter-item correlations – 0.34). Participants: 121 managers at all levels of Ukrainian organizations (57 males; 65 females) took part in the research. Results: Factor analysis revealed five ethical dilemmas concerning the compatibility of social responsibility and profit in Ukrainian organizations. Below we attempt to interpret them: — Social responsibility vs profit. Corporate social responsibility can be a way to reduce operational costs. A firm’s first priority is employees’ morale. Being ethical and socially responsible is the priority of the organization. The most heavily loaded question is "Corporate social responsibility can reduce operational costs". The significant effect of this factor is 0.768. — Profit vs social responsibility. Efficiency is much more important to a firm than ethics or social responsibility. Making a profit is the most important concern of a firm. The dominant question is "Efficiency is much more important to a firm than whether or not the firm is seen as ethical or socially responsible". The significant effect of this factor is 0.793. — A balanced combination of social responsibility and profit. An organization with a social responsibility policy is more attractive to its stakeholders. The most heavily loaded question is "Social responsibility and profitability can be compatible". The significant effect of this factor is 0.802. — The role of social responsibility in successful organizational performance. Understanding the value of social responsibility and business ethics; the well-being and welfare of society. The dominant question is "Good ethics is often good business". The significant effect of this factor is 0.727. — A global vision of social responsibility. Issues related to global social responsibility and sustainability; innovative approaches to poverty reduction; awareness of climate change problems; a global vision for successful business. The dominant question is "The overall effectiveness of a business can be determined to a great extent by the degree to which it is ethical and socially responsible". The significant effect of this factor is 0.842. The theoretical contribution: the prospect of the study is to develop a tool for measuring social responsibility in organizations and to test the questionnaire’s adequacy for the social and cultural context. Practical implications: the research results can be applied to the design of a training programme for business school students to form their global vision for successful business as well as the ability to solve ethical dilemmas in managerial practice. Researchers interested in social responsibility issues are welcome to join the project.
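The factor-analytic procedure described above was carried out in a standard statistics package; purely as an illustration, an equivalent principal-components analysis with VARIMAX rotation could be sketched in Python with the factor_analyzer package. The data frame below is synthetic, and its shape (121 respondents, 21 Likert items) simply mirrors the sample described in the abstract.

```python
# Illustrative only: principal-components extraction with VARIMAX rotation,
# run on synthetic Likert-scale data standing in for the real questionnaire answers.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
answers = pd.DataFrame(rng.integers(1, 6, size=(121, 21)),
                       columns=[f"item_{i + 1}" for i in range(21)])

fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
fa.fit(answers)
loadings = pd.DataFrame(fa.loadings_, index=answers.columns)
print(loadings.round(2))   # inspect which items load on which of the five factors
```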

Keywords: corporate social responsibility, Cronbach’s alpha, ethical behaviour, psychodiagnostic tool

Procedia PDF Downloads 340
66 Design of Smart Catheter for Vascular Applications Using Optical Fiber Sensor

Authors: Lamiek Abraham, Xinli Du, Yohan Noh, Polin Hsu, Tingting Wu, Tom Logan, Ifan Yen

Abstract:

In the field of minimally invasive surgery, smart medical instruments such as catheters and guidewires are typically used at a remote distance to gain access to the diseased artery, often negotiating tortuous, complex, and diseased vessels in the process. Three optical fiber sensors, each with a diameter of 1.5 mm and spaced 120° apart, are proposed to be mounted in a catheter-based pump device with a diameter of 10 mm. These sensors are configured to address the challenges surgeons face during insertion through curved major vessels such as the aortic arch; moreover, they provide information on rubbing against the vessel walls and on shape sensing. This study presents experimental and mathematical models of the optical fiber sensors with 2 degrees of freedom. Two eight-gear-shaped tubes made of 3D-printed thermoplastic polyurethane (TPU) are connected. The optical fiber sensors are mounted inside the first tube for protection from external light, and the TPU material serves as a catheter prototype. The second tube is used as a flat reflector for the light-intensity-modulation-based optical fiber sensors. The first tube is attached to a linear guide for insertion and withdrawal and can be manually turned by 45° by manipulating the tube gear. A 3D hard-material phantom that mimics the anatomical structure of the aortic arch was developed, and the tests were carried out in it. During the insertion of the sensors into the 3D phantom, datasets were obtained in terms of voltage, distance, and sensor position. These datasets reflect the light intensity modulation characteristics of the optical fiber sensors with a plane projection of the aortic arch shape. Mathematical modeling of the light intensity was carried out based on the projection plane and the experimental set-up. The performance of the system was evaluated in terms of its accuracy in navigating through the curvature and the information provided on the position of the sensors, by investigating 40 single insertions of the sensors into the 3D phantom. The experiments demonstrated that the sensors were effectively steered through the 3D phantom curvature and to the desired target references in both degrees of freedom. The performance of the sensors reflects the theory of light reflectance, whereby the smaller the radius of curvature, the more of the emitted LED light is reflected and received by the photodiode. The mathematical model results are in good agreement with the experimental results and with the operating principle of light intensity modulation in optical fiber sensors. A catheter prototype made of TPU material with three optical fiber sensors mounted inside has been developed that is capable of navigating through different radii of curvature with 2 degrees of freedom. The proposed system supports operators with pre-scan data to make maneuvering and bending through curved major vessels easier, more accurate, and safer. The mathematical model accurately fits the experimental results.
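The intensity-modulation principle invoked here, where less of the emitted light returns to the photodiode as the reflecting surface moves away, can be illustrated with a very simplified cone-overlap model. The formula, the core radius and the emission half-angle below are generic assumptions for a coaxial reflective fiber sensor, not the mathematical model developed in the paper.

```python
# Simplified illustration (assumed model): received intensity of a reflective
# intensity-modulated fiber sensor versus reflector distance. The reflected light
# cone appears to originate behind the reflector at twice the gap distance; the
# receiving core collects a fraction set by the ratio of core area to spot area.
import numpy as np

def received_intensity(d_mm, core_radius_mm=0.1, half_angle_rad=0.21, i0=1.0):
    spot_radius = core_radius_mm + 2.0 * d_mm * np.tan(half_angle_rad)
    return i0 * (core_radius_mm / spot_radius) ** 2

distances = np.linspace(0.0, 5.0, 51)          # reflector distance in mm
intensities = received_intensity(distances)    # decays as the gap opens up
```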

Keywords: intensity-modulated optical fiber sensor, mathematical model, plane projection, shape sensing

Procedia PDF Downloads 221
65 An Analysis of Preliminary Intervention for Developing to Promote Resiliency of Children Whose Parents Suffer Mental Illness

Authors: Sookbin Im, Myounglyun Heo

Abstract:

This study aims to analyze the composition and effects of a preliminary intervention to promote resiliency in children whose parents suffer from mental illness, the considerations arising from the program, and the development of a resiliency promotion program for the children of psychiatric patients. Participants in the preliminary intervention were recruited through a community mental health and social welfare center in a city: 10 children (eight girls and two boys) in the second to fifth grades of elementary school whose parents suffer from schizophrenia, depression, alcoholism, etc. The program was conducted in the seminar room of the community mental health and social welfare center from October to December 2015 and from July to September 2016. The elements of resiliency were identified by reviewing the literature, therapeutic activities to promote resiliency were composed, and the program (eight sessions of two hours, once a week) was applied twice. Each session consisted of playgroup activities, art activities, and role-playing with feedback, aimed at promoting self-awareness, self-efficacy, a positive outlook, the ability to solve problems, empathy for others, peer group acceptance, having goals and aspirations, and assertiveness. In addition, auxiliary managers, one for each child, acted as mentors and role models, and the children's behaviors were recorded through participatory observation. Four children quit the program because their school program schedules overlapped with it, so six children completed it. The children who completed the program became more active and positive, showed fewer compulsive behaviors, and expressed themselves more. The participants felt that the eight-session program was too short and regretted this. However, recruiting participants was difficult, and easily distracted children had a negative influence on the group activities. Based on these results, the program was developed as follows: it would consist of 11 sessions in total, with the first eight sessions made up of play, art activities, role-plays, and presentations to promote self-understanding, positiveness, giving meaning to experiences, emotional control, and interpersonal relations. In order to balance the various contents, methods such as structuring environments, storytelling, emotional coaching, and group feedback would be applied, and the ninth to eleventh sessions would be booster sessions consisting of optional activities for the children. This program is intended for children who attend school and engage in active verbal communication and interaction with peers; in particular, considering that effective development starts at around 10 years of age, it would target children in the third and fourth grades of elementary school. These results showed that the program was useful for improving key elements of resiliency such as positive thinking and impulse control. A resiliency-promoting program model and practical guidance, with comprehensive measurement methods (narratives, drawing, self-report questionnaires, behavioral observation), are suggested as necessary. It is also necessary to create a training program for the coaches or leaders who will operate this program in order to spread it for child health.

Keywords: children, mental, parents, resilience

Procedia PDF Downloads 108
64 Strategic Planning Practice in a Global Perspective: The Case of Guangzhou, China

Authors: Shuyi Xie

Abstract:

As a vital city in south China since ancient times, Guangzhou has been losing its leading role among rising neighboring cities, especially Hong Kong and Shenzhen, since the late 1980s, with the overloaded infrastructure and deteriorating urban environment of its old inner city. Fortunately, with the new expansion of its administrative area in 2000, the local municipality saw a great opportunity to solve a series of alarming urban problems. Thus, for the first time, strategic planning was introduced to China to provide a more convincing and scientific basis for a better urban future. Unlike traditional Chinese planning practices, which rigidly and dogmatically focused on future blueprints, the strategic planning of Guangzhou proceeded from analyzing practical challenges and opportunities towards establishing reasonable development objectives and proposing corresponding strategies. Moreover, in a pioneering move, the municipality invited five planning institutions to submit proposals; among these, the paper focuses on the one proposed by the China Academy of Urban Planning & Design, from its theoretical basis to its definition and analysis of problems, as well as its planning results, since it was closest to the subsequent municipal decisions and had a more far-reaching influence on the later practices of other Chinese cities. In particular, it demonstrated an innovative exploration of the role played by the urban development rate in deciding urban growth patterns ('spillover-reverberation' or 'leapfrog'). This ultimately established an unprecedented paradigm for deciding an appropriate future urban spatial structure, including its specific location, function and scale. Besides the proposal itself, this article highlights the role of interactions among actors, proposals, subsequent discussions, summaries and municipal decisions, especially the establishment of a rolling dynamic evaluation system for periodic reviews of implementation, the first attempt of its kind in China. Undoubtedly, the strategic planning of Guangzhou has brought considerable benefits, especially by opening the strategic mindset of many Chinese cities in the following years through the establishment of a flexible and dynamic planning mechanism that highlights interactions among multiple actors, with innovative and effective tools, methodologies and perspectives on regional, objective-oriented and comparative analysis. However, compared with some developed countries, strategic planning in China has only just started and has relied greatly on empirical studies rather than scientific analysis. Moreover, it still faces some controversy, for instance the gap among institutional proposals, final municipal decisions and implemented results, due to the lack of legal constraint. Also, how to improve public involvement in China, with its strictly top-down administrative system, is another urgent task. In the future, despite inevitable weaknesses, experiences and lessons from previous international practices, combined with specific Chinese situations and domestic practices, will help promote the further advancement of strategic planning in China.

Keywords: evaluation system, global perspective, Guangzhou, interactions, strategic planning, urban growth patterns

Procedia PDF Downloads 363
63 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has drawn the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains through cascading effects, focusing on timetables, rolling stock, and crew duties without taking infrastructure limits into account. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to share the available power optimally among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as train speed profiles, voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute most to the variation of the outputs. Factor fixing then allows the input variables that do not influence the outputs to be fixed at nominal values. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. The approach is tested on a simple railway system with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (with cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
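The Sobol-based factor prioritization and factor fixing described above can be illustrated with a small Monte Carlo estimator. The sketch below is not the authors' simulator: it uses a toy surrogate in place of the multiphysics railway model, with hypothetical inputs (departure spacing, speed reduction, number of trains) and bounds, and estimates first-order and total indices with the Saltelli/Jansen estimators.

```python
import numpy as np

# Toy surrogate standing in for the multiphysics railway simulation (assumption:
# a smooth function of departure spacing [s], speed reduction [%], number of trains).
def surrogate(x):
    spacing, speed_red, n_trains = x[:, 0], x[:, 1], x[:, 2]
    return 3.0 * spacing + 0.5 * speed_red**2 + 0.2 * n_trains + 0.1 * spacing * speed_red

rng = np.random.default_rng(0)
d, n = 3, 20000
lower = np.array([120.0, 0.0, 8.0])      # hypothetical lower bounds
upper = np.array([600.0, 30.0, 16.0])    # hypothetical upper bounds

# Two independent sample matrices A and B scaled to the input ranges.
A = lower + rng.random((n, d)) * (upper - lower)
B = lower + rng.random((n, d)) * (upper - lower)
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]), ddof=1)

S1, ST = np.zeros(d), np.zeros(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # A with column i taken from B
    fABi = surrogate(ABi)
    S1[i] = np.mean(fB * (fABi - fA)) / var          # first-order index (Saltelli 2010)
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index (Jansen 1999)

for name, s1, st in zip(["departure spacing", "speed reduction", "number of trains"], S1, ST):
    print(f"{name:18s}  S1={s1:5.2f}  ST={st:5.2f}")
```

Factor prioritization would keep the inputs with the largest total indices, while inputs whose total index is close to zero could be fixed at nominal values without materially changing the outputs.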

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 380
62 The Impact of Illegal Firearms Possession, Limited Security Staff and Porosity of Border on Human Security in Ipokia Local Government Area, Ogun State

Authors: Ogunmefun Folorunsho Muyideen, Aluko Tolulope Evelyn

Abstract:

One of the trending menaces faced in the world today centers on the porosity of borders and the proliferation of illegal weapons held without state authorization. The proliferation of weapons along porous borders remains a germane and unresolved question for developed and developing nations alike, owing to the crises the menace generates (loss of lives and property, traumatization, civil unrest, and regressive economic development). A mixed-methods design was adopted; the survey method was used to select communities (Oke-Odan, Ajilete, Illaise, Lanlate) in Ipokia Local Government as the sample frame. Multi-stage sampling was employed to break the site down into wards, streets, and house numbers before the questionnaires were administered randomly face to face, while purposive sampling was used to collect verbal information through in-depth interviews. The population of the site is 150,398, and a sample size of 399 was derived using Yamane's sample size formula. Of the structured questionnaires retrieved, 346 were found useful, while about 10 percent of the quantitative sample, amounting to 30 participants, were interviewed using the in-depth interview technique. The result of the first hypothesis shows a composite relationship between the variables tested (independent and dependent): the porosity of the border, illegal possession of guns, and limited security staff jointly predispose residents of the selected study site to insecurity. The result of the second hypothesis indicates that illegal gun possession (the independent variable) predicts business outcomes among residents of the study site, because sporadic gunfire depresses business activities in the area. The third result indicates that the porosity of borders (the independent variable) predicts the social bonding network, because a high level of insecurity erodes trust in communication among residents of the study area. The last question gives comprehensive meaning to one of the recommendations derived through systematic content analysis: of the 30 participants interviewed, 18 submitted that involving individuals in monitoring their communities will solve the problem, 7 opined that government agents should be trained for effective combat, 3 submitted that the fight belongs to both government and citizens, and 2 claimed that there must be an agreement between Nigeria and neighbouring countries on border security. International donors must strictly control the sale of weapons to unauthorized persons, and criminal cases must be treated with deterrence measures and target-hardening procedures through decoying and blending, stakeout, and sting tactics.

Keywords: human security, illegal weapons, porous borders, development

Procedia PDF Downloads 147
61 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal

Authors: C. Bateira, J. Fernandes, A. Costa

Abstract:

The Douro Demarcated Region (DDR) is the production region of Port wine. In the NE of Portugal, the strong incision of the Douro valley has developed very steep slopes organized in agricultural terraces, which have undergone an intense and deep transformation in order to allow mechanization of the work. The old terrace system, based on vertical dry-stone retaining walls, was replaced by terraces with earth embankments, which has led to widespread terrace instability. This instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to failure. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment failures. We used the shallow landslide stability model SHALSTAB, based on physical parameters such as cohesion (c'), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ρ), slope angle (α), and contributing areas computed by the Multiple Flow Direction (MFD) method. A terraced area can only be analysed with such models if very detailed information representative of the terrain morphology is available, since the slope angle and the contributing areas depend on it. This can be achieved using digital elevation models (DEM) of very high resolution (40 cm pixel), derived from a set of photographs taken from a flight at 100 m altitude with a pixel resolution of 12 cm; the slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is also derived from the DEM, supported by the assumption that the interflow, although not coincident with the superficial flow, shows important similarities with it. Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation: the analysis performed on the area showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering this, a DEM with 1 m resolution was used as the basis to model the real internal flow, and we assumed that the contributing area modelled by MFD at 1 m resolution is representative of the internal flow of the area. To address this, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken from a flight at 5 km altitude. Using these map combinations, we modelled several final maps of terrace instability and performed a validation process with a contingency matrix. The best final instability map results from combining the slope map derived from the 40 cm DEM with the MFD contributing-area map from the 1 m DEM, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, an Accuracy (ACC) of 0.53, a Precision (PPV) of 0.0004, and a TPR/FPR ratio of 2.06.
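As an illustration of the kind of grid computation a SHALSTAB-style analysis performs, the sketch below combines the infinite-slope stability criterion with a steady-state wetness term driven by the contributing area per unit contour length. It is a simplified stand-in rather than the authors' implementation: the parameter values and the small rasters are hypothetical, and a real run would read the 40 cm and 1 m DEM-derived slope and MFD contributing-area grids.

```python
import numpy as np

def stability_index(slope_rad, spec_area, cohesion, phi_rad, soil_depth,
                    soil_unit_weight, transmissivity, recharge,
                    water_unit_weight=9.81):
    """Infinite-slope factor of safety with steady-state wetness (SHALSTAB-style).

    slope_rad        : slope angle grid [rad]
    spec_area        : contributing area per unit contour length a/b [m] (e.g. from MFD)
    cohesion         : effective cohesion c' [kPa]
    phi_rad          : friction angle [rad]
    soil_depth       : soil depth z [m]
    soil_unit_weight : soil unit weight [kN/m3]
    transmissivity   : saturated transmissivity T [m2/day]
    recharge         : steady-state recharge q [m/day]
    """
    # Relative wetness h/z from steady-state hydrology, capped at full saturation.
    wetness = np.clip((recharge / transmissivity) * spec_area / np.sin(slope_rad), 0.0, 1.0)
    # Infinite-slope factor of safety.
    numerator = (cohesion + (soil_unit_weight - wetness * water_unit_weight)
                 * soil_depth * np.cos(slope_rad) ** 2 * np.tan(phi_rad))
    denominator = soil_unit_weight * soil_depth * np.sin(slope_rad) * np.cos(slope_rad)
    return numerator / denominator

# Hypothetical 3x3 grids standing in for DEM-derived rasters.
slope = np.deg2rad(np.array([[25, 30, 35], [30, 38, 42], [28, 33, 40]], dtype=float))
a_b = np.array([[20, 60, 150], [40, 200, 400], [30, 90, 250]], dtype=float)

fs = stability_index(slope, a_b, cohesion=5.0, phi_rad=np.deg2rad(32),
                     soil_depth=1.5, soil_unit_weight=18.0,
                     transmissivity=12.0, recharge=0.05)
unstable = fs < 1.0   # cells mapped as prone to instability
print(np.round(fs, 2))
print(unstable)
```

In a validation step, the boolean `unstable` map would be compared cell by cell with the failure inventory to fill the contingency matrix from which TPR, FPR, ACC, and precision are computed.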

Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards

Procedia PDF Downloads 156
60 Mechanical and Durability Characteristics of Roller Compacted Geopolymer Concrete Using Recycled Concrete Aggregate

Authors: Syfur Rahman, Mohammad J. Khattak

Abstract:

Every year a huge quantity of recycled concrete aggregate (RCA) is generated in the United States of America. Utilization of RCA can solve the storage problem, prevent environmental pollution, and reduce construction costs. However, due to the overall low strength and durability characteristics of RCA, its use is limited to applications such as landfill, low-strength base material, or replacement of a small percentage of virgin aggregate in Portland cement concrete. This study focuses on improving the strength and durability characteristics of RCA by introducing the concept of roller-compacted geopolymer concrete. In this research, roller-compacted geopolymer concrete (RCGPC) and roller-compacted cement concrete (RCC) mixtures containing 100% recycled concrete aggregate were developed, evaluated, and compared. Several selected RCGPC mixtures were investigated to determine the effect of mixture variables, including sodium hydroxide (NaOH) molar concentration and the sodium silicate (Na₂SiO₃) to sodium hydroxide (NaOH) ratio, on the strength, stiffness, and durability characteristics of the developed RCGPC. Sodium hydroxide (NaOH) and sodium silicate (Na₂SiO₃) were mixed in different ratios to synthesize the alkali activator. The American Concrete Pavement Association (ACPA) recommended RCC gradation was used, with a maximum nominal aggregate size of 19 mm and 4% fines passing the 0.075 mm sieve. The mixtures were made using NaOH molar concentrations of 8M and 10M, Na₂SiO₃ to NaOH ratios of 0 and 1 by mass, and 15% class F fly ash. Optimum alkali content and moisture content were determined for the RCGPC and RCC mixtures, respectively, using the modified Proctor test. Compressive strength, semi-circular bending beam strength, and dynamic modulus tests were conducted to evaluate the mechanistic characteristics of both mixtures. To determine the optimum curing conditions for RCGPC, the effects of different curing temperatures and curing durations on compressive strength were also studied. Sulphate attack and freeze-thaw tests were carried out to assess the durability properties of the developed mixtures, and X-ray diffraction (XRD) was used for morphology and microstructure analysis. The optimum moisture content results showed that the RCGPC required a high alkali content, mainly due to the high absorption capacity of the RCA. Mixtures with a Na₂SiO₃ to NaOH ratio of 1 yielded about 60% higher compressive strength than those with a ratio of 0. Further, the mixtures using 10M NaOH and an alkali ratio of 1 produced a compressive strength of about 28 MPa, around 33% higher than the 8M NaOH mixtures. Similar results were obtained for the elastic and dynamic moduli of the mixtures. On the other hand, the semi-circular bending beam strength remained the same for both the 8M and 10M NaOH geopolymer mixtures. The formation of new geopolymeric compounds and chemical bonds in the novel RCGPC mixtures was also identified using XRD analysis. The results of the mechanical and durability testing further revealed that RCGPC performed similarly to the RCC mixtures. Based on these results, the developed RCGPC mixtures using 100% recycled concrete could be used as a cost-effective solution for the construction of pavement structures.

Keywords: roller compacted concrete, geopolymer concrete, recycled concrete aggregate, concrete pavement, fly ash

Procedia PDF Downloads 117
59 Seismic Analysis of Vertical Expansion Hybrid Structure by Response Spectrum Method Concerning Disaster Management and Solving the Problems of Urbanization

Authors: Gautam, Gurcharan Singh, Mandeep Kaur, Yogesh Aggarwal, Sanjeev Naval

Abstract:

The present ground reality of human suffering bears witness to the consequences of wrong decisions taken throughout history; a strong, positive will and a sense of responsibility are what shape a sound civilization, which affects both itself and the whole world. Much of the present suffering reflects the failure of past decisions, the unplanned structure of Indian settlements, and rapid population growth, which together leave society exposed to many kinds of problems. India continues to suffer from disasters such as earthquakes, floods, droughts, and tsunamis, and has faced countless disaster deaths throughout its history. The focus of this paper is a disaster-resistant structure that addresses the problems of densely populated urban areas through high vertical expansion: a hybrid structure. We analyze a reinforced concrete hybrid structure in different seismic zones using the response spectrum method to calculate and compare seismic displacements and drifts; seismic analysis by this method is generally based on the dynamic analysis of the building. The analysis results show that the reinforced concrete building in seismic Zone V has the maximum peak storey shear, base shear, drift, and node displacement compared with the analytical results for the same building in seismic Zones III and IV. These results underline the need to follow the structural drawings strictly at the construction site when building a hybrid structure. The case study deals with a 10-storey vertical-expansion hybrid frame structure, 30 m in total height, in Zones III, IV, and V, with 0.45 m x 0.36 m columns and 0.6 m x 0.36 m beams; to make the structure more stable, bracing techniques such as mega bracing and V-shaped bracing are applied. If such structural drawings and measures had been followed by builders and contractors, lives could have been saved during the earthquake disaster at Bhuj (Gujarat State, India) on 26 January 2001, which resulted in more than 19,000 deaths. This kind of disaster-resistant structure has the capability to solve the problems of densely populated city areas through the utilization of area in a vertically expanded hybrid structure, and we urge the Government of India to make and implement plans that protect lives from future disasters rather than pursuing projects such as bullet trains.
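A minimal sketch of the zone comparison underlying the analysis is given below. It is not taken from the paper: it assumes the IS 1893 (Part 1) equivalent design base shear formulation commonly used for Indian buildings, with zone factors and the medium-soil spectral shape quoted from the 2002 edition as remembered here, so the values should be verified against the code edition in force. The seismic weight and period of the 10-storey frame are hypothetical.

```python
import numpy as np

# Assumed IS 1893 (Part 1), 2002-style values; verify against the current edition.
ZONE_FACTOR = {"III": 0.16, "IV": 0.24, "V": 0.36}

def sa_over_g_medium_soil(T):
    """Design spectral acceleration coefficient Sa/g for medium soil (5% damping)."""
    if T < 0.10:
        return 1.0 + 15.0 * T
    if T <= 0.55:
        return 2.5
    return 1.36 / T

def design_base_shear(zone, W_kN, T, I=1.0, R=5.0):
    """Vb = Ah * W with Ah = (Z/2) * (I/R) * (Sa/g)."""
    Z = ZONE_FACTOR[zone]
    Ah = (Z / 2.0) * (I / R) * sa_over_g_medium_soil(T)
    return Ah * W_kN

# Hypothetical 10-storey RC frame: seismic weight ~30,000 kN, fundamental period
# estimated from the empirical formula 0.075 * h^0.75 for h = 30 m.
T = 0.075 * 30.0 ** 0.75
W = 30000.0
for zone in ("III", "IV", "V"):
    print(f"Zone {zone}: Vb ~ {design_base_shear(zone, W, T):7.1f} kN")
```

The increasing zone factor from Zone III to Zone V is what drives the higher base shear, storey shear, drift, and node displacement reported for Zone V in the abstract.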

Keywords: history, irresponsibilities, unplanned social structure, humanity, hybrid structure, response spectrum analysis, drift, node displacement

Procedia PDF Downloads 178
58 Community Participation and Place Identity as Mediators on the Impact of Resident Social Capital on Support Intention for Festival Tourism

Authors: Nien-Te Kuo, Yi-Sung Cheng, Kuo-Chien Chang

Abstract:

Cultural festival tourism is now seen by many as an opportunity to facilitate community development because it has significant influences on the economic, social, cultural, and political aspects of local communities. Its potential as a tourist attraction has been recognized by governments as a useful tool to strengthen local economies. However, most community festivals in Taiwan are short-lived, often lasting only a few years or occasionally not making it past a one-off event. Researchers have suggested that most governments and other stakeholders do not recognize the importance of building a partnership with residents when developing community tourism; thus, sustainable community tourism development remains a key issue in the existing literature. The success of community tourism is related to the attitudes and lifestyles of local residents, and in order to maintain sustainable tourism, residents need to be seen as development partners. Residents' support intention for tourism development not only helps to increase awareness of local culture, history, the natural environment, and infrastructure, but also improves the interactive relationship between the host community and tourists. Furthermore, researchers have identified social capital theory as the core of sustainable community tourism development: the social capital of residents has been seen as a good way to address issues of tourism governance, to forecast residents' participation behavior, and to improve their support intention. In addition, previous studies have pointed out the role of community participation and place identity in increasing residents' support intention for tourism development. Community participation refers to how much residents take part in tourism development and is mainly influenced by individual interest, while a lack of place identity is one of the main reasons that community tourism becomes a mere formality and is not sustainable. Scholars have argued that the place identity of residents is the soul of community festivals: it conveys the community spirit to visitors and has significant impacts on tourism benefits and on residents' support intention in community tourism development. Although the importance of community participation and place identity has been confirmed by both governmental and non-governmental organizations, real-life execution still needs to be improved. This study used social capital theory to investigate the social structure among community residents, their participation levels in festival tourism, their degree of place identity, and their support intention for future community tourism development, together with the causal relationships among these factors in cultural festival tourism. A quantitative research approach was employed to examine the proposed model, and structural equation modeling was used to test the proposed hypotheses. This was a case study of the Kaohsiung Zuoying Wannian Folklore Festival, held in the Zuoying District of Kaohsiung City, Taiwan; the target population was residents who attended the festival. The results reveal significant correlations among social capital, community participation, place identity, and support intention. They also confirm that the impact of social capital on support intention was significantly mediated by community participation and place identity. Practical suggestions are provided for tourism operators and policy makers.
This work was supported by the Ministry of Science and Technology of Taiwan, Republic of China, under the grant MOST-105-2410-H-328-013.
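The mediation result reported above (community participation and place identity mediating the effect of social capital on support intention) would normally be estimated within the SEM itself; the minimal sketch below instead illustrates the same idea for a single mediator using two OLS regressions and a bootstrap of the indirect effect. The variable names and simulated data are hypothetical and are not the study's model or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 400  # roughly festival-survey sized, hypothetical

# Simulated standardized scores: social capital -> community participation -> support intention.
social_capital = rng.normal(size=n)
participation = 0.5 * social_capital + rng.normal(scale=0.8, size=n)
support = 0.2 * social_capital + 0.6 * participation + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x and y ~ x + m."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

point = indirect_effect(social_capital, participation, support)

# Percentile bootstrap of the indirect effect.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[i] = indirect_effect(social_capital[idx], participation[idx], support[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero is the usual evidence that the mediated (indirect) path is significant; the full study extends this logic to two mediators within a structural equation model.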

Keywords: community participation, place identity, social capital, support intention

Procedia PDF Downloads 304
57 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life: the bigger and more complex a problem, the harder it is to solve, and such problems are called NP-hard (nondeterministic polynomial time hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their ability to avoid local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications, and thanks to these features they can be easily embedded even in many hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach: it is based on chaos theory, which helps the underlying algorithm improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups, attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the diverse intelligence and sexual motivations of chimpanzees. However, the algorithm is less successful in convergence rate and in escaping local optimum traps when solving high-dimensional problems, and although it and some of its variants employ strategies to overcome these problems, these are observed to be insufficient. Therefore, this study describes a newly extended variant. In the algorithm called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure addresses the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming at success in solving global, complex, and constrained problems. The main contributions of this study are: (1) it improves the accuracy and addresses the slow convergence of ChOA; (2) it proposes new hybrid movement strategy models for the position updates of search agents; (3) it provides success in solving global, complex, and constrained problems; and (4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on 8 benchmark functions as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA); in addition, an improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
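To make the leader-driven, chaos-assisted search concrete, the sketch below shows a generic population metaheuristic loosely patterned after the ChOA family: four leaders stand in for the attacker, barrier, chaser, and driver, and a logistic chaotic map perturbs the position update. It is only an illustrative toy on the Sphere function, not the authors' Ex-ChOA, and it omits the hybrid movement models and dynamic switching mechanism described above.

```python
import numpy as np

def sphere(x):
    """Toy benchmark objective (Sphere function)."""
    return np.sum(x ** 2, axis=-1)

def chaotic_leader_search(obj, dim=10, pop=30, iters=200, lb=-10.0, ub=10.0, seed=1):
    rng = np.random.default_rng(seed)
    X = lb + rng.random((pop, dim)) * (ub - lb)    # random initial population
    chaos = rng.random(pop)                        # logistic-map state per agent

    for t in range(iters):
        fit = obj(X)
        leaders = X[np.argsort(fit)[:4]]           # attacker, barrier, chaser, driver analogues
        f = 2.0 - 2.0 * t / iters                  # control coefficient decreasing 2 -> 0
        chaos = 4.0 * chaos * (1.0 - chaos)        # logistic map, adds diversity

        for i in range(pop):
            new_pos = np.zeros(dim)
            for L in leaders:
                a = 2.0 * f * rng.random(dim) - f
                c = 2.0 * rng.random(dim)
                d = np.abs(c * L - chaos[i] * X[i])    # chaotic weight on the current agent
                new_pos += (L - a * d) / len(leaders)  # average pull toward the four leaders
            X[i] = np.clip(new_pos, lb, ub)

    fit = obj(X)
    best = np.argmin(fit)
    return X[best], fit[best]

best_x, best_f = chaotic_leader_search(sphere)
print("best objective value:", best_f)
```

In this family of methods, the decreasing coefficient shifts the search from exploration to exploitation, and the chaotic term is what the study above relies on to keep the population diverse and avoid the local-optimum trap.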

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 55
56 Design and Synthesis of an Organic Material with High Open Circuit Voltage of 1.0 V

Authors: Javed Iqbal

Abstract:

The growing need for energy in human society and the depletion of conventional energy sources demand a renewable, safe, abundant, low-cost, and omnipresent energy source. One of the most suitable ways to solve the foreseeable world energy crisis is to use the power of the sun, and photovoltaic devices are of wide interest as they can convert solar energy to electricity. Currently, the best-performing solar cells are silicon-based, but silicon cells are expensive, rigid in structure, and have long cost and energy payback times. Organic photovoltaic cells are cheap, flexible, and can be manufactured in a continuous process, making them an extremely favorable alternative. Organic photovoltaic cells absorb sunlight and convert it into electricity through the use of conductive polymers or small molecules to separate electrons and holes. A major challenge for these new organic photovoltaic cells is their efficiency, which is low compared with traditional silicon solar cells. To overcome this challenge, two straightforward strategies are usually considered: (1) reducing the band gap of molecular donors to broaden the absorption range, which results in a higher short-circuit current density (JSC); and (2) lowering the highest occupied molecular orbital (HOMO) energy of molecular donors so as to increase the open-circuit voltage (VOC) of the devices. Given the cost of chemicals, it is hard to try many materials on a trial basis; the best approach is to identify suitable candidates by screening before synthesis. For this purpose, we used a computational approach to design molecules based on our knowledge of organic chemistry and to determine their physical and electronic properties. In this study, we performed DFT calculations with different options targeting a high open-circuit voltage and, once the calculations gave suitable results, synthesized a novel D–π–A–π–D type low-band-gap small-molecule donor material (ZOPTAN-TPA). The arylene vinylene based bis(arylhalide) unit containing a cyanostilbene unit acts as the low-band-gap electron-accepting block and is coupled with triphenylamine electron-donating end groups. The motivation for choosing triphenylamine (TPA) as the capping donor lies in its important role in stabilizing the hole separated from an exciton and thus improving the hole-transporting properties of the material. A π-bridge (thiophene) is inserted between the donor and acceptor units to reduce the steric hindrance between them and to improve the planarity of the molecule. The ZOPTAN-TPA molecule features a low-lying HOMO level of -5.2 eV and an optical energy gap of 2.1 eV. Champion OSCs based on a solution-processed, non-annealed active-material blend of [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) and ZOPTAN-TPA in a mass ratio of 2:1 exhibit a power conversion efficiency of 1.9% and a high open-circuit voltage of over 1.0 V.
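The connection between a deep donor HOMO and a high open-circuit voltage can be illustrated with the widely used empirical Scharber-type estimate, Voc ~ (|E_HOMO,donor| - |E_LUMO,acceptor|)/e - 0.3 V. The snippet below is only an order-of-magnitude check, not part of the paper's DFT workflow; the PCBM LUMO value is an assumption (reported literature values vary between roughly -3.9 and -4.3 eV), and the 0.3 V term is the usual empirical loss offset.

```python
# Hedged back-of-the-envelope estimate (Scharber-type empirical relation), not the
# authors' DFT procedure. Assumed values are marked; reported PCBM LUMO levels vary.
def scharber_voc(homo_donor_eV, lumo_acceptor_eV, empirical_loss_V=0.3):
    """Voc ~ (|E_HOMO,donor| - |E_LUMO,acceptor|)/e - 0.3 V."""
    return abs(homo_donor_eV) - abs(lumo_acceptor_eV) - empirical_loss_V

homo_zoptan_tpa = -5.2   # eV, HOMO level reported in the abstract
lumo_pcbm = -3.9         # eV, assumed literature value (varies by report)

print(f"Estimated Voc ~ {scharber_voc(homo_zoptan_tpa, lumo_pcbm):.1f} V")
```

With these assumed numbers the estimate lands near 1.0 V, consistent with the measured open-circuit voltage reported for the ZOPTAN-TPA:PCBM devices.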

Keywords: high open circuit voltage, donor, triphenylamine, organic solar cells

Procedia PDF Downloads 222
55 Superparamagnetic Core Shell Catalysts for the Environmental Production of Fuels from Renewable Lignin

Authors: Cristina Opris, Bogdan Cojocaru, Madalina Tudorache, Simona M. Coman, Vasile I. Parvulescu, Camelia Bala, Bahir Duraki, Jeroen A. Van Bokhoven

Abstract:

The tremendous achievements in the development of society, embodied in ever more sophisticated materials and systems, are largely based on non-renewable resources. Consequently, after more than two centuries of intensive development, we are faced with, among other things, declining fossil fuel reserves, an increased impact of greenhouse gases on the environment, and economic effects caused by fluctuations in oil and mineral resource prices. The use of biomass may solve part of these problems, and recent analyses have demonstrated that, from the perspective of reducing carbon dioxide emissions, its valorization may bring important advantages, provided that genetically modified fast-growing trees or wastes are used as primary sources. In this context, the abundance and complex structure of lignin offer various possibilities for exploitation. However, its transformation into fuels or chemicals requires complex chemistry involving the cleavage of C-O and C-C bonds and the alteration of functional groups. Chemistry has offered various solutions in this respect, but despite intense work there are still many drawbacks limiting industrial application. The proposed technologies have mainly considered homogeneous catalysts, i.e., expensive noble-metal-based systems that are hard to recover at the end of the reaction, and the reactions were carried out in organic solvents that are no longer acceptable from an environmental point of view. To avoid these problems, the concept of this work was to investigate the synthesis of superparamagnetic core-shell catalysts for the fragmentation of lignin directly in the aqueous phase. The magnetic nanoparticles were covered with a nanoshell of an oxide (niobia) with a double role: to protect the magnetic nanoparticles and to generate a proper (acidic) catalytic function; on this composite, cobalt nanoparticles were deposited in order to catalyze the C-C bond splitting. For this purpose, we developed a protocol to prepare multifunctional, magnetically separable nanocomposite Co@Nb2O5@Fe3O4 catalysts. We also established an analytical protocol for the identification and quantification of the fragments resulting from lignin depolymerization in both the liquid and solid phases. The fragmentation of various lignins occurred on the prepared materials in high yields and with very good selectivity toward the desired fragments. Optimization of the catalyst composition indicated a cobalt loading of 4 wt% as optimal. Working at 180 °C and 10 atm H2, this catalyst allowed a lignin conversion of up to 60%, leading to a mixture containing over 96% C20-C28 and C29-C37 fragments, which were then completely fragmented to C12-C16 in a second stage. The investigated catalysts were completely recyclable, and no leaching of the constituent elements was detected by inductively coupled plasma optical emission spectrometry (ICP-OES).

Keywords: superparamagnetic core-shell catalysts, environmental production of fuels, renewable lignin, recyclable catalysts

Procedia PDF Downloads 313
54 Stock Prediction and Portfolio Optimization Thesis

Authors: Deniz Peksen

Abstract:

This thesis aims to predict the trend movement of stock closing prices and to maximize portfolio returns by utilizing the predictions. In this context, the study defines a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting, and Random Forest. Predicting the trend of stock prices has recently gained a significant role in making buy and sell decisions and in generating returns with investment strategies based on machine learning. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing (training data between 2002-06-18 and 2016-12-30, validation data between 2017-01-02 and 2019-12-31, testing data between 2020-01-02 and 2022-03-17). We define the hold-stock portfolio, the best-stock portfolio, and the USD-TRY exchange rate as benchmarks that we should outperform, and we compared the return of our machine-learning-based portfolio on the test data with the returns of these benchmarks. We assessed model performance with the help of ROC-AUC scores and lift charts, and we used Logistic Regression, Gradient Boosting, and Random Forest with a grid search approach to fine-tune hyperparameters. As a result of the empirical study, the uptrends and downtrends of the five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions and to generate a model-based portfolio, the model-based portfolio failed on the test dataset: model-based buy and sell decisions produced a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results are consistent with the Random Walk Theory, which states that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset; although we built several good models on the validation dataset, they did not carry over to the test data. We implemented Random Forest, Gradient Boosting, and Logistic Regression and found that the more complex models did not provide any advantage or additional performance compared with Logistic Regression; more complexity did not lead to better performance, so using a complex model is not the answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labeling approach did not solve the stock prediction problem, nor does it refute the Random Walk Theory for stock prices.
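The modelling setup described above (a 20-day-ahead trend label, a chronological train/validation/test split, grid search, and ROC-AUC evaluation) can be sketched as follows. This is a generic illustration rather than the thesis code: the price series is synthetic, the feature set is minimal, and the label rule (positive if the close 20 trading days ahead is higher) is an assumption about how the trend target was defined.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Synthetic daily closing prices standing in for a real stock series.
rng = np.random.default_rng(7)
dates = pd.bdate_range("2002-06-18", "2022-03-17")
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates)))), index=dates)

# Simple return/volatility features and a 20-trading-day-ahead trend label (assumed rule).
df = pd.DataFrame({"ret_1d": close.pct_change(),
                   "ret_5d": close.pct_change(5),
                   "ret_20d": close.pct_change(20),
                   "vol_20d": close.pct_change().rolling(20).std()})
df["target"] = (close.shift(-20) > close).astype(int)
df = df.dropna()

# Chronological split matching the periods quoted in the abstract.
train = df.loc[:"2016-12-30"]
valid = df.loc["2017-01-02":"2019-12-31"]
test = df.loc["2020-01-02":]

X_cols = ["ret_1d", "ret_5d", "ret_20d", "vol_20d"]
models = {
    "logreg": GridSearchCV(LogisticRegression(max_iter=1000),
                           {"C": [0.1, 1.0, 10.0]},
                           cv=TimeSeriesSplit(3), scoring="roc_auc"),
    "rf": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [100, 300], "max_depth": [3, 5]},
                       cv=TimeSeriesSplit(3), scoring="roc_auc"),
}
for name, gs in models.items():
    gs.fit(train[X_cols], train["target"])
    for split_name, split in [("valid", valid), ("test", test)]:
        auc = roc_auc_score(split["target"], gs.predict_proba(split[X_cols])[:, 1])
        print(f"{name:6s} {split_name:5s} ROC-AUC = {auc:.3f}")
```

On random-walk-like data such as this synthetic series, the test ROC-AUC tends to hover around 0.5, which mirrors the thesis finding that models tuned on the validation period failed to generalize to the test period.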

Keywords: stock prediction, portfolio optimization, data science, machine learning

Procedia PDF Downloads 58
53 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Traditional drug delivery systems, in which drugs are taken by patients in multiple doses at specified intervals, no longer meet the needs of modern drug delivery. Today we deal with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are produced with genetic engineering techniques, and most of these drugs are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers have sought ways to resolve its problems to a large extent, and these efforts led to controlled drug release systems, which have many advantages: using controlled release, the concentration of the drug in the body is kept at a defined level, and delivery is accomplished at a higher rate in a shorter time. Graphene is biodegradable, non-toxic, and, compared with carbon nanotubes, cheaper and more cost-effective for industrialization; in addition, the highly reactive and extensive surface of graphene sheets makes graphene easier to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Compared with the initial graphene, the resulting graphene oxide is heavier and carries carboxyl, hydroxyl, and epoxy groups; it is therefore very hydrophilic, dissolves easily in water, and forms a stable solution. Moreover, because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can react with other functional groups such as amines, esters, and polymers, attaching them and bringing new functionality to the graphene surface. In fact, the creation of hydroxyl, carboxyl, and epoxy groups (that is, graphene oxidation) is the first step toward creating other functional groups on the graphene surface. Chitosan is a natural polymer that does not cause toxicity in the body; due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. In this work, graphene oxide (GO) was covalently modified with chitosan (CS) and developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions and then chlorinated with oxalyl chloride to increase its reactivity toward amines. In the presence of chitosan, the amidation reaction was then performed to form amide linkages, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman spectroscopy, TGA, and SEM, and the loading and release capacity was determined by UV-visible spectroscopy. The loading results showed a high DOX adsorption capacity (99%), and pH-dependent release of DOX from the GO-CS nanosheets was identified at pH 5.3 and 7.4, with a faster release rate under acidic conditions.

Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin

Procedia PDF Downloads 102
52 Virtual Reality Applications for Building Indoor Engineering: Circulation Way-Finding

Authors: Atefeh Omidkhah Kharashtomi, Rasoul Hedayat Nejad, Saeed Bakhtiyari

Abstract:

Circulation paths and the indoor connection network of a building play an important role both in the daily operation of the building and during evacuation in emergency situations. The legibility of the paths for navigation inside the building is deeply connected with the human perceptual and cognitive system and with the way the surrounding environment is perceived. Human perception of space relies on the sensory systems operating non-linearly in a three-dimensional environment, so it should not be reduced in architectural design to a two-dimensional, linear issue. Advances in virtual reality (VR) technology have led to various applications, and architecture and building science can benefit greatly from these capabilities. Especially where the design solution requires a detailed and complete understanding of human perception of the environment and of behavioral responses, special attention to VR technologies should be a priority; way-finding in the indoor circulation network is a good example of such an application. Success in way-finding can be achieved if human perception of the route and the behavioral reaction are considered in advance and reflected in the architectural design. This paper discusses VR technology applications for improving way-finding in the indoor engineering of buildings. In a systematic review of a database consisting of numerous studies, four categories of VR applications for circulation way-finding were first identified: 1) data collection of key parameters, 2) comparison of the effect of each parameter in the virtual environment versus the real world (in order to improve the design), 3) comparison of experiment results across different VR devices and methods or against building simulation results, and 4) training and planning. Since the cost of the technical equipment and knowledge required to use VR tools limits their use across all design projects, priority building types for using VR during design are introduced based on case-study analysis. The results indicate that VR technology provides opportunities for designers to solve complex building design challenges in an effective and efficient manner. The environmental parameters and the architecture of the circulation routes (indicators such as route configuration, topology, signs, and structural and non-structural components) and the characteristics of each (metrics such as dimensions, proportions, color, transparency, and texture) are then classified for the VR way-finding experiments. Next, given human behavior and reactions in movement-related tasks, the necessity of scenario-based experiment design for using VR technology to improve the design and to receive feedback from test participants is described. The parameters related to scenario design are presented in a flowchart covering test design, data determination and interpretation, recording of results, analysis, errors, validation, and reporting. The design of the experimental environment is also discussed, covering equipment selection according to the scenario and the parameters under study, as well as creating the sense of presence in terms of place illusion, plausibility, and the illusion of body ownership.

Keywords: virtual reality (VR), way-finding, indoor, circulation, design

Procedia PDF Downloads 50
51 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear

Authors: Grant Bifolchi

Abstract:

As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of growing, paramount concern. Ghost Gear, also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG), represents one of the greatest threats to the world's oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental bodies and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations' ability to effectively manage the files and documents about the environmental problem further complicates matters. In Ghost Gear monitoring and management, organizations face additional complexities: whether it is data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, or enforcement hampered by disparate data, all of these factors place massive strains on organizations struggling to protect the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, BCom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal. • Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress, including acknowledging receipt of the report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and analytics to store and analyze data to optimize organizational performance, maintain constant visibility of report status, user accountability, scheduling and management, and foster governmental transparency. • Maintain compliance reporting through highly defined, detailed and automated reports, enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.

Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology

Procedia PDF Downloads 77