Search results for: game engine
131 Tribological Behavior of Hybrid Nanolubricants for Internal Combustion Engines
Authors: José M. Liñeira Del Río, Ramón Rial, Khodor Nasser, María J.G. Guimarey
Abstract:
The need to develop new lubricants that offer better anti-friction and anti-wear performance in internal combustion vehicles is one of the great challenges of lubrication in the automotive field. The addition of nanoparticles has emerged as a possible solution and, combined with the lubricating power of ionic liquids, may become one of the alternatives to reduce friction losses and wear of the contact surfaces under the conditions to which tribo-pairs are subjected, especially in the contact between the piston rings and the cylinder liner surface. In this study, the improvement in the tribological performance of SAE 10W-40 engine oil after the addition of magnesium oxide (MgO) nanoadditives and two different phosphonium-based ionic liquids (ILs) was investigated. The nanoparticle characterization was performed by means of transmission electron microscopy (TEM), Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The friction coefficients and wear parameters of the formulated oil, modified with 0.01 wt.% MgO and 1 wt.% ILs, were measured and compared with those of the neat 10W-40 oil using a ball-on-three-pins tribometer and a 3D optical profilometer, respectively. Further analysis of the worn surfaces was carried out by Raman spectroscopy and SEM, illustrating the formation of protective IL and MgO tribo-films by the hybrid additives. In friction tests with sliding steel-steel tribo-pairs, the IL3-based hybrid nanolubricant decreased the friction coefficient and wear volume by 7% and 59%, respectively, in comparison with the neat SAE 10W-40, while the one based on IL1 only achieved reductions of 6% and 39%, respectively. Thus, the tribological characterization also revealed that the addition of MgO and IL3 has a positive synergy with the commercial lubricant, adequately meeting the requirements for use in internal combustion engines.
In summary, this study has shown that the addition of ionic liquids to MgO nanoparticles can improve the stability and lubrication behavior of the MgO nanolubricant, and it encourages further investigation into using nanoparticle additives with green solvents such as ionic liquids to protect the environment as well as prolong the lifetime of machinery. The improvement in the lubricant properties was attributed to the following wear mechanisms: the formation of a protective tribo-film and the ability of nanoparticles to fill valleys between asperities, thereby effectively smoothing the shearing surfaces.
Keywords: lubricant, nanoparticles, phosphonium-based ionic liquids, tribology
Procedia PDF Downloads 82
130 A Method and System for Secure Authentication Using One Time QR Code
Authors: Divyans Mahansaria
Abstract:
User authentication is an important security measure for protecting confidential data and systems. However, vulnerability while authenticating into a system has significantly increased. Thus, necessary mechanisms must be deployed during the authentication process to safeguard users from attacks. The proposed solution implements a novel authentication mechanism to counter various forms of security breach attacks, including phishing, Trojan horse, replay, key logging, Asterisk logging, shoulder surfing, brute force search, and others. A QR code (Quick Response Code) is a type of matrix barcode, or two-dimensional barcode, that can be used for storing URLs, text, images, and other information. In the proposed solution, during each new authentication request, a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to a plurality of elements and stored within the QR code. The mapping between the generic information and the plurality of elements is randomized at each new login, so the QR code generated for each authentication request is for one-time use only. In order to authenticate into the system, the user needs to decode the QR code using QR code decoding software, which needs to be installed on a handheld mobile device such as a smartphone or personal digital assistant (PDA). On decoding the QR code, the user is presented with the mapping between the generic piece of information and the plurality of elements, from which the user derives cipher secret information corresponding to his/her actual password. In place of the actual password, the user then uses this cipher secret information to authenticate into the system. The authentication terminal receives the cipher secret information and uses a validation engine to decipher it. If the entered secret information is correct, the user is granted access to the system.
A usability study was carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adapt to. A mathematical analysis of the time taken to carry out a brute force attack on the proposed solution showed that the solution is almost completely resistant to brute force attack. Today's standard methods for authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in controlling the various types of authentication-related attacks, especially in a networked computer environment where the use of a username and password for authentication is common.
Keywords: authentication, QR code, cipher / decipher text, one time password, secret information
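The abstract describes the scheme only at a high level; the exact alphabet, mapping structure, and derivation rule are not specified. A minimal sketch of the core idea, using a hypothetical character-substitution mapping as the "plurality of elements" and re-randomizing it for every login, might look like this:

```python
import secrets
import string

# Hypothetical generic information set; the paper does not specify one.
ALPHABET = string.ascii_lowercase + string.digits

def new_login_mapping():
    """Generate a fresh random substitution for each authentication request.
    In the paper's scheme this one-time mapping would be encoded into a QR code."""
    shuffled = list(ALPHABET)
    secrets.SystemRandom().shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))

def encipher(password, mapping):
    """What the user does after decoding the QR code: substitute each
    password character via the one-time mapping."""
    return "".join(mapping[c] for c in password)

def validate(cipher_text, password, mapping):
    """Server-side validation engine: invert the mapping and compare."""
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse[c] for c in cipher_text) == password

mapping = new_login_mapping()
otp = encipher("secret42", mapping)
assert validate(otp, "secret42", mapping)
```

Because the mapping changes on every request, a keylogger or shoulder surfer who captures the cipher text gains nothing usable for the next login, which is the property the paper's analysis relies on.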
Procedia PDF Downloads 268
129 Private Technology Parks – The New Engine for Innovation Development in Russia
Authors: K. Volkonitskaya, S. Lyapina
Abstract:
According to the National Monitoring Centre of innovation infrastructure, scientific and technical activities and regional innovation systems, 166 technology parks had been established in Russia by December 2014. Comparative analysis of technology park performance in Russia, the USA, Israel, and the European Union countries revealed significantly lower key performance indicators for Russian innovation infrastructure institutes. The largest deviations were found in the following indicators: new products and services launched, number of companies and jobs, and amount of venture capital invested. The lower performance indicators of Russian technology parks can be partly explained by slack demand for national high-tech products and services, a lack of qualified specialists in the sphere of innovation management, and insufficient cooperation between different innovation infrastructure institutes. In spite of all the constraints in the innovation segment of the Russian economy, in 2010-2012 private investors for the first time proceeded to finance the building of technology parks. The general purpose of the research is to answer two questions: why, despite the significant investment risks, do private investors continue to implement such comprehensive infrastructure projects in Russia, and is the business model of a private technology park more efficient than the strategies of state innovation infrastructure institutes? The goal of the research was achieved by analyzing the business models of private technology parks in Moscow, Kaliningrad, Astrakhan, and Kazan. The research was conducted in two stages: an online survey of key performance indicators of private and state Russian technology parks, and in-depth interviews with top managers and investors who had already built private technology parks by 2014 or were going to complete the investment stage in 2014-2016. The anticipated results are intended to identify the reasons for efficient and inefficient technology park performance.
Furthermore, recommendations for improving the efficiency of state technology and industrial parks were formulated. In particular, the recommendations address the following issues: networking with other infrastructural institutes, services and infrastructure provided, and mechanisms of public-private partnership and investment attraction. In general, intensive study of private technology park performance and the development of effective mechanisms of state support can have a positive impact on the growth rates of the number of Russian technology, industrial, and science parks.
Keywords: innovation development, innovation infrastructure, private technology park, public-private partnership
Procedia PDF Downloads 436
128 Online Course of Study and Job Crafting for University Students: Development Work and Feedback
Authors: Hannele Kuusisto, Paivi Makila, Ursula Hyrkkanen
Abstract:
Introduction: There have been arguments about the skills university students should have when they graduate. Current trends suggest that, as well as specific job-related skills, graduates need problem-solving, interaction, and networking skills as well as self-management skills. Skills required in working life are also considered in the Finnish national project called VALTE (short for 'prepared for working life'), which involves 11 Finnish school organizations. As one result of this project, a five-credit independent online course in study and job engagement as well as in study and job crafting was developed at Turku University of Applied Sciences. The aim of the oral or e-poster presentation is to present the online course developed in the project. The purpose of this abstract is to present the development work of the online course and the feedback received from the pilots. Method: As the University of Turku is the leading partner of the VALTE project, the collaborative education platform ViLLE (https://ville.utu.fi, developed by the University of Turku) was chosen as the online platform for the course. Various exercise types with automatic assessment were used, for example quizzes, multiple-choice questions, classification exercises, gap-filling exercises, model answer questions, self-assessment tasks, case tasks, and collaboration in Padlet. In addition, free material and free platforms on the Internet were used (YouTube, Padlet, TodaysMeet, and Prezi), as well as net-based questionnaires on study engagement and study crafting (made with Webropol). Three teachers with long teaching experience (including in job crafting and online pedagogy) and three students working as trainees in the project developed the content of the course. The online course was piloted twice in 2017 as an elective course for students at Turku University of Applied Sciences, a higher education institution of about 10,000 students.
After both pilots, feedback from the students was gathered and the online course was developed further. Results: As a result, a functional five-credit independent online course suitable for students of different educational institutions was developed. The student feedback shows that students themselves think that the online course really enhanced their job and study crafting skills. After the course, 91% of the students considered their knowledge of job and study engagement as well as job and study crafting to be at a good or excellent level. About two-thirds of the students were going to exploit their knowledge significantly in the future. Students appreciated the variability and the game-like feel of the exercises as well as the opportunity to study online at the time and place they chose themselves. On a five-point scale (1 being poor and 5 being excellent), the students graded the clarity of the ViLLE platform as 4.2, the functionality of the platform as 4.0, and the ease of operating it as 3.9.
Keywords: job crafting, job engagement, online course, study crafting, study engagement
Procedia PDF Downloads 153
127 Experimental Analysis of Supersonic Combustion Induced by Shock Wave at the Combustion Chamber of the 14-X Scramjet Model
Authors: Ronaldo de Lima Cardoso, Thiago V. C. Marcos, Felipe J. da Costa, Antonio C. da Oliveira, Paulo G. P. Toro
Abstract:
The 14-X is a strategic project of the Brazilian Air Force Command to develop a technological demonstrator of a hypersonic air-breathing propulsion system based on supersonic combustion, designed to fly in the Earth's atmosphere at an altitude of 30 km and Mach number 10. The 14-X is under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies. The program began in 2007 and was planned in three stages: development of the waverider configuration, development of the scramjet configuration, and finally ground tests in the hypersonic shock tunnel T3. The model based on the 14-X scramjet was installed in the test section of the hypersonic shock tunnel so as to reproduce and test the flight conditions at the inlet of the combustion chamber. Experimental studies in a hypersonic shock tunnel require special techniques for data acquisition. To measure the pressure along the geometry of the experimental model, 30 PCB® model 122A22 pressure transducers were used. The piezoelectric crystals of such a transducer produce an electric current when subjected to pressure variations (PCB® Piezotronics, 2016). The transducer signals were read with an oscilloscope. After the studies had begun, it was observed that the pressure inside the combustion chamber was lower than expected. One solution to improve the pressure inside the combustion chamber was to install an obstacle to provide higher temperature and pressure. Emission spectroscopy was selected to confirm whether combustion occurred. The region analyzed by the emission spectroscopy system is the edge of the obstacle installed inside the combustion chamber.
The emission spectroscopy technique was used to observe the emission of OH*, confirming whether combustion of the mixture of atmospheric air at supersonic speed and the hydrogen fuel occurred inside the combustion chamber of the model. This paper shows the results of experimental studies of supersonic combustion induced by shock wave performed at the Hypersonic Shock Tunnel T3 using the 14-X scramjet model. It also provides important data from the combustion studies using the model based on the engine of the 14-X (second stage of the 14-X Program), indicating possible corrections to be made in the next stages of the program or in other experimental models.
Keywords: 14-X, experimental study, ground tests, scramjet, supersonic combustion
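To give a sense of the flight condition the tunnel must reproduce, the velocity corresponding to Mach 10 at 30 km can be estimated from the speed of sound at that altitude. The temperature value below is taken from the U.S. Standard Atmosphere, an assumption not stated in the abstract:

```python
import math

# Flight condition from the 14-X programme: Mach 10 at 30 km altitude.
# gamma and R are for air; T at 30 km is from the U.S. Standard Atmosphere.
# These constants are assumptions for illustration, not values from the paper.
GAMMA = 1.4      # ratio of specific heats for air
R = 287.05       # specific gas constant of air, J/(kg K)
T_30KM = 226.5   # static temperature at 30 km, K

def flight_speed(mach, temperature_k):
    """Flight velocity for a given Mach number: v = M * sqrt(gamma * R * T)."""
    return mach * math.sqrt(GAMMA * R * temperature_k)

v = flight_speed(10.0, T_30KM)
print(f"Mach 10 at 30 km is roughly {v:.0f} m/s ({v * 3.6:.0f} km/h)")
```

The result, on the order of 3 km/s, illustrates why only impulse facilities such as the T3 shock tunnel can reproduce these inlet conditions on the ground.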
Procedia PDF Downloads 387
126 The Effects of Collaborative Videogame Play on Flow Experience and Mood
Authors: Eva Nolan, Timothy Mcnichols
Abstract:
Gamers collectively spend over 3 billion hours a week playing video games, which is arguably not nearly enough time to indulge in the many benefits gaming has to offer. Much of the previous research on video gaming is centered on the effects of playing violent video games and the negative impacts they have on the individual. However, there is a dearth of research in the area of non-violent video games, specifically the emotional and cognitive benefits playing non-violent games can offer individuals. Current research in the area of video game play suggests there are many benefits to playing for an individual, such as decreasing symptoms of depression, decreasing stress, increasing positive emotions, inducing relaxation, decreasing anxiety, and particularly improving mood. One suggestion as to why video games may offer such benefits is that they possess ideal characteristics to create and maintain flow experiences: the subjective experience in which an individual obtains a heightened and improved state of mind while engaged in a task where a balance of challenge and skill is found. Many video games offer a platform for collaborative gameplay, which can enhance the emotional experience of gaming through the feeling of social support and social inclusion. The present study was designed to examine the effects of collaborative gameplay and flow experience on participants' perceived mood. To investigate this phenomenon, a between-subjects design was used in which forty participants were randomly divided into two groups engaging in either solo or collaborative gameplay. Each group contained an even number of frequent gamers and non-frequent gamers. Each participant played 'The Lego Movie Videogame' on the PlayStation 4 console. Participants' levels of flow experience and perceived mood were measured by the Flow State Scale (FSS) and the Positive and Negative Affect Schedule (PANAS). The following research hypotheses were investigated: (i.)
participants in the collaborative gameplay condition will experience higher levels of flow experience and higher levels of mood than those in the solo gameplay condition; (ii.) participants who are frequent gamers will experience higher levels of flow experience and higher levels of mood than non-frequent gamers; and (iii.) there will be a significant positive relationship between flow experience and mood. If the anticipated findings are supported, this suggests that engaging in collaborative gameplay can be beneficial for an individual's mood and that experiencing a state of flow can also enhance an individual's mood. Hence, collaborative gaming can be beneficial in promoting positive emotions (higher levels of mood) by engaging an individual's flow state.
Keywords: collaborative gameplay, flow experience, mood, games, positive emotions
Procedia PDF Downloads 335
125 Preferences of Electric Buses in Public Transport; Conclusions from Real Life Testing in Eight Swedish Municipalities
Authors: Sven Borén, Lisiana Nurhadi, Henrik Ny
Abstract:
From a theoretical perspective, electric buses can be more sustainable and cheaper than fossil-fuelled buses in city traffic. The authors have not found other studies based on actual urban public transport in a Swedish winter climate. Moreover, existing noise measurements for buses on the European market were found to be outdated. The aim of this follow-up study was therefore to test, and possibly verify, in a real-life environment how energy-efficient and silent electric buses are, and then to conclude whether electric buses are preferable for use in public transport. The Ebusco 2.0 electric bus, fitted with a 311 kWh battery pack, was used, and the tests were carried out during November 2014-April 2015 in eight municipalities in the south of Sweden. Six tests took place in urban traffic and two took place in a more rural traffic setting. The energy use for propulsion was measured via logging of the internal system in the bus and via an external charging meter. The average energy use turned out to be 8% less (0.96 kWh/km) than assumed in the earlier theoretical study. This rate allows for a 320 km range in public urban traffic. The interior of the bus was kept warm by a diesel heater (biodiesel will probably be used in a future operational traffic situation), which used 0.67 kWh/km in January. This verified that electric buses can be up to 25% cheaper when used in public transport in cities for about eight years. The noise was found to be lower, primarily during acceleration, than for buses with combustion engines in urban bus traffic. According to our surveys, most passengers and drivers appreciated the silent and comfortable ride and preferred electric buses over combustion engine buses. Bus operators and passenger transport executives were also positive about starting to use electric buses for public transport. The operators did, however, point out that procurement processes need to account for eventual risks regarding this new technology, along with personnel education.
The study revealed that it is possible to establish a charging infrastructure for almost all of the studied bus lines. However, the design of a charging infrastructure for each municipality requires further investigation, including electric grid capacity analysis, smart location of charging points, and tailored schedules to allow fast charging. In conclusion, electric buses proved to be a preferable alternative for all stakeholders involved in public bus transport in the studied municipalities. However, in order for electric buses to be a prominent support for sustainable development, they need to be charged either by stand-alone units or via an expansion of the electric grid, and the electricity should be generated from new renewable sources.
Keywords: sustainability, electric, bus, noise, greencharge
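The figures reported above can be cross-checked with simple arithmetic from the 311 kWh pack and the measured 0.96 kWh/km; the electric-heating variant below is a hypothetical scenario added for illustration, since the study used a diesel heater:

```python
# Figures taken from the abstract.
BATTERY_KWH = 311.0        # Ebusco 2.0 battery pack
DRIVE_KWH_PER_KM = 0.96    # measured average propulsion energy use
HEATER_KWH_PER_KM = 0.67   # diesel heater consumption, January figure

# Range on propulsion energy alone, consistent with the ~320 km reported.
range_km = BATTERY_KWH / DRIVE_KWH_PER_KM
print(f"Propulsion-only range: {range_km:.0f} km")

# Hypothetical: if cabin heating were drawn from the same battery instead
# of the diesel heater, the winter range would shrink considerably.
range_with_heater_km = BATTERY_KWH / (DRIVE_KWH_PER_KM + HEATER_KWH_PER_KM)
print(f"Range with electric heating: {range_with_heater_km:.0f} km")
```

The contrast between the two figures helps explain the design choice of a separate (bio)diesel heater in winter operation.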
Procedia PDF Downloads 342
124 Systemic Functional Linguistics in the Rhetorical Strategies of Persuasion: A Longitudinal Study of Transitivity and Ergativity in the Rhetoric of Saras' Sustainability Reports
Authors: Antonio Piga
Abstract:
This study explores the correlation between Systemic Functional Linguistics (SFL) and Critical Discourse Analysis (CDA) as tools for analysing the evolution of rhetoric in the communicative strategies adopted in a company's reports on social and environmental responsibility. In more specific terms, transitivity and ergativity, concepts from SFL viewed through the lens of CDA, are employed as theoretical means for a longitudinal analysis of the communicative strategies employed by Saras SpA before and during the Covid-19 pandemic crisis. Saras is an Italian joint-stock company operating in oil refining and power generation. The qualitative and quantitative linguistic analysis, carried out with the Sketch Engine software, aims to identify and explain how rhetoric, and ideology, is constructed and presented through language use in Saras SpA Sustainability Reports. Specific focus is given to communication strategies addressed to local and global communities and stakeholders in the years immediately before and during the Covid-19 pandemic. The rationale behind the study lies in the fact that 2020 and 2021 were among the most difficult years since the end of World War II. Lives were abruptly turned upside down by the pandemic, which had grave negative effects on people's health and on the economy. The result was a threefold crisis involving health, the economy, and social tension, with oil refining being one of the hardest-hit sectors due to the general reduction in mobility and oil consumption brought about by the virus-fighting measures.
Emphasis is placed on the construction of rhetorical strategies before and during the pandemic crisis using the representational processes of transitivity and ergativity (SFL), thus revealing the close relationship between the use of language, in terms of Social Actors and the semantic roles of syntactic transformation, on the one hand, and ideological assumptions on the other. The results show that linguistic decisions regarding transitivity and ergativity choices play a crucial role in how effectively writing achieves its rhetorical objectives in terms of spreading and maintaining dominant and implicit ideologies and underlying persuasive actions, and that some ideological motivation is perpetuated (if not overtly or subtly strengthened) in the social-environmental reports issued in the midst of the Covid-19 pandemic crisis.
Keywords: systemic functional linguistics, sustainability, critical discourse analysis, transitivity, ergativity
Procedia PDF Downloads 107
123 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. It has been observed that AI-based algorithms are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which assists in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework and developed in Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
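MADIK itself is built in Java on the Java Agent Development Framework, and the abstract does not detail its CBR implementation. Purely as an illustration of the retrieve-and-reuse step of case-based reasoning used to classify an investigation, a sketch with entirely hypothetical case features could look like this:

```python
# Minimal case-based reasoning sketch: classify a new investigation by
# retrieving the most similar past case and reusing its label. The feature
# names, counts, and labels below are hypothetical, not from the paper.
PAST_CASES = [
    ({"executables": 1, "deleted_files": 0, "network_captures": 1}, "malware"),
    ({"executables": 0, "deleted_files": 9, "network_captures": 0}, "data theft"),
    ({"executables": 0, "deleted_files": 1, "network_captures": 5}, "intrusion"),
]

def similarity(a, b):
    """Inverse Manhattan distance over the shared feature keys."""
    dist = sum(abs(a[k] - b[k]) for k in a)
    return 1.0 / (1.0 + dist)

def classify(new_case):
    """Retrieve the stored case most similar to the new one; reuse its label."""
    best_features, best_label = max(
        PAST_CASES, key=lambda case: similarity(new_case, case[0]))
    return best_label

print(classify({"executables": 2, "deleted_files": 1, "network_captures": 1}))
```

A full CBR cycle would also revise the retrieved solution and retain the new case, steps omitted here for brevity.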
Procedia PDF Downloads 212
122 Risk Based Maintenance Planning for Loading Equipment in Underground Hard Rock Mine: Case Study
Authors: Sidharth Talan, Devendra Kumar Yadav, Yuvraj Singh Rajput, Subhajit Bhattacharjee
Abstract:
The mining industry is known for its appetite to spend sizeable capital on mine equipment. However, in the current scenario, the industry is challenged by the daunting factors of non-uniform geological conditions, uneven ore grade, uncontrollable and volatile mineral commodity prices, and the ever-increasing quest to optimize capital and operational costs. Thus, equipment reliability and maintenance planning play a significant role in augmenting equipment availability for the operation and, in turn, boosting mine productivity. This paper presents the Risk Based Maintenance (RBM) planning conducted on mine loading equipment, namely Load Haul Dumpers (LHDs), at the Sindesar Khurd Mine, an underground zinc and lead mine situated in Dariba, Rajasthan, India, operated by Hindustan Zinc Limited, a subsidiary of Vedanta Resources Ltd. The mining equipment at the location is maintained by the Original Equipment Manufacturers (OEMs), namely Sandvik and Atlas Copco, who carry out the maintenance and inspection operations for the equipment. The downtime data extracted for the equipment fleet over the period of 6 months from 1st January 2017 until 30th June 2017 revealed that three downtime issues, related to the Engine, Hydraulics, and Transmission, contributed significantly across the entire loading equipment fleet, as substantiated by Pareto analysis. Further scrutiny of these factors through Bubble Matrix Analysis revealed the major influence of overheating, no-load-taken (NLT) issues, gear-changing issues, and hose puncture and leakage issues. Utilizing the equipment-wise analysis of all the downtime factors obtained, the spares consumed, and the alarm logs extracted from the machines, technical design changes in the equipment and a pre-shift critical alarms checklist were proposed for equipment maintenance.
This analysis is beneficial in allowing OEMs or mine management to focus on the critical issues hampering the reliability of mine equipment and to design the maintenance strategies necessary to mitigate them.
Keywords: bubble matrix analysis, LHDs, OEMs, Pareto chart analysis, spares consumption matrix, critical alarms checklist
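The Pareto step described above can be sketched in a few lines: rank downtime causes, accumulate their share, and cut at the classic 80% threshold. The hour figures below are hypothetical, since the abstract reports only that Engine, Hydraulics, and Transmission dominated:

```python
# Pareto analysis sketch for LHD downtime categories.
# The hour values are illustrative, not data from the study.
downtime_hours = {
    "Engine": 420, "Hydraulics": 310, "Transmission": 260,
    "Electrical": 90, "Brakes": 60, "Tyres": 40,
}

total = sum(downtime_hours.values())
cumulative = 0.0
vital_few = []
for cause, hours in sorted(downtime_hours.items(), key=lambda kv: -kv[1]):
    cumulative += 100.0 * hours / total
    vital_few.append(cause)
    if cumulative >= 80.0:   # classic 80% cut-off of a Pareto chart
        break

print(f"{cumulative:.1f}% of downtime comes from: {', '.join(vital_few)}")
```

With data of this shape, the "vital few" causes that emerge are exactly the ones maintenance planning should target first.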
Procedia PDF Downloads 153
121 Computational Fluid Dynamics Design and Analysis of Aerodynamic Drag Reduction Devices for a Mazda T3500 Truck
Authors: Basil Nkosilathi Dube, Wilson R. Nyemba, Panashe Mandevu
Abstract:
In highway driving, over 50 percent of the power produced by the engine is used to overcome aerodynamic drag, a force that opposes a body's motion through the air. Aerodynamic drag, and thus fuel consumption, increases rapidly at speeds above 90 km/h. It is desirable to minimize fuel consumption, and aerodynamic drag reduction in highway driving is the best approach to minimizing fuel consumption and reducing the negative impacts of greenhouse gas emissions on the natural environment. Fuel economy is the ultimate concern of automotive development. This study aims to design and analyze drag-reducing devices for a Mazda T3500 truck, namely cab roof and rear (trailer tail) fairings, and to investigate the aerodynamic effects of adding these add-on devices. To accomplish this, two 3D CAD models of the Mazda truck were designed using the Design Modeler: one with the add-on devices and one without. The models were exported to ANSYS Fluent for computational fluid dynamics analysis; no wind tunnel tests were performed. A fine mesh with more than 10 million cells was applied in the discretization of the models. The realizable k-ε turbulence model with enhanced wall treatment was used to solve the Reynolds-Averaged Navier-Stokes (RANS) equations. In order to simulate highway driving conditions, the tests were simulated at a speed of 100 km/h. The effects of the devices were also investigated for low-speed driving. The drag coefficients for both models were obtained from the numerical calculations. By adding the cab roof and rear (trailer tail) fairings, the simulations show a significant reduction in aerodynamic drag at higher speed. The results show that the greatest drag reduction is obtained when both devices are used. Visuals from post-processing show that the rear fairing minimized the low-pressure region at the rear of the trailer when moving at highway speed.
The rear fairing achieved this by streamlining the turbulent airflow, thereby delaying airflow separation. For lower speeds, there were no significant differences in the drag coefficients of the two models (original and modified). The results show that these devices can be adopted to improve the aerodynamic efficiency of the Mazda T3500 truck at highway speeds.
Keywords: aerodynamic drag, computational fluid dynamics, fluent, fuel consumption
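The speed dependence described above follows directly from the drag equation F = ½ρv²C_dA, with drag power growing as the cube of speed. The drag coefficient and frontal area below are illustrative guesses for a medium truck, not values from the study, which reports only the relative reduction:

```python
# Drag force F = 0.5 * rho * v^2 * Cd * A and the power needed to overcome it.
# Cd and frontal area are assumed illustrative values for a medium truck.
RHO = 1.225         # air density at sea level, kg/m^3
CD = 0.70           # assumed drag coefficient, unmodified truck
FRONTAL_AREA = 6.0  # assumed frontal area, m^2

def drag_power_kw(speed_kmh, cd=CD):
    v = speed_kmh / 3.6                        # convert km/h to m/s
    force = 0.5 * RHO * v ** 2 * cd * FRONTAL_AREA
    return force * v / 1000.0                  # drag power in kW

# Drag power grows with the cube of speed, which is why fairings pay off
# mainly at highway speed and barely register in low-speed driving.
for speed in (60, 90, 100):
    print(f"{speed} km/h: {drag_power_kw(speed):.1f} kW")
```

Lowering Cd via fairings scales these power figures down proportionally, so the absolute saving is largest exactly where the study observed it, at 100 km/h.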
Procedia PDF Downloads 138
120 Green Wave Control Strategy for Optimal Energy Consumption by Model Predictive Control in Electric Vehicles
Authors: Furkan Ozkan, M. Selcuk Arslan, Hatice Mercan
Abstract:
Electric vehicles (EVs) are becoming increasingly popular as a sustainable alternative to traditional combustion engine vehicles. However, to fully realize the potential of EVs in reducing environmental impact and energy consumption, efficient control strategies are essential. This study explores the application of green wave control using model predictive control (MPC) for electric vehicles, coupled with energy consumption modeling using neural networks. The use of MPC allows for real-time optimization of the vehicle's energy consumption while considering dynamic traffic conditions. By leveraging neural networks for energy consumption modeling, the EV's performance can be further enhanced through accurate predictions and adaptive control. The integration of these advanced control and modeling techniques aims to maximize energy efficiency and range while navigating urban traffic scenarios. The findings of this research offer valuable insights into the potential of green wave control for electric vehicles and demonstrate the significance of integrating MPC and neural network modeling for optimizing energy consumption. This work contributes to the advancement of sustainable transportation systems and the widespread adoption of electric vehicles. To evaluate the effectiveness of the green wave control strategy in real-world urban environments, extensive simulations were conducted using a high-fidelity vehicle model and realistic traffic scenarios. The results indicate that the integration of model predictive control and energy consumption modeling with neural networks had a significant impact on the energy efficiency and range of electric vehicles. Through the use of MPC, the electric vehicle was able to adapt its speed and acceleration profile in real time to optimize energy consumption while maintaining travel time objectives.
The neural network-based energy consumption modeling provided accurate predictions, enabling the vehicle to anticipate and respond to variations in traffic flow, further enhancing energy efficiency and range. Furthermore, the study revealed that the green wave control strategy not only reduced energy consumption but also improved the overall driving experience by minimizing abrupt acceleration and deceleration, leading to a smoother and more comfortable ride for passengers. These results demonstrate the potential for green wave control to revolutionize urban transportation by enhancing the performance of electric vehicles and contributing to a more sustainable and efficient mobility ecosystem.
Keywords: electric vehicles, energy efficiency, green wave control, model predictive control, neural networks
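The control loop described above can be sketched in miniature. The following is a hedged illustration, not the paper's controller: a brute-force receding-horizon planner in which a simple quadratic `energy_model` stands in for the neural-network consumption model, and the speed limit, horizon, and cost weights are invented placeholders.

```python
import itertools

DT = 1.0                     # control interval [s]
V_MAX = 15.0                 # illustrative urban speed limit [m/s]
ACTIONS = [-1.0, 0.0, 1.0]   # candidate accelerations [m/s^2]

def energy_model(v, a):
    """Per-step energy proxy: a drag-like term plus acceleration effort.
    A trained neural network would replace this in the paper's setup."""
    return 0.05 * v * v * DT + 2.0 * max(a, 0.0) * v * DT

def mpc_step(v0, horizon=4, time_weight=3.0):
    """Enumerate all action sequences over the horizon and return the
    first action of the cheapest feasible sequence (receding horizon)."""
    best_cost, best_a0 = float("inf"), 0.0
    for seq in itertools.product(ACTIONS, repeat=horizon):
        v, cost, feasible = v0, 0.0, True
        for a in seq:
            v = v + a * DT
            if v < 0.0 or v > V_MAX:
                feasible = False
                break
            # energy cost plus a travel-time penalty for driving slowly
            cost += energy_model(v, a) + time_weight * (V_MAX - v) * DT
        if feasible and cost < best_cost:
            best_cost, best_a0 = cost, seq[0]
    return best_a0

# Simulate a short run from standstill, re-planning at every step.
v = 0.0
speeds = []
for _ in range(10):
    v += mpc_step(v) * DT
    speeds.append(v)
```

At each step only the first action of the best sequence is applied and the horizon rolls forward, which is the defining trait of MPC; with these invented weights the planner accelerates from standstill and settles at the speed where the marginal energy cost of going faster outweighs the travel-time saving.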
Procedia PDF Downloads 54
119 Improving the Technology of Assembly by Use of Computer Calculations
Authors: Mariya V. Yanyukina, Michael A. Bolotov
Abstract:
Assembling accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembling accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of assembly stages are carried out to verify the correspondence of real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task; solving such problems requires a special approach. The purpose of this article is to address the problem of improving the assembly technology of aviation units by means of computer calculations. One actual example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades and by the dimensions of the blade roots. The second dimensional chain is formed by the dimensions of the airfoil shroud platform. The interaction of the dimensional chains of the turbine wheel is the interdependence of the first and second chains by means of force circuits formed by the set of middle parts of the turbine blades. The calculation of the dimensional chain of the turbine wheel is timely because of the need to improve the assembly technology of this unit.
The task at hand contains geometric and mathematical components; therefore, its solution can follow this algorithm: 1) research and analysis of production errors by geometric parameters; 2) development of a parametric model in a CAD system; 3) creation of a set of CAD models of parts taking into account actual or generalized distributions of errors in geometric parameters; 4) building a calculation model in a CAE system and loading various combinations of part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes an approach to the solution from the point of view of mathematical statistics, implemented in Matlab. Within the framework of the study, measurement data were collected for the components of the turbine wheel (blades and disks), from which it is expected that the assembly process of the unit can be optimized by solving the dimensional chains.
Keywords: accuracy, assembly, interacting dimension chains, turbine
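Step 5 of this algorithm, accumulating and analysing statistics over many simulated assemblies, can be illustrated with a minimal Monte Carlo tolerance stack-up. This is only a sketch under invented numbers: a single linear closing link between one groove and one blade root, rather than the paper's interacting chains, and the study's own implementation is in Matlab.

```python
import random
import statistics

random.seed(42)

def sample_dim(nominal, tol):
    """Draw a dimension assuming the +/-tol band covers 3 sigma of the
    (here assumed normal) production error distribution."""
    return random.gauss(nominal, tol / 3.0)

def assembly_gap():
    """Closing link of a toy chain: groove width minus blade-root width.
    Nominal sizes and tolerances (mm) are illustrative placeholders."""
    groove_width = sample_dim(20.000, 0.030)
    root_width = sample_dim(19.950, 0.030)
    return groove_width - root_width

# Accumulate statistics over many virtual assemblies (algorithm step 5).
gaps = [assembly_gap() for _ in range(100_000)]
mean_gap = statistics.mean(gaps)
sd_gap = statistics.stdev(gaps)
reject_rate = sum(g < 0.0 for g in gaps) / len(gaps)  # interference fits
```

The simulated scatter of the closing link (mean near the 0.05 mm nominal gap, standard deviation near the root-sum-square of the contributors) is exactly the kind of output used to judge whether an assembly route will meet its accuracy targets before any hardware is touched.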
Procedia PDF Downloads 373
118 Assessment on the Conduct of Arnis Competition in Pasuc National Olympics 2015: Basis for Improvement of Rules in Competition
Authors: Paulo O. Motita
Abstract:
The Philippine Association of State Universities and Colleges (PASUC) is an association of State-owned and operated higher learning institutions in the Philippines; it spearheads the conduct of the annual national athletic competitions for State Universities and Colleges, and Arnis is one of the regular sports. In 2009, Republic Act 9850 declared Arnis the National Martial Art and Sport of the Philippines. Arnis, an ancient Filipino martial art, is a major sport in the annual Palarong Pambansa and other school-based sports events. The researcher, a Filipino Martial Arts master and a former athlete, sought to determine the extent of acceptability of the Arnis rules in competition, which serves as the basis for the development of those rules. The study aimed to assess the conduct of the Arnis competition in PASUC Olympics 2015 in Tuguegarao City, Cagayan, Philippines: the rules and the conduct itself as perceived by officiating officials, coaches, and athletes during the competition held February 7-15, 2015. The descriptive method of research was used, and the survey questionnaire used as the data-gathering instrument was validated. The respondents were composed of 12 officiating officials, 19 coaches, and 138 athletes representing the different regions. Their responses were treated using the mean, percentage, and one-way analysis of variance. The study revealed that the conduct of the Arnis competition in PASUC Olympics 2015 was at a low to moderate extent as perceived by the three groups of respondents in terms of officiating, scoring, and giving violations. Furthermore, there is no significant difference among the three groups of respondents in the assessment of Anyo and Labanan. Considering the findings of the study, the following conclusions were drawn: 1) There is a need to identify the criteria for judging in Anyo and for a tedious scrutiny of the rules of the game for Labanan.
2) The three groups of respondents have similar views in their assessment of the overall competitions for Anyo: there were no clear technical guidelines for judging the performance of the Anyo event. 3) The three groups of respondents have similar views in their assessment of the overall competitions for Labanan: there were no clear technical guidelines for the majority rule of giving scores in Labanan. 4) The Anyo performance should be rated according to the effectiveness of techniques and the handling of the weapon/s being used. 5) On other issues and concerns, the rules of competition for Labanan should be addressed, focusing on the application of the majority rule for scoring; players should be given rest intervals; and clear guidelines and standard qualifications for officiating officials should be set.
Keywords: PASUC Olympics 2015, Arnis rules of competition, Anyo, Labanan, officiating
Procedia PDF Downloads 458
117 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, it is evident that there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used because they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools, before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used as well as deeper discussions of each of the topics, allowing teams to adapt the process to their skills, preferences, and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and mistakes with the process than teams who did not use it. Those with a design background found it especially useful for the engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as more examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools to those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 108
116 Hypersonic Propulsion Requirements for Sustained Hypersonic Flight for Air Transportation
Authors: James Rate, Apostolos Pesiridis
Abstract:
In this paper, the propulsion requirements needed to achieve sustained hypersonic flight for commercial air transportation are evaluated. In addition, a design methodology is developed and used to determine the propulsive capabilities of both ramjet and scramjet engines. Twelve configurations are proposed for hypersonic flight using varying combinations of turbojet, turbofan, ramjet, and scramjet engines. The optimal configuration was determined based on how well each of the configurations met the projected requirements for hypersonic commercial transport. The configurations were separated into four sub-configurations, each comprising three unique derivations. The first sub-configuration comprised four afterburning turbojets and either one or two ramjets idealised for Mach 5 cruise. The number of ramjets required was dependent on the thrust required to accelerate the vehicle from the speed where the turbojets cut out to Mach 5 cruise. The second comprised four afterburning turbojets and either one or two scramjets, similar to the first configuration. The third used four turbojets, one scramjet, and one ramjet to aid acceleration from Mach 3 to Mach 5. The fourth configuration was the same as the third, but instead of turbojets, it implemented turbofan engines for the preliminary acceleration of the vehicle. From calculations which determined the fuel consumption at incremental Mach numbers, this paper found that the ideal solution would require four turbojet engines and two scramjet engines. The ideal mission profile was determined to be an 8000 km sortie, based on an averaging of popular long-haul flights with strong business ties, including Los Angeles to Tokyo, London to New York, and Dubai to Beijing. This paper deemed that these routes would benefit from hypersonic transport links based on the previously mentioned factors.
This paper has found that this configuration would be sufficient for the 8000 km flight to be completed in approximately two and a half hours and would consume less fuel than Concorde in doing so. However, this propulsion configuration still results in a greater fuel cost than a conventional passenger aircraft. In this regard, this investigation contributes towards the specification of the engine requirements throughout a mission profile for a hypersonic passenger vehicle. A number of assumptions had to be made for this theoretical approach, but the authors believe that this investigation lays the groundwork for appropriately framing the propulsion requirements for sustained hypersonic flight for commercial air transportation, and serves as a step towards developing the propulsion systems such transportation would require.
Keywords: hypersonic, ramjet, propulsion, scramjet, turbojet, turbofan
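The kind of first-order fuel bookkeeping such a methodology rests on can be illustrated with the Breguet range equation applied to the cruise leg. All figures below (lift-to-drag ratio, scramjet specific impulse, the high-altitude speed of sound) are illustrative placeholders, not values from the paper.

```python
import math

def cruise_fuel_fraction(mach, range_km, lift_to_drag, isp_s, a_sound=295.0):
    """Breguet cruise leg: fraction of start mass burned as fuel.

    Uses R = V * Isp * (L/D) * ln(m_start / m_end), with specific impulse
    in seconds playing the role of inverse thrust-specific fuel consumption.
    """
    v = mach * a_sound                                        # cruise speed [m/s]
    ln_mass_ratio = range_km * 1000.0 / (v * isp_s * lift_to_drag)
    return 1.0 - math.exp(-ln_mass_ratio)

# Illustrative Mach 5 cruise over the paper's 8000 km mission profile,
# with placeholder scramjet figures (Isp ~ 1000 s, L/D ~ 5).
frac = cruise_fuel_fraction(mach=5.0, range_km=8000.0,
                            lift_to_drag=5.0, isp_s=1000.0)
cruise_time_h = 8000.0 * 1000.0 / (5.0 * 295.0) / 3600.0      # cruise-only time
```

The cruise-only time of roughly an hour and a half leaves room for climb and acceleration within the paper's approximately 2.5-hour total, while the fuel fraction is only as trustworthy as the placeholder Isp and L/D fed into it.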
Procedia PDF Downloads 320
115 A Non-Invasive Method for Assessing the Adrenocortical Function in the Roan Antelope (Hippotragus equinus)
Authors: V. W. Kamgang, A. Van Der Goot, N. C. Bennett, A. Ganswindt
Abstract:
The roan antelope (Hippotragus equinus) is the second largest antelope species in Africa. In recent decades, populations of roan antelope have declined drastically throughout Africa. This situation has resulted in the development of intensive breeding programmes for the species in Southern Africa, where roan are popular game-ranching herbivores with increasing numbers in captivity. Avoidance of stress is important when managing wildlife to ensure animal welfare. In this regard, a non-invasive approach to monitoring adrenocortical function as a measure of stress is preferable, since animals are not disturbed during sample collection. To date, however, such a method has not been established for the roan antelope. In this study, we validated a non-invasive technique to monitor adrenocortical function in this species. We performed an adrenocorticotropic hormone (ACTH) stimulation test at the Lapalala Wilderness reserve, South Africa, using adult captive roan antelope to determine the stress-related physiological responses. Two individually housed roan antelope (a male and a female) received an intramuscular injection of Synacthen Depot (Novartis) loaded into a 3 ml syringe (Pneu-Dart) at an estimated dose of 1 IU/kg. A total of 86 faecal samples (male: 46, female: 40) were collected from 5 days before to 3 days post-injection. All samples were then lyophilised, pulverised, and extracted with 80% ethanol (0.1 g/3 ml), and the resulting faecal extracts were analysed for immunoreactive faecal glucocorticoid metabolite (fGCM) concentrations using five enzyme immunoassays (EIAs): (i) 11-oxoaetiocholanolone I (detecting 11,17-dioxoandrostanes), (ii) 11-oxoaetiocholanolone II (detecting fGCMs with a 5α-pregnane-3α-ol-11-one structure), (iii) 5α-pregnane-3β,11β,21-triol-20-one (measuring 3β,11β-diol CM), (iv) cortisol, and (v) corticosterone.
In both animals, all EIAs detected an increase in fGCM concentrations post-ACTH administration. However, the 11-oxoaetiocholanolone I EIA performed best, with a 20-fold increase in the male (baseline: 0.384 µg/g DW; peak: 8.585 µg/g DW) and a 17-fold increase in the female (baseline: 0.323 µg/g DW; peak: 7.276 µg/g DW), measured 17 hours and 12 hours post-administration, respectively. These results are important as the ability to assess adrenocortical function non-invasively in roan can now serve as an essential prerequisite for evaluating the effects of stressful circumstances, such as variation in environmental conditions or reproduction, in order to improve management strategies for the conservation of this iconic antelope species.
Keywords: adrenocorticotropic hormone challenge, adrenocortical function, captive breeding, non-invasive method, roan antelope
Procedia PDF Downloads 145
114 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure
Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano
Abstract:
Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies, and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires a deep knowledge of vulnerabilities, threats, and potential attacks that may occur, as well as defence, prevention, and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk, especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly connected to enhance their functionality and ease their management. Moreover, hospitals are environments with high flows of people, who are difficult to monitor and can fairly easily gain access to the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats. This means that an integrated approach is required. SAFECARE, an integrated cyber-physical security solution, tries to respond to the presented issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres, to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine.
Indeed, the SAFECARE architecture foresees: i) a macroblock related to the cyber security field, where innovative tools are deployed to monitor network traffic, systems, and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems, and innovative AI algorithms to detect behaviour anomalies; iii) an integration system that collects all the incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security
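The cascading-effect evaluation can be illustrated with a toy propagation over an asset-dependency graph. This is a hedged sketch only: SAFECARE's impact propagation models rely on modular ontologies and a rule-based engine, whereas the assets, dependency edges, and per-hop severity decay below are all invented.

```python
from collections import deque

# asset -> assets that depend on it (an incident cascades downstream);
# this hospital-flavoured graph is purely illustrative
DEPENDENTS = {
    "power_supply": ["building_mgmt", "medical_device"],
    "hospital_network": ["medical_device", "video_mgmt"],
    "building_mgmt": ["access_control"],
    "medical_device": [],
    "video_mgmt": [],
    "access_control": [],
}

def propagate(incident_asset, severity=1.0, decay=0.5, threshold=0.2):
    """Breadth-first impact propagation; severity weakens by `decay` per
    hop and cascades below `threshold` are pruned."""
    impact = {incident_asset: severity}
    queue = deque([incident_asset])
    while queue:
        asset = queue.popleft()
        child_sev = impact[asset] * decay
        if child_sev < threshold:
            continue
        for child in DEPENDENTS[asset]:
            if child_sev > impact.get(child, 0.0):
                impact[child] = child_sev
                queue.append(child)
    return impact

# A power-supply incident cascades into dependent hospital assets.
alerts = propagate("power_supply")
```

The returned severity map is the kind of artifact an integration layer can turn into prioritized alerts: directly hit assets at full severity, downstream assets progressively discounted, and unrelated assets absent.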
Procedia PDF Downloads 165
113 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method
Authors: Emmanuel Ophel Gilbert, Williams Speret
Abstract:
The control of the flow behaviour of viscous fluids and of heat transfer within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations were solved by means of a numerical simulation technique, which was validated against the available experimental values by comparison with the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a pro rata effect on the minor and major frictional losses, mostly at very small Reynolds numbers, circa 60-80. At these low Reynolds numbers, the viscosity and the minor frictional losses decrease as the temperature of the viscous liquids increases. Three equations and models were identified which, supported by the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yield high reliability for engineering and technology calculations on turbulence-impacting jets in the near future. In the search for a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be pertinent, though other physical factors related to these equations must still be checked to avoid unexpected turbulence of the fluid flow.
This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; this, however, required dropping certain assumptions in the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids
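For orientation, the laminar mini-channel regime the study operates in can be checked with textbook relations. The sketch below uses illustrative round figures for hot engine oil in a 1 mm channel, chosen so the Reynolds number lands in the circa 60-80 range mentioned above, together with the fully developed laminar Darcy friction factor f = 64/Re; none of these property values come from the study.

```python
def reynolds(rho, v, d_h, mu):
    """Reynolds number based on hydraulic diameter d_h [m]."""
    return rho * v * d_h / mu

def darcy_friction_laminar(re):
    """Darcy friction factor for fully developed laminar duct flow."""
    return 64.0 / re

# Hot engine oil in a 1 mm mini-channel at 0.8 m/s (placeholder values:
# density 880 kg/m^3, dynamic viscosity 0.01 Pa*s).
re = reynolds(rho=880.0, v=0.8, d_h=1e-3, mu=0.01)
f = darcy_friction_laminar(re)
```

A Reynolds number of about 70 confirms deeply laminar flow, far from transition; it also shows why heating matters here, since a lower viscosity at higher temperature raises Re and drops the friction factor proportionally, consistent with the trend described above.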
Procedia PDF Downloads 176
112 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education
Authors: Ray Bryant
Abstract:
3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study that calls out the usage and benefits of artificial intelligence in the war against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating a student is being victimized. The program further illustrates how to recognize and respond to trauma and teaches the steps to take to report human trafficking, as well as how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code artificial intelligence (AI) automation, to create the research study utilizing natural language processing, a branch of artificial intelligence, to measure the effectiveness of their prevention education program. By applying the logic created for the study, the platform analyzed and categorized each story. If a story, directly from the educator, demonstrated one or more of the desired outcomes (Increased Awareness, Increased Knowledge, or Intended Behavior Change), a label was applied. The system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results show that, of the 30,000 stories gathered, it became overwhelmingly clear that a significant majority of the participants now have increased awareness of the issue, demonstrated better knowledge of how to help prevent the crime, and expressed an intention to change how they approach what they do daily.
In addition, it was observed that approximately 30% of the stories involved comments by educators expressing that they wish they had had this knowledge sooner, as they can think of many students they would have been able to help.
Objectives of Research: To solve the problem of needing to analyze and accurately categorize more than 30,000 data points of participant feedback in order to evaluate the success of a human trafficking prevention program by using AI and natural language processing.
Methodologies Used: In conjunction with our strategic partner, Levity, we created our own NLP analysis engine specific to our problem.
Contributions to Research: The intersection of AI and human rights, and how to utilize technology to combat human trafficking.
Keywords: AI, technology, human trafficking, prevention
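The labelling-with-confidence logic described above can be reduced to a toy illustration. The real study used Levity's natural-language-processing platform; the sketch below only mimics the shape of its output with an invented keyword scorer, where the three labels mirror the desired outcomes but every keyword list and threshold is a placeholder.

```python
# Toy stand-in for an NLP labeller: score each outcome label by the
# fraction of its (invented) indicator phrases found in the story.
OUTCOME_KEYWORDS = {
    "Increased Awareness": {"aware", "awareness", "realize", "eyes opened"},
    "Increased Knowledge": {"learned", "know", "knowledge", "signs"},
    "Intended Behavior Change": {"will", "going to", "from now on", "change"},
}

def label_story(story, min_confidence=0.5):
    """Return (label, confidence) pairs whose confidence clears the bar."""
    text = story.lower()
    results = []
    for label, keywords in OUTCOME_KEYWORDS.items():
        hits = sum(kw in text for kw in keywords)
        confidence = hits / len(keywords)
        if confidence >= min_confidence:
            results.append((label, confidence))
    return results

labels = label_story(
    "I learned the warning signs and now know what to watch for; "
    "I will change how I check in with students."
)
```

A story can legitimately carry several labels at once, each with its own confidence, which is exactly the per-label-plus-confidence structure the study's platform attached before aggregating across 30,000 stories; a production system would replace the keyword scorer with a trained language model.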
Procedia PDF Downloads 59
111 The Roman Fora in North Africa: Towards a Supportive Protocol to the Decision for the Morphological Restitution
Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda
Abstract:
This research delves into the fundamental question of the morphological restitution of built archaeology in order to place it in its paradigmatic context and to seek answers to it. Indeed, the understanding of the object of study, its analysis, and the methodology for solving the morphological problem posed can be managed only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in this field, ranging from the use of new tools to the integration of an archaeological information system whose elaboration involves the interplay of several disciplines. The built archaeological topic is also an architectural and morphological object, a set of articulated elementary data, the understanding of which is approached here from a logicist point of view. Morphological restitution is no exception to the rule, and the exchange between the different disciplines uses the capacity of each to frame the reflection on the incomplete elements of a given architecture or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the game of searching for forms and articulations. The choice of the Roman Forum in North Africa is justified by the great urban and architectural characteristics of this entity. Research on the forum draws on a fairly large knowledge base and also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail.
The experimentation of the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context, which frames the experimentation from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known in order to measure the credibility of our system, which remains above all a decision-support tool for the morphological restitution of the Roman Fora in North Africa. This paper presents a first experimentation of the model elaborated during this research, a model framed by a paradigmatic discussion that tries to position the research in relation to the existing paradigmatic and experimental knowledge on the issue.
Keywords: classical reasoning, logicist reasoning, archaeology, architecture, Roman forum, morphology, calculation
Procedia PDF Downloads 147
110 Mechanisms Underlying the Effects of School-Based Internet Intervention for Alcohol Drinking Behaviours among Chinese Adolescent
Authors: Keith T. S. Tung, Frederick K. Ho, Rosa S. Wong, Camilla K. M. Lo, Wilfred H. S. Wong, C. B. Chow, Patrick Ip
Abstract:
Objectives: Underage drinking is an important public health problem, both locally and globally. Conventional prevention/intervention relies on unidirectional knowledge transfer, such as mailed leaflets or health talks, which has shown mixed results in changing the target behaviour. Previously, we conducted a school-based Internet intervention which was found to be effective in reducing alcohol use among adolescents, yet the underlying mechanisms had not been properly investigated. This study, therefore, examined the mechanisms that explain how the intervention produced a change in alcohol drinking behaviours among Chinese adolescents, as observed in our previous clustered randomised controlled trial (RCT) study. Methods: This is a cluster randomised controlled trial with a parallel group design. Participating schools were randomised to the Internet intervention or the conventional health education group (control) with a 1:1 allocation ratio. Secondary 1–3 students of the participating schools were enrolled in this study. The Internet intervention was a web-based quiz game competition, in which participating students answered 1,000 alcohol-related multiple-choice quiz questions. The conventional health education group received a promotional package with equivalent alcohol-related knowledge. The participants’ alcohol-related attitude, knowledge, and perceived behavioural control were self-reported before the intervention (baseline) and one month and three months after the intervention. Results: Our RCT results showed that participants in the Internet group were less likely to drink (risk ratio [RR] 0.79, p < 0.01), and drank in lesser amounts (β -0.06, p < 0.05), compared with those in the control group at both post-intervention follow-ups.
Within the intervention group, regression analyses showed that high quiz scorers had greater improvement in alcohol-related knowledge (β 0.28, p < 0.01) and attitude (β -0.26, p < 0.01) one month after the intervention, which in turn increased their perceived behavioural control against alcohol use (β 0.10 and -0.26, both p < 0.01). Attitude, compared with knowledge, was found to be the stronger contributor to the intervention effect on perceived behavioural control. Conclusions: Our Internet-based intervention demonstrated effectiveness in reducing the risk of underage drinking when compared with conventional health education. Our results further showed attitude to be a more important factor than knowledge in changing health-related behaviour. This has an important implication for future prevention/intervention work on underage drinking.
Keywords: adolescents, internet-based intervention, randomized controlled trial, underage drinking
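The regression chain reported above (quiz performance improves attitude, which in turn raises perceived behavioural control) can be sketched as a mediation-style analysis on synthetic data, since the trial data are not public. The coefficients, noise levels, and sample size below are invented, the signs are simplified to positive, and plain least-squares slopes stand in for the study's regression models.

```python
import random

random.seed(7)
N = 500

# Simulate the assumed causal chain with known path coefficients.
quiz_score = [random.gauss(0, 1) for _ in range(N)]
attitude = [0.28 * q + random.gauss(0, 0.5) for q in quiz_score]   # path a
control = [0.40 * a + random.gauss(0, 0.5) for a in attitude]      # path b

def slope(x, y):
    """OLS slope of y on x through the origin (both series are
    zero-mean draws, so no intercept is needed)."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / sxx

a_hat = slope(quiz_score, attitude)     # quiz -> attitude
b_hat = slope(attitude, control)        # attitude -> perceived control
indirect = a_hat * b_hat                # mediated (indirect) effect
```

Recovering the product of the two path slopes is the textbook estimate of the mediated effect; a full analysis would add intercepts, the direct path, and standard errors, but the structure mirrors the two-step regressions the abstract describes.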
Procedia PDF Downloads 164
109 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners
Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani
Abstract:
Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed correctly through conversation, because an erroneous change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication that leads the interlocutor to attempt to remedy the gap through talk. Therefore, this study investigated whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test (PET) was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. The speaking sections of the PET were used as pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was measured as the percentage of clauses with no grammatical errors out of the total clauses produced. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced.
Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were manipulated in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies when doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of the statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had the more significant impact on encouraging negotiation of meaning and on increasing the frequency of conversational strategies in each session. The findings also indicated that both task types could help learners significantly improve their speaking accuracy; here, too, the reasoning gap tasks were more effective than the information gap tasks in improving the level of learners’ speaking accuracy.
Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks
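The accuracy measure used in the study (the percentage of clauses with no grammatical errors out of the total clauses produced) can be sketched as follows; the helper name and the sample counts are illustrative, not taken from the transcripts.

```python
def speaking_accuracy(error_free_clauses, total_clauses):
    """Percentage of clauses with no grammatical errors (hypothetical helper)."""
    if total_clauses == 0:
        raise ValueError("no clauses produced")
    return 100.0 * error_free_clauses / total_clauses

# e.g. a transcript with 18 error-free clauses out of 24 produced clauses
print(round(speaking_accuracy(18, 24), 1))  # 75.0
```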
Procedia PDF Downloads 309

108 Supply Chain Analysis with Product Returns: Pricing and Quality Decisions
Authors: Mingming Leng
Abstract:
Wal-Mart has allocated considerable human resources to its quality assurance program, in which the largest retailer serves its supply chains as a quality gatekeeper. Asda Stores Ltd., the second largest supermarket chain in Britain, is now investing £27m in significantly increasing the frequency of quality control checks in its supply chains and thus enhancing quality across its fresh food business. Moreover, Tesco, the largest British supermarket chain, has already constructed a quality assessment center to carry out its gatekeeping responsibility. Motivated by the above practices, we consider a supply chain in which a retailer plays the gatekeeping role in quality assurance by identifying defects among a manufacturer's products prior to selling them to consumers. The impact of a retailer's gatekeeping activity on pricing and quality assurance in a supply chain has not been investigated in the operations management area. We draw a number of managerial insights that are expected to help practitioners judiciously consider the quality gatekeeping effort at the retail level. As in practice, when the retailer identifies a defective product, she immediately returns it to the manufacturer, who then replaces the defect with a good quality product and pays a penalty to the retailer. If the retailer does not recognize a defect but sells it to a consumer, then the consumer will identify the defect and return it to the retailer, who then passes the returned 'unidentified' defect to the manufacturer. The manufacturer also incurs a penalty cost. Accordingly, we analyze a two-stage pricing and quality decision problem, in which the manufacturer and the retailer bargain over the manufacturer's average defective rate and wholesale price at the first stage, and the retailer decides on her optimal retail price and gatekeeping intensity at the second stage. We also compare the results when the retailer performs quality gatekeeping with those when the retailer does not.
Our supply chain analysis exposes some important managerial insights. For example, the retailer's quality gatekeeping can effectively reduce the channel-wide defective rate if her penalty charge for each identified defect is larger than or equal to the market penalty for each unidentified defect. When the retailer implements quality gatekeeping, the change in the negotiated wholesale price depends only on the manufacturer's 'individual' benefit, and the change in the retailer's optimal retail price is related only to the channel-wide benefit. The retailer is willing to take on the quality gatekeeping responsibility when the impact of quality relative to retail price on demand is high and/or the retailer has strong bargaining power. We conclude that the retailer's quality gatekeeping can help reduce the defective rate for consumers, and that this effect becomes more significant when the retailer's bargaining position in her supply chain is stronger. Retailers with stronger bargaining power can benefit more from their quality gatekeeping in supply chains.
Keywords: bargaining, game theory, pricing, quality, supply chain
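The retailer's second-stage problem (choosing a retail price and a gatekeeping intensity, given a negotiated wholesale price and defective rate) can be sketched with a grid search. All functional forms and parameter values below are illustrative assumptions, not the paper's model.

```python
# Hypothetical second-stage problem: given a negotiated wholesale price w and
# average defective rate d, the retailer picks retail price p and gatekeeping
# intensity g (the fraction of defects she catches before sale).
def retailer_profit(p, g, w=4.0, d=0.10, a=100.0, b=8.0,
                    penalty_identified=2.0, penalty_market=5.0, c_gate=20.0):
    demand = max(a - b * p, 0.0)                 # assumed linear demand in price
    caught = d * g                               # defects returned pre-sale
    missed = d * (1 - g)                         # defects reaching consumers
    revenue = (p - w) * demand
    gate_income = penalty_identified * caught * demand   # penalties paid by manufacturer
    market_loss = penalty_market * missed * demand       # goodwill/handling loss
    return revenue + gate_income - market_loss - c_gate * g ** 2

# grid search for the retailer's best response over p in [4.1, 11.9], g in [0, 1]
best = max((retailer_profit(p / 10, g / 10), p / 10, g / 10)
           for p in range(41, 120) for g in range(0, 11))
profit, p_star, g_star = best
```

With these placeholder parameters the optimal gatekeeping intensity is interior (between 0 and 1), mirroring the trade-off the paper describes between the identified-defect penalty income and the gatekeeping cost.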
Procedia PDF Downloads 277

107 User Experience in Relation to Eye Tracking Behaviour in VR Gallery
Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski
Abstract:
Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. Users' interaction with the GUI and the pictures displayed involves perceptual and also cognitive processes, which can be monitored thanks to neuroadaptive technologies. These modalities provide valuable information about the users' intentions, situational interpretations, and emotional states, allowing an application or interface to be adapted accordingly. Virtual galleries outfitted with specialized assets have been designed using the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV program. Users' interaction with gallery objects raises questions about their visual interest in artworks and styles. Moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behavior data and eye positions were recorded by the built-in eye-tracking module of an HTC Vive VR headset. The eye gaze results are grouped according to various users' behavior schemes, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were administered to identify the basic features of a user-centered interface for virtual environments across most of the timeline of the project. A total of sixty participants were selected from distinct university faculties and secondary schools. Participants' prior knowledge about art was evaluated during a pretest, and in this way their level of art sensitivity was described. Data were collected during two months. Each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms were used to reduce the high-dimensional data to a relatively low-dimensional subspace, namely multidimensional scaling and the novel t-Stochastic Neighbor Embedding (t-SNE) technique. In this way digital art objects can be classified by the multimodal time characteristics of the eye-tracking measures, revealing signatures that describe the selected artworks.
The current research establishes the optimal place on the aesthetic-utility scale, since contemporary interfaces of most applications need to be designed in both functional and aesthetic ways. The study also concerns an analysis of visual experience for subsamples of visitors differentiated, e.g., in terms of frequency of museum visits and cultural interests. Eye tracking data may also show how to better allocate artefacts and paintings or increase their visibility where possible.
Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication
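Before dimensionality reduction, raw gaze samples must be aggregated into the "time characteristics of eye tracking measures" mentioned above. A minimal sketch of that feature-extraction step is below; the data format and field names are assumptions, not the BITSCOPE schema.

```python
from collections import defaultdict

def gaze_features(samples, sample_period_ms=8):
    """Aggregate (timestamp_ms, artwork_id or None) gaze hits into per-artwork
    dwell time and sample counts -- inputs one could feed to MDS or t-SNE."""
    dwell = defaultdict(float)
    hits = defaultdict(int)
    for _, artwork in samples:
        if artwork is not None:          # None = gaze not on any artwork
            dwell[artwork] += sample_period_ms
            hits[artwork] += 1
    return {art: {"dwell_ms": dwell[art], "samples": hits[art]} for art in dwell}

# toy gaze stream: three samples on artwork "A", one on "B", one off-target
stream = [(0, "A"), (8, "A"), (16, None), (24, "B"), (32, "A")]
feats = gaze_features(stream)
# feats["A"]["dwell_ms"] == 24.0, feats["B"]["samples"] == 1
```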
Procedia PDF Downloads 42

106 Gear Fault Diagnosis Based on Optimal Morlet Wavelet Filter and Autocorrelation Enhancement
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
Condition monitoring is used to increase machinery availability and machinery performance, whilst reducing consequential damage, increasing machine life, reducing spare parts inventories, and reducing breakdown maintenance. An efficient condition monitoring system provides early warning of faults by predicting them at an early stage. When a localized fault occurs in gears, the vibration signals always exhibit non-stationary behavior. The periodic impulsive feature of the vibration signal appears in the time domain and the corresponding gear mesh frequency (GMF) emerges in the frequency domain. However, one limitation of frequency-domain analysis is its inability to handle non-stationary waveform signals, which are very common when machinery faults occur. Particularly at the early stage of gear failure, the GMF contains very little energy and is often overwhelmed by noise and higher-level macro-structural vibrations. An effective signal processing method is therefore necessary to remove such corrupting noise and interference. In this paper, a new hybrid method based on an optimal Morlet wavelet filter and autocorrelation enhancement is presented. First, to eliminate the frequencies associated with interferential vibrations, the vibration signal is filtered with a band-pass filter determined by a Morlet wavelet whose parameters are selected or optimized based on maximum kurtosis. Then, to further reduce the residual in-band noise and highlight the periodic impulsive feature, an autocorrelation enhancement algorithm is applied to the filtered signal. The test stand is equipped with three dynamometers: the input dynamometer serves as the internal combustion engine, while the output dynamometers induce a load on the output joint shaft flanges. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft.
The gearbox used for the experimental measurements is of the type most commonly used in modern small to mid-sized passenger cars with transversely mounted powertrain and front-wheel drive: a five-speed gearbox with final drive gear and front-wheel differential. The results obtained from practical experiments prove that the proposed method is very effective for gear fault diagnosis.
Keywords: wavelet analysis, pitted gear, autocorrelation, gear fault diagnosis
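The two processing steps described above (Morlet wavelet band-pass filtering assessed by kurtosis, then autocorrelation to enhance the periodic impulses) can be sketched on a synthetic signal. The wavelet parameters, fault period, and signal model below are illustrative assumptions, not the paper's optimized values.

```python
import math
import random

random.seed(0)

# synthetic vibration: Gaussian noise plus decaying 120 Hz impulses every 0.25 s
fs = 1000                      # sampling rate, Hz
n = 2000
signal = [0.3 * random.gauss(0, 1) for _ in range(n)]
for k in range(0, n, 250):     # assumed fault repetition period: 250 samples
    for j in range(k, min(k + 10, n)):
        signal[j] += math.exp(-(j - k) / 3.0) * math.sin(2 * math.pi * 120 * j / fs)

def morlet(fc, sigma, fs, half=64):
    """Real Morlet wavelet: Gaussian envelope times a cosine carrier."""
    return [math.exp(-0.5 * (t / fs / sigma) ** 2) * math.cos(2 * math.pi * fc * t / fs)
            for t in range(-half, half + 1)]

def convolve_same(x, h):
    half = len(h) // 2
    return [sum(x[i - half + j] * h[j]
                for j in range(len(h)) if 0 <= i - half + j < len(x))
            for i in range(len(x))]

def kurtosis(x):
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return sum((v - m) ** 4 for v in x) / len(x) / var ** 2

# band-pass filter centred on the impulse band; in the paper fc and sigma
# would be chosen to maximize the kurtosis of the filtered output
filtered = convolve_same(signal, morlet(120, 0.005, fs))

# autocorrelation enhancement: peaks near lag 250 reveal the impulse period
autocorr = [sum(filtered[i] * filtered[i + lag] for i in range(n - lag))
            for lag in range(600)]
```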
Procedia PDF Downloads 388

105 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs in line with their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure on purchasing vehicles, in which five product attributes of both EVs and internal combustion engine vehicles (ICEVs) are selected. A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, a respondent's total utilities for the two vehicles are calculated, and his/her choice is thus determined. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers’ choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions.
For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation
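Phase (3) can be sketched as follows: each respondent is reduced to hypothetical part-worth utilities, the 2030 EV price is drawn repeatedly from an assumed distribution, and each draw yields one market-share estimate. All numbers, the price distribution, and the utility form are invented for illustration, not the survey's estimates.

```python
import math
import random

random.seed(42)

N_RESPONDENTS = 200
N_DRAWS = 1000
ICEV_PRICE = 30_000  # assumed fixed ICEV price

# hypothetical part-worths: (base preference for EV, price sensitivity)
respondents = [(random.gauss(0.5, 1.0), random.uniform(0.5e-4, 1.5e-4))
               for _ in range(N_RESPONDENTS)]

def ev_share(ev_price):
    """Share of respondents whose total utility favors the EV at this price."""
    chose_ev = sum(base - sens * ev_price > -sens * ICEV_PRICE
                   for base, sens in respondents)
    return chose_ev / N_RESPONDENTS

# one market-share estimate per sampled future price (assumed lognormal for 2030)
shares = [ev_share(random.lognormvariate(math.log(28_000), 0.15))
          for _ in range(N_DRAWS)]
mean_share = sum(shares) / N_DRAWS
```

The resulting list `shares` approximates the probability distribution of the market share, which is the extra information the abstract contrasts with a fixed-number forecast.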
Procedia PDF Downloads 69

104 Immersive Environment as an Occupant-Centric Tool for Architecture Criticism and Architectural Education
Authors: Golnoush Rostami, Farzam Kharvari
Abstract:
In recent years, developments in the field of architectural education have resulted in a shift from conventional teaching methods to alternative state-of-the-art approaches in teaching methods and strategies. Criticism in architecture has been a key player both in the profession and in education, but it has mostly been offered by renowned individuals. Hence, not only students and other professionals but even critics themselves may not have the option to experience buildings first-hand, relying instead on available 2D materials, such as images and plans, which may not yield a holistic understanding and evaluation of buildings. On the other hand, immersive environments provide students and professionals the opportunity to experience buildings virtually and reflect their evaluation by experiencing rather than judging based on 2D materials. Therefore, the aim of this study is to compare the effect of experiencing buildings in immersive environments and through 2D drawings, including images and plans, on architecture criticism and architectural education. To this end, three buildings with parametric brick facades were studied through 2D materials and in Unreal Engine v. 24 as an immersive environment among 22 architecture students, who were selected using convenience sampling and divided into two equal groups using simple random sampling. This study used mixed methods: the quantitative section was carried out by a questionnaire, and in-depth interviews were used for the qualitative section. A questionnaire was developed for measuring three constructs: privacy regulation based on Altman’s theory, the sufficiency of illuminance levels in the building, and the visual status of the view (visually appealing views, accounting for obstructions that may be caused by facades). Furthermore, participants had the opportunity to reflect their understanding and evaluation of the buildings in individual interviews.
Accordingly, the collected data from the questionnaires were analyzed using independent t-tests and descriptive analyses in IBM SPSS Statistics v. 26, and the interviews were analyzed using the content analysis method. The results of the interviews showed that the participants who experienced the buildings in the immersive environment were able to make a more thorough and precise evaluation of the buildings than those who studied them through 2D materials. Moreover, the analyses of the respondents’ questionnaires revealed statistically significant differences in the measured constructs between the two groups. The outcome of this study suggests integrating immersive environments into the profession and architectural education as an effective and efficient tool for architecture criticism, since these environments allow users to make a holistic evaluation of buildings for rigorous and sound criticism.
Keywords: immersive environments, architecture criticism, architectural education, occupant-centric evaluation, pre-occupancy evaluation
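The independent-samples comparison reported above can be sketched with Welch's form of the t statistic; the two score lists below are invented placeholders, not the study's questionnaire data.

```python
import math

def welch_t(a, b):
    """Welch's independent-samples t statistic (unequal variances assumed)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# hypothetical construct scores for the two groups (1-5 scale)
immersive = [4.2, 4.5, 3.9, 4.8, 4.1, 4.6, 4.3, 4.7, 4.0, 4.4, 4.5]
drawings  = [3.1, 3.6, 2.9, 3.8, 3.3, 3.5, 3.0, 3.7, 3.2, 3.4, 3.6]
t = welch_t(immersive, drawings)
# |t| well above ~2 would indicate a statistically significant difference
```

In SPSS this corresponds to the independent-samples t-test output (the "equal variances not assumed" row).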
Procedia PDF Downloads 134

103 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier’s performance in determining entailment between the sentences the translator has rendered in the classifier's language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) for optimizing the translator’s policy and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
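The reward loop described above can be sketched as a toy policy-gradient problem: a "translator" policy picks one of a few candidate translations, a pre-trained "entailment classifier" (here a stub that succeeds only on the faithful candidate) provides the reward, and REINFORCE nudges the policy toward rewarded choices. Everything here is a stand-in for illustration, not the authors' architecture.

```python
import math
import random

random.seed(1)

CANDIDATES = ["good translation", "bad translation A", "bad translation B"]
GOLD = 0                        # index the stub classifier handles correctly

def classifier_reward(choice):
    # stub: the classifier determines entailment correctly only when the
    # translator communicated the faithful candidate
    return 1.0 if choice == GOLD else 0.0

logits = [0.0, 0.0, 0.0]        # softmax policy over candidate translations
lr = 0.5
for step in range(300):
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # sample an action from the policy
    r, acc, action = random.random(), 0.0, len(probs) - 1
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            action = i
            break
    reward = classifier_reward(action)
    adv = reward - 1.0 / len(CANDIDATES)    # simple constant baseline
    for i in range(len(logits)):            # REINFORCE: grad of log-softmax
        grad = (1.0 if i == action else 0.0) - probs[i]
        logits[i] += lr * adv * grad

z = sum(math.exp(l) for l in logits)
final_probs = [math.exp(l) / z for l in logits]
# after training, the policy should prefer the rewarded candidate
```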
Procedia PDF Downloads 128

102 Environmental Benefits of Corn Cob Ash in Lateritic Soil Cement Stabilization for Road Works in a Sub-Tropical Region
Authors: Ahmed O. Apampa, Yinusa A. Jimoh
Abstract:
The potential economic viability and environmental benefits of using a biomass waste, such as corn cob ash (CCA), as a pozzolan in stabilizing soils for road pavement construction in a sub-tropical region were investigated. Corn cob was obtained from Maya in South West Nigeria and processed to an ash with characteristics similar to the Class C fly ash pozzolan specified in ASTM C618-12. This was then blended with ordinary Portland cement (OPC) in CCA:OPC ratios of 1:1, 1:2 and 2:1. Each of these blends was then mixed with a lateritic soil of AASHTO classification A-2-6(3) in varying percentages from 0 – 7.5% at 1.5% intervals. The soil-CCA-cement mixtures were thereafter tested for geotechnical index properties, including the BS Proctor compaction, California Bearing Ratio (CBR) and unconfined compression strength tests. The tests were repeated for a soil-cement mix without any CCA blending. The costs of the binder inputs and the optimal CCA:OPC blends in the stabilized soil were thereafter analyzed by developing algorithms that relate the experimental data on strength parameters (unconfined compression strength, UCS, and California Bearing Ratio, CBR) to the bivariate independent variables CCA and OPC content, using Matlab R2011b. An optimization problem was then set up minimizing the cost of chemical stabilization of laterite with CCA and OPC, subject to the constraints of minimum strength specifications. The Evolutionary Engine and the Generalized Reduced Gradient option of the Solver of MS Excel 2010 were used separately on the cells to obtain the optimal CCA:OPC blend. The optimal blend attaining the required strength of 1800 kN/m2 was determined for the 1:2 CCA:OPC blend as a 5.4% mix (OPC content 3.6%), compared with 4.2% for the OPC-only option, and for the 1:1 blend as a 6.2% mix (OPC content 3%). The 2:1 blend did not attain the required strength, though over a 100% gain in UCS value was obtained over the control sample with 0% binder.
Given that 0.97 tonne of CO2 is released for every tonne of cement used (OEE, 2001), the reduced OPC requirement to attain the same result indicates the possibility of reducing the construction industry's net CO2 contribution to the environment by 14 – 28.5% if CCA:OPC blends are widely used in soil stabilization, going by the results of this study. The paper concludes by recommending that Nigeria and other developing countries in the sub-tropics with an abundant stock of biomass waste look in the direction of intensifying the use of biomass waste as fuel and of the derived ash for the production of pozzolans for road-works, thereby reducing overall greenhouse gas emissions in compliance with the objectives of the United Nations Framework Convention on Climate Change.
Keywords: corn cob ash, biomass waste, lateritic soil, unconfined compression strength, CO2 emission
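The cost-minimization problem the study solves with the Excel Solver can be sketched as a grid search: minimize binder cost subject to UCS ≥ 1800 kN/m2 over CCA and OPC contents. The strength surface and unit costs below are invented placeholders, not the fitted Matlab model or market prices.

```python
COST_OPC, COST_CCA = 80.0, 25.0      # assumed cost per % of binder per m3

def ucs(cca, opc):
    """Hypothetical fitted strength surface (kN/m2) in CCA and OPC percent."""
    return 120.0 + 90.0 * cca + 420.0 * opc + 15.0 * cca * opc

best = None
for cca10 in range(0, 76):           # CCA: 0.0 .. 7.5 % in 0.1 % steps
    for opc10 in range(0, 76):       # OPC: 0.0 .. 7.5 % in 0.1 % steps
        cca, opc = cca10 / 10, opc10 / 10
        if ucs(cca, opc) >= 1800.0:  # minimum strength constraint
            cost = COST_CCA * cca + COST_OPC * opc
            if best is None or cost < best[0]:
                best = (cost, cca, opc)
cost, cca_star, opc_star = best
```

With these placeholder coefficients, blending cheap CCA with OPC undercuts the cost of the OPC-only feasible mix, which is the economic argument the abstract makes.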
Procedia PDF Downloads 373