Search results for: dynamically reconfigurable
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 289

49 Constructivism and Situational Analysis as Background for Researching Complex Phenomena: Example of Inclusion

Authors: Radim Sip, Denisa Denglerova

Abstract:

It is impossible to capture complex phenomena, such as inclusion, with reductionism. The most common form of reductionism is the objectivist approach, in which processes and relationships are reduced to entities and clearly outlined phases, with a consequent search for relationships between them. Constructivism as a paradigm and situational analysis as a methodological research portfolio represent a way to avoid the dominant objectivist approach. They work with a situation, i.e. with the essential blending of actors and their environment. Primary transactions take place between actors and their surroundings. Researchers create constructs based on their need to solve a problem. Concepts therefore do not describe reality, but rather a complex of real needs in relation to the available options for how such needs can be met. Examining a complex problem requires corresponding methodological tools and an overall research design. Using original research on inclusion in the Czech Republic as an example, this contribution demonstrates that inclusion is not a substance easily described, but rather a relationship field that changes its forms in response to its actors’ behaviour and current circumstances. Inclusion consists of a dynamic relationship between an ideal, real circumstances, and ways to achieve that ideal under the given circumstances. Such achievement takes many shapes and thus cannot be captured by a description of objects; it can be expressed in relationships in a situation defined by time and space. Situational analysis offers tools to examine such phenomena. It understands a situation as a complex of dynamically changing aspects and prefers relationships and positions in the given situation over a clear and final definition of actors, entities, etc. Situational analysis assumes the creation of constructs as a tool for solving the problem at hand. It emphasizes the meanings that arise in the process of coordinating human actions, and the discourses through which these meanings are negotiated. Finally, it offers “cartographic tools” (situational maps, social worlds/arenas maps, positional maps) that can capture this complexity in other than linear-analytical ways. This approach allows inclusion to be described as a complex of phenomena taking place with a certain historical preference, a complex that can be overlooked if analyzed with a more traditional approach.

Keywords: constructivism, situational analysis, objective realism, reductionism, inclusion

Procedia PDF Downloads 120
48 Dynamic Ambulance Deployment to Reduce Ambulance Response Times Using Geographic Information Systems

Authors: Masoud Swalehe, Semra Günay

Abstract:

Developed countries are losing many lives to non-communicable diseases compared to their developing counterparts. The effects of these diseases are mostly sudden and manifest within a very short time prior to death or a dangerous attack, which has consolidated the significance of the emergency medical system (EMS) as one of the vital areas of healthcare service delivery. The primary objective of this research is to reduce the ambulance response times (RT) of the Eskişehir province EMS, since a number of studies have established a relationship between ambulance response times and the survival chances of patients, especially out-of-hospital cardiac arrest (OHCA) victims. It has been found that patients who receive out-of-hospital medical attention within a few (4) minutes after cardiac arrest, because of low ambulance response times, stand higher chances of survival than their counterparts who wait longer (more than 12 minutes) for out-of-hospital medical care because of higher ambulance response times. The study will make use of geographic information systems (GIS) technology to dynamically reallocate ambulance resources according to demand and time so as to reduce ambulance response times. The geospatial-time distribution of ambulance calls (demand) will be used as a basis for optimal ambulance deployment using the system status management (SSM) strategy, achieving greater demand coverage with the same number of ambulance resources and thereby reducing response times. Drive-time polygons will be used to derive time-specific facility coverage areas and to suggest additional candidate facility sites to which ambulance resources can be moved to serve higher demand, making use of network analysis techniques. Emergency ambulance call data from 1st January 2014 to 31st December 2014, obtained from the Eskişehir province health directorate, will be used in this study. The study focuses on the reduction of ambulance response times, a key EMS performance indicator.
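
A minimal sketch of the coverage logic behind demand-driven redeployment of this kind, assuming a drive-time matrix already exported from a GIS network analysis; the site names, zones, and numbers below are hypothetical placeholders, not the study's data:

```python
# Greedy maximal-coverage sketch: pick ambulance posts per time slot so that
# the most calls are reachable within a response-time threshold.
RESPONSE_LIMIT_MIN = 4.0

def greedy_deployment(drive_time, demand, n_ambulances):
    """drive_time[site][zone] = minutes; demand[zone] = calls in this time slot."""
    chosen, covered = [], set()
    for _ in range(n_ambulances):
        best_site, best_gain = None, -1
        for site, times in drive_time.items():
            if site in chosen:
                continue
            gain = sum(demand[z] for z, t in times.items()
                       if t <= RESPONSE_LIMIT_MIN and z not in covered)
            if gain > best_gain:
                best_site, best_gain = site, gain
        chosen.append(best_site)
        covered |= {z for z, t in drive_time[best_site].items()
                    if t <= RESPONSE_LIMIT_MIN}
    return chosen

drive_time = {"post_A": {"z1": 3.2, "z2": 6.1}, "post_B": {"z1": 5.0, "z2": 3.8}}
demand = {"z1": 12, "z2": 30}          # calls observed in this hour of the week
print(greedy_deployment(drive_time, demand, n_ambulances=1))  # -> ['post_B']
```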

Keywords: emergency medical services, system status management, ambulance response times, geographic information system, geospatial-time distribution, out of hospital cardiac arrest

Procedia PDF Downloads 279
47 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit

Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek

Abstract:

In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storage. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach for calculating the heat transfer rate in latent thermal energy storage with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers; however, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment. The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.
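
A minimal sketch of a finite-volume conduction update for one HEX segment, simplified to a 1D explicit scheme; the material values and grid are illustrative stand-ins, not the PCM data or the full quasi-stationary model of the paper:

```python
import numpy as np

k, rho, cp = 0.2, 900.0, 2000.0      # W/mK, kg/m3, J/kgK (generic PCM-like values)
dx, dt, n = 1e-3, 0.05, 50
alpha = k / (rho * cp)
assert alpha * dt / dx**2 < 0.5      # explicit stability limit

T = np.full(n, 60.0)                 # PCM initially liquid at 60 C
T_wall = 20.0                        # heat-transfer-fluid side of the segment
for _ in range(2000):
    Ti = T.copy()
    T[0] = T_wall                    # boundary refreshed each step, as in the
                                     # dynamically updated inner-wall condition
    T[1:-1] = Ti[1:-1] + alpha * dt / dx**2 * (Ti[2:] - 2*Ti[1:-1] + Ti[:-2])
    T[-1] = T[-2]                    # adiabatic symmetry plane of the segment
print(T[:5])
```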

Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage

Procedia PDF Downloads 240
46 Florida’s Groundwater and Surface Water System Reliability in Terms of Climate Change and Sea-Level Rise

Authors: Rahman Davtalab

Abstract:

Florida is one of the states most vulnerable to natural disasters among the 50 states of the USA. The state is exposed to tropical storms, hurricanes, storm surge, landslides, etc. Beyond these natural phenomena, global warming, sea-level rise, and other anthropogenic environmental changes make a very complicated and unpredictable system for decision-makers. In this study, we tried to highlight the effects of climate change and sea-level rise on surface water and groundwater systems for three different geographical locations in Florida: the Main Canal of Jacksonville Beach (in the northeast of Florida, adjacent to the Atlantic Ocean), Grace Lake in central Florida, far from the surrounding coastline, and Mc Dill, adjacent to Tampa Bay and the Gulf of Mexico. An integrated hydrologic and hydraulic model was developed and simulated for all three cases, including surface water, groundwater, or a combination of both. For the Main Canal-Jacksonville Beach case, the investigation showed that a 76 cm sea-level rise by the 2060 time horizon could increase the flow velocity of the tide cycle at the main canal's outlet and headwater. This case also revealed how sea-level rise could change the tide duration, potentially affecting the coastal ecosystem. As expected, sea-level rise can raise the groundwater level. Therefore, for the Mc Dill case, the effect of groundwater rise on soil storage and the performance of stormwater retention ponds was investigated. The study showed that sea-level rise increases the pond’s seasonal high water by up to 40 cm by the 2060 time horizon. The reliability of the retention pond drops from 99% under current conditions to 54% in the future. The results also proved that the retention pond could not retain and infiltrate the designed treatment volume within 72 hours, which is a significant indication of increasing pollutants in the future. The Grace Lake case study investigates the effects of climate change on groundwater recharge. This study showed, using dynamically downscaled data, that groundwater recharge can decline by up to 24% by the mid-21st century.

Keywords: groundwater, surface water, Florida, retention pond, tide, sea level rise

Procedia PDF Downloads 155
45 The Thinking of Dynamic Formulation of Rock Aging Agent Driven by Data

Authors: Longlong Zhang, Xiaohua Zhu, Ping Zhao, Yu Wang

Abstract:

The construction of mines, railways, highways, water conservancy projects, etc., has created a large number of high steep slope wounds in China. Under the premise of slope stability and safety, repairing these wound spaces at minimum cost, in a green manner, and close to their natural state has become a new problem. Nowadays, in situ element testing and analysis, monitoring, field quantitative factor classification, and assignment evaluation produce vast amounts of data. Data processing and analysis will inevitably differentiate the morphology, mineral composition, and physicochemical properties of rock wounds, and by these differences the appropriate restoration techniques and materials can be matched dynamically. In the present research, based on a grid partition of the slope surface, we tested the content of the combined oxides of the rock minerals (SiO₂, CaO, MgO, Al₂O₃, Fe₃O₄, etc.) and classified and assigned values to the hardness and breakage of the rock texture. The data on the essential factors were interpolated and normalized in GIS, forming a differential zoning map of the slope space. According to the physical and chemical properties and spatial morphology of the rocks in different zones, organic acids (plant waste fruit, fruit residue, etc.), natural mineral powders (zeolite, apatite, kaolin, etc.), water-retaining agent, and plant gum (melon powder) were mixed in different proportions to form rock aging agents. Spraying aging agents with different formulas on the slopes in different sections can effectively age the fresh rock wound, providing convenience for seed implantation and reducing the transformation of heavy metals in the rocks. Through extensive engineering practice, a dynamic data platform of rock aging agent formulas has been formed, which provides materials for the restoration of different slopes. It will also provide a guideline for the mixed use of various natural materials to solve the complex, non-uniform ecological restoration problem.
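
A toy sketch of the data step this describes: per-cell factors are min-max normalized and each grid cell is matched to the nearest aging-agent formula profile. All numbers, factor columns, and formula profiles are hypothetical placeholders:

```python
import numpy as np

cells = np.array([                  # columns: SiO2 %, CaO %, hardness class, breakage class
    [62.0, 4.1, 3, 2],
    [48.5, 9.8, 5, 4],
    [55.2, 6.3, 2, 1],
])
norm = (cells - cells.min(0)) / (cells.max(0) - cells.min(0))   # min-max to [0, 1]

formulas = {                        # normalized profiles each mix is designed for
    "acid_rich":    np.array([0.9, 0.1, 0.8, 0.9]),
    "mineral_rich": np.array([0.2, 0.9, 0.1, 0.1]),
}
for i, cell in enumerate(norm):
    best = min(formulas, key=lambda f: np.linalg.norm(cell - formulas[f]))
    print(f"cell {i}: {best}")      # formula matched to this zone of the slope
```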

Keywords: data-driven, dynamic state, high steep slope, rock aging agent, wounds

Procedia PDF Downloads 86
44 Numerical Analysis of CO₂ Storage as Clathrates in Depleted Natural Gas Hydrate Formation

Authors: Sheraz Ahmad, Li Yiming, Li XiangFang, Xia Wei, Zeen Chen

Abstract:

Holding CO₂ at massive scale in the enclathrated solid matter called hydrate can be perceived as one of the most reliable methods of CO₂ sequestration for greenhouse gas emission control and global warming prevention. In this study, a dynamically coupled mass and heat transfer mathematical model is developed which elaborates the unsteady behavior of CO₂ flowing into a porous medium and converting itself into hydrates. The numerical solution of the combined model by an implicit finite difference method is explained, and by coupling the mass, momentum, and heat conservation relations, an integrated model can be established to analyze CO₂ hydrate growth within P-T equilibrium conditions. The CO₂ phase transition, the effect of hydrate nucleation through exothermic heat release, and variations of thermo-physical properties during hydrate nucleation have been studied. The results illustrate that the formation pressure distribution becomes stable at the early stage of the hydrate nucleation process and remains stable afterward, but the formation temperature does not remain stable and varies during CO₂ injection and the hydrate nucleation process. Initially, the temperature drops due to the injection of cold, high-pressure CO₂; then, when massive hydrate growth is triggered, the temperature increases under the influence of exothermic heat evolution, eventually surpassing the initial formation temperature before CO₂ injection. The hydrate growth rate increases with increasing injection pressure in the long formation, and it also extends the overall hydrate-covered length within the same induction period. The results also show that the injection pressure conditions and hydrate growth rate affect other parameters such as CO₂ velocity, CO₂ permeability, CO₂ density, and CO₂ and H₂O saturation inside the porous medium. In order to enhance the hydrate growth rate and extend the hydrate-covered length, the injection temperature was reduced, but this did not give satisfactory outcomes. Hence, CO₂ injection into vacated natural gas hydrate porous sediment may form hydrate under low-temperature and high-pressure conditions, but it seems very challenging at a huge scale in lengthy formations.
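
A minimal sketch of an implicit finite-difference step for the heat equation with an exothermic source, the kind of building block such a coupled model rests on; the grid, properties, and source profile are illustrative, not the paper's parameters:

```python
import numpy as np

# Implicit step: (I - r*L) T_new = T_old + dt*q/(rho*cp), unconditionally stable.
n, dx, dt = 40, 0.5, 10.0
alpha, rho_cp = 1e-6, 2.5e6
T = np.full(n, 4.0)                        # formation initially at 4 C
q = np.zeros(n); q[5:15] = 30.0            # W/m3, zone of exothermic hydrate growth

r = alpha * dt / dx**2
A = np.eye(n) * (1 + 2*r)
A += np.diag([-r]*(n-1), 1) + np.diag([-r]*(n-1), -1)
A[0, :] = 0; A[0, 0] = 1                   # fixed injection-face temperature
A[-1, -2] = -1; A[-1, -1] = 1              # zero-gradient outer boundary

for _ in range(100):
    b = T + dt * q / rho_cp
    b[0] = 1.0                             # cold high-pressure CO2 at 1 C
    b[-1] = 0.0
    T = np.linalg.solve(A, b)              # tridiagonal system each time step
print(T.round(3)[:10])
```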

Keywords: CO₂ hydrates, CO₂ injection, CO₂ phase transition, CO₂ sequestration

Procedia PDF Downloads 99
43 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way the network is managed, measured, debugged and controlled dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. An active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool being an example), while a passive measurement method monitors the traffic for the purpose of deriving measurement values. Both active and passive measurement methods are useful for collecting traffic statistics and monitoring network traffic. Although there has been work focusing on measuring traffic statistics in an SDN environment, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a certain time and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests. Thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include the type of protocol, the number of bytes and the number of packets, among others. Besides, this module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts. In order to carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics, and the switch replies with the required information in a message called a statistics reply message. Thus, the POX controller is notified and updated with any changes that may happen in the entire network within a very short time. The aim of this study is therefore to prepare a list of the important statistics elements collected from the whole network, to be used for further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
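
A sketch of this polling pattern using POX's standard OpenFlow 1.0 flow-statistics API; the five-second timer and stats request/reply exchange follow the description above, while the web/non-web split on TCP port 80 is an assumption for illustration:

```python
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

log = core.getLogger()

def _request_stats():
    # ofp_stats_request -> switch answers with a "statistics reply" message
    # carrying per-flow and per-port packet and byte counters
    for conn in core.openflow._connections.values():
        conn.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
        conn.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_flow_stats(event):
    web_bytes = web_packets = 0
    for f in event.stats:
        # crude web/non-web split: TCP (nw_proto 6) on port 80 counts as HTTP
        if f.match.nw_proto == 6 and 80 in (f.match.tp_src, f.match.tp_dst):
            web_bytes += f.byte_count
            web_packets += f.packet_count
    log.info("web traffic: %s bytes, %s packets, %s active flows",
             web_bytes, web_packets, len(event.stats))

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
    Timer(5, _request_stats, recurring=True)   # poll every five seconds
```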

Keywords: mininet, OpenFlow, POX controller, SDN

Procedia PDF Downloads 196
42 Knowledge Capital and Manufacturing Firms’ Innovation Management: Exploring the Impact of Transboundary Investment and Assimilative Capacity

Authors: Suleman Bawa, Ayiku Emmanuel Lartey

Abstract:

Purpose - This paper aims to examine the association between knowledge capital and multinational firms’ innovation management. We also explore the impact of transboundary investment and assimilative capacity on the relationship between knowledge capital and multinational firms’ innovation management. The vital position of knowledge capital and innovation management in today’s increasingly volatile environment, coupled with fierce competition, has been extensively acknowledged by academics and industry. Design/methodology/approach - A theoretical association model and an empirical correlation analysis were constructed based on relevant research, using data collected from 19 multinational firms in Ghana, and path analysis was conducted using SPSS 22.0 and AMOS 24.0 to test the formulated hypotheses. Findings - Varied conclusions are drawn from the theoretical inferences and empirical tests. For multinational firms, knowledge capital remains positively significant for innovation management: multinational firms with advanced knowledge capital are likely to generate greater innovation management. Second, transboundary investment efficiently mediates the association between knowledge physical capital, knowledge interactive capital, and innovation management, while this effect is insignificant between knowledge empirical capital and innovation management. Lastly, the impact of transboundary investment and assimilative capacity on the association between knowledge capital and innovation management is established. We summarize the implications for managers based on our outcomes. Research limitations/implications - Multinational firms must dynamically build knowledge capital to augment innovation management. Conversely, knowledge capital motivates multinational firms to implement transboundary investment and cultivate assimilative capacity. Accordingly, multinational firms can efficiently exploit diverse information to augment their corporate innovation management. Practical implications - This paper presents a comprehensive justification of knowledge capital and manufacturing firms’ innovation management by exploring the impact of transboundary investment and assimilative capacity within the manufacturing industry, its sequential progress, and its associated challenges. Originality/value - This paper is among the first to provide empirical results on knowledge capital and manufacturing firms’ innovation management by exploring the impact of transboundary investment and assimilative capacity within the manufacturing industry. Additionally, aligning knowledge as a coordinative instrument is a significant input to our discernment in this area.

Keywords: knowledge capital, transboundary investment, innovation management, assimilative capacity

Procedia PDF Downloads 35
41 Analysis of the Operating Load of Gas Bearings in the Gas Generator of the Turbine Engine during a Deceleration to Dash Maneuver

Authors: Zbigniew Czyz, Pawel Magryta, Mateusz Paszko

Abstract:

The paper discusses the loads acting on the drive unit of an unmanned helicopter during a deceleration to dash maneuver. Special attention is given to the loads on the bearings in the gas generator of the turbine engine with which the helicopter will be equipped. The analysis was based on the speed changes as a function of time for a manned flight of the PZL W3-Falcon helicopter. The dependence of speed change during the flight was approximated by the least squares method, and its changes in acceleration were then determined. This enabled us to specify the forces acting on the bearings of the gas generator under static and dynamic conditions. The deceleration to dash maneuver starts in steady flight at a speed of 222 km/h with horizontal braking and acceleration. When the speed reaches 92 km/h, the inclination of the helicopter is changed dynamically to reach maximum acceleration and near-maximum power, which is held until the initial speed is regained. This type of maneuver is used because shots are ineffective at significant cruising speeds. It is, therefore, important to reduce speed to the optimum as soon as possible and, after taking a shot, to return to the initial (cruising) speed. In deceleration to dash maneuvers, we have to deal with the force of gravity of the rotor assembly, aerodynamic gas forces, and the forces caused by axial acceleration during the maneuver. While we can assume that the working components of the gas generator are designed so that the axial gas forces they create balance the aerodynamic effects, the remaining forces take values that result from the motion profile of the aircraft. Based on the analysis, we can compile the results. For this maneuver, the force of gravity (referring to static calculations) equals 5.638 N for bearing A and 1.631 N for bearing B. As the overload coefficient k in this direction is 1, this force results solely from the weight of the rotor assembly. For this maneuver, the acceleration in the longitudinal direction reached a_max = 4.36 m/s². The overload coefficient k is, therefore, 0.44. When we multiply the overload coefficient k by the weight of all gas generator components that act on the axial bearing, the force caused by axial acceleration during the deceleration to dash maneuver equals only 3.15 N. The results of the calculations are compared with those for other maneuvers, such as acceleration and deceleration and jump up and jump down maneuvers. This work has been financed by the Polish Ministry of Science and Higher Education.
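
A quick numerical check of the overload figures quoted in the abstract (the back-calculated component weight is our inference, not a value stated by the authors):

```python
g = 9.81                      # m/s2
a_max = 4.36                  # m/s2, longitudinal acceleration in the maneuver
k = a_max / g
print(round(k, 2))            # -> 0.44, the quoted overload coefficient

axial_weight = 3.15 / k       # back out the weight acting on the axial bearing
print(round(axial_weight, 2)) # ~7.09 N of gas-generator components
```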

Keywords: gas bearings, helicopters, helicopter maneuvers, turbine engines

Procedia PDF Downloads 307
40 Web Development in Information Technology with Javascript, Machine Learning and Artificial Intelligence

Authors: Abdul Basit Kiani, Maryam Kiani

Abstract:

Online developers now have the tools necessary to create online apps that are not only reliable but also highly interactive, thanks to the introduction of JavaScript frameworks and APIs. The objective is to give a broad overview of the recent advances in the area. The fusion of machine learning (ML) and artificial intelligence (AI) has expanded the possibilities for web development. Modern websites now include chatbots, clever recommendation systems, and customization algorithms built in. In the rapidly evolving landscape of modern websites, it has become increasingly apparent that user engagement and personalization are key factors for success. To meet these demands, websites now incorporate a range of innovative technologies. One such technology is chatbots, which provide users with instant assistance and support, enhancing their overall browsing experience. These intelligent bots are capable of understanding natural language and can answer frequently asked questions, offer product recommendations, and even help with troubleshooting. Moreover, clever recommendation systems have emerged as a powerful tool on modern websites. By analyzing user behavior, preferences, and historical data, these systems can intelligently suggest relevant products, articles, or services tailored to each user's unique interests. This not only saves users valuable time but also increases the chances of conversions and customer satisfaction. Additionally, customization algorithms have revolutionized the way websites interact with users. By leveraging user preferences, browsing history, and demographic information, these algorithms can dynamically adjust the website's layout, content, and functionalities to suit individual user needs. This level of personalization enhances user engagement, boosts conversion rates, and ultimately leads to a more satisfying online experience. In summary, the integration of chatbots, clever recommendation systems, and customization algorithms into modern websites is transforming the way users interact with online platforms. These advanced technologies not only streamline user experiences but also contribute to increased customer satisfaction, improved conversions, and overall website success.

Keywords: Javascript, machine learning, artificial intelligence, web development

Procedia PDF Downloads 39
39 Teachers Leadership Dimension in History Learning

Authors: Lee Bih Ni, Zulfhikar Rabe, Nurul Asyikin Hassan

Abstract:

The Ministry of Education Malaysia dynamically and drastically made History a mandatory subject, in force from 2013. This is in recognition of the nation's heritage and treasures, maintaining true facts and information for future generations of the state. History reveals the civilization of a nation and the facts of its national cultural heritage, and civilization needs to be preserved as a sovereign legacy. Today's generation is the catalyst for the future heirs who will uphold the principles and direction of the country. This is in line with the National Education Philosophy, which aims to shape the holistic development of each individual's potential in order to produce students who are balanced and harmonious in intellectual, spiritual, emotional and physical terms. Hence, understanding the importance of studying History as a pillar of identity and of the history of nationhood must be a priority in the pursuit of knowledge, empowering the spirit of statehood that is nurtured through continuous learning at school. The teacher's leadership role in integrating history is framed by the Teacher Education Philosophy, which empowers the teaching profession: teachers are expected to uphold noble character, support progressive and scientific views, be willing to uphold the state's aspirations, and celebrate the country's cultural heritage. They guarantee individual development and maintain a united, democratic, progressive and disciplined society. The teacher's role as an agent of change and leadership in education begins in the classroom through formal or informal educational processes and extends to schools, communities and the country. The focus of this paper is on how the teacher leadership role influences the effectiveness of teaching and learning history in the classroom environment. The discussion covers teachers' perceptions of the teacher leadership role, teaching leadership, and what constitutes an effective teacher leadership role, with emphasis on the factors affecting the classroom environment, forming the classroom agenda, effective classroom implementation methods, a climate suitable for historical learning, and the challenges teachers face in ensuring effective teaching and learning processes.

Keywords: teacher leadership, leadership lessons, effective classroom, effective teacher

Procedia PDF Downloads 257
38 Design of Data Management Software System Supporting Rendezvous and Docking with Various Spaceships

Authors: Zhan Panpan, Lu Lan, Sun Yong, He Xiongwen, Yan Dong, Gu Ming

Abstract:

The functions of the two-spacecraft docking network, i.e., the communication with and control of a docking target by various spaceships, are realized in the space lab data management system. In order to solve the problem of the complex data communication modes between the space lab and various spaceships, and the problem of software reuse caused by non-standard protocols, a data management software system supporting rendezvous and docking with various spaceships has been designed. The software system is based on the CCSDS Spacecraft Onboard Interface Services (SOIS). It consists of a Software Driver Layer, a Middleware Layer and an Application Layer. The Software Driver Layer hides the various device interfaces behind a uniform device driver framework. The Middleware Layer is divided into three layers: the transfer layer, the application support layer and the system business layer. The communication over the space lab platform bus and the docking bus is realized in the transfer layer. The application support layer provides inter-task communication and unified time management for the software system. The data management software functions are realized in the system business layer, which contains the telemetry management service, telecontrol management service, flight status management service, rendezvous and docking management service, and so on. The Application Layer accomplishes the tasks defined for the space lab data management system using the standard interfaces supplied by the Middleware Layer. On the basis of this layered architecture, the rendezvous and docking tasks and the rendezvous and docking management service are independent within the software system; the rendezvous and docking tasks are activated and executed according to the different spaceships. In this way, the communication management functions in the independent flight mode, the combination mode with the manned spaceship, and the combination mode with the cargo spaceship are achieved separately. The software architecture defines standard application interfaces for the services in each layer. Different requirements of the space lab can be supported by the use of standard services in each layer, and the scalability and flexibility of the data management software can be effectively improved. The system can also dynamically expand the number of visiting spaceships and adapt to their protocols. The software system has been applied in the data management subsystem of the space lab and has been verified in the flight of the space lab. The research results of this paper can provide a basis for the design of the data management system of future space stations.
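
A hypothetical sketch of the layering idea described above: a uniform driver interface hides bus differences, and docking services are registered per visiting-vehicle mode so new spaceship types can be added without touching the application layer. All class and method names here are illustrative, not the flight software's actual interfaces:

```python
class BusDriver:                       # Software Driver Layer: uniform framework
    def read(self): raise NotImplementedError
    def write(self, frame): raise NotImplementedError

class PlatformBus(BusDriver):
    def write(self, frame): print("platform bus <-", frame)

class DockingBus(BusDriver):
    def write(self, frame): print("docking bus  <-", frame)

class TransferLayer:                   # Middleware: transfer layer
    def __init__(self, *drivers): self.drivers = drivers
    def broadcast(self, frame):
        for d in self.drivers:
            d.write(frame)

SERVICES = {}                          # Middleware: system business layer registry
def register(mode):
    def deco(cls):
        SERVICES[mode] = cls
        return cls
    return deco

@register("manned")
class MannedDockingService:
    def run(self, transfer): transfer.broadcast(b"telemetry:manned-combination")

@register("cargo")
class CargoDockingService:
    def run(self, transfer): transfer.broadcast(b"telemetry:cargo-combination")

transfer = TransferLayer(PlatformBus(), DockingBus())
SERVICES["cargo"]().run(transfer)      # service activated per visiting spaceship
```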

Keywords: space lab, rendezvous and docking, data management, software system

Procedia PDF Downloads 343
37 Influence of Long-Term Variability in Atmospheric Parameters on Ocean State over the Head Bay of Bengal

Authors: Anindita Patra, Prasad K. Bhaskaran

Abstract:

The atmosphere and ocean form a dynamically linked system that governs the exchange of energy, mass, and gas at the air-sea interface. The exchange of energy takes place in the form of sensible heat, latent heat, and momentum, commonly referred to as fluxes along the atmosphere-ocean boundary. Large-scale features such as the El Niño-Southern Oscillation (ENSO) are a classic example of the interaction mechanism along the air-sea interface that drives the inter-annual variability of the Earth's climate system. Most importantly, the ocean and atmosphere as a coupled system act in tandem, thereby maintaining the energy balance of the climate system, a manifestation of the coupled air-sea interaction process. The present work is an attempt to understand the long-term variability in atmospheric parameters (from the surface to upper levels) and investigate their role in influencing the surface ocean variables; more specifically, the influence of the atmospheric circulation and its variability on the mean Sea Level Pressure (SLP) has been explored. The study reports a critical examination of both ocean and atmosphere parameters during the monsoon season over the head Bay of Bengal region. A trend analysis has been carried out for several atmospheric parameters, such as air temperature, geopotential height, and omega (vertical velocity), at different vertical levels in the atmosphere (from the surface to the troposphere) covering the period from 1992 to 2012. The Reanalysis 2 dataset from the National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) was used in this study. The study shows that the variability in air temperature and omega corroborates the variation noticed in geopotential height. Further, the study finds that in the lower atmosphere the geopotential heights depict a typical east-west contrast, exhibiting a zonal dipole behavior over the study domain. In addition, the study clearly shows that the variations at different levels in the atmosphere play a pivotal role in supporting the observed dipole pattern, as clearly evidenced by the trends in SLP and the associated surface wind speed and significant wave height over the study domain.
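
A minimal sketch of the trend-analysis step: a least-squares linear trend fitted to an annual series over 1992-2012. The numbers below are synthetic stand-ins for the NCEP-DOE Reanalysis 2 fields:

```python
import numpy as np

years = np.arange(1992, 2013)
rng = np.random.default_rng(0)
# synthetic annual-mean air temperature with a weak imposed warming trend
air_temp = 27.0 + 0.02 * (years - years[0]) + rng.normal(0, 0.15, years.size)

slope, intercept = np.polyfit(years, air_temp, 1)   # least-squares linear fit
print(f"trend: {slope * 10:.3f} deg C per decade")
```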

Keywords: air temperature, geopotential height, head Bay of Bengal, long-term variability, NCEP reanalysis 2, omega, wind-waves

Procedia PDF Downloads 204
36 A Method and System for Secure Authentication Using One Time QR Code

Authors: Divyans Mahansaria

Abstract:

User authentication is an important security measure for protecting confidential data and systems. However, vulnerability while authenticating into a system has significantly increased; thus, necessary mechanisms must be deployed during the authentication process to safeguard users from attacks. The proposed solution implements a novel authentication mechanism to counter various forms of security breach, including phishing, Trojan horses, replay, key logging, Asterisk logging, shoulder surfing, brute force search and others. A QR code (Quick Response Code) is a type of matrix barcode or two-dimensional barcode that can be used for storing URLs, text, images and other information. In the proposed solution, during each new authentication request, a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to a plurality of elements and stored within the QR code. The mapping of the generic information to the plurality of elements is randomized at each new login, and thus the QR code generated for each new authentication request is for one-time use only. In order to authenticate into the system, the user needs to decode the QR code using QR code decoding software, which needs to be installed on a handheld mobile device such as a smartphone or personal digital assistant (PDA). On decoding the QR code, the user is presented with a mapping between the generic piece of information and the plurality of elements, from which the user derives the cipher secret information corresponding to his/her actual password. In place of the actual password, the user then uses this cipher secret information to authenticate into the system. The authentication terminal receives the cipher secret information and uses a validation engine to decipher it; if the entered secret information is correct, the user is granted access to the system. A usability study has been carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adopt. A mathematical analysis of the time taken to carry out a brute force attack on the proposed solution has also been carried out, showing that the solution is almost completely resistant to brute force attack. Today’s standard methods of authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in controlling the various types of authentication-related attacks, especially in a networked computer environment where the use of a username and password for authentication is common.
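
A minimal sketch of the one-time mapping idea under simple assumptions: each login randomly re-maps a generic alphabet to display elements, so the cipher the user types is valid only for that challenge. Rendering the JSON payload as an actual QR image could be done with a library such as qrcode (not shown); the alphabet and payload format are illustrative:

```python
import json
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits

def new_challenge():
    elements = list(ALPHABET)
    shuffled = elements[:]
    secrets.SystemRandom().shuffle(shuffled)       # fresh random mapping per login
    mapping = dict(zip(elements, shuffled))        # generic info -> plural elements
    return mapping, json.dumps(mapping)            # server keeps it; QR carries it

def encipher(password, mapping):
    return "".join(mapping[c] for c in password)

mapping, qr_payload = new_challenge()
cipher = encipher("s3cret", mapping)               # what the user actually types
assert encipher("s3cret", mapping) == cipher       # server-side validation engine
print(cipher)
```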

Keywords: authentication, QR code, cipher / decipher text, one time password, secret information

Procedia PDF Downloads 246
35 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross over classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulation is achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor's invention: holography. The major difficulty here, however, is the lack of a suitable recording medium, so some enhancements were essential; a 2D version of bulk metamaterials has been introduced, the so-called metasurface. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important way to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations, but solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the use of equivalent circuit methods offers the most scalable route to an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuits was proposed to carry the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists of creating an electrical image of the studied structure using the discontinuity plane paradigm, taking its environment into account: the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy, and the environmental effects are included through an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combine the advantages of the reflectarray concept with graphene as the pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit cell input impedance. It was found that the variation of the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we determined that the phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared with tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
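
A sketch of why tuning the chemical potential tunes the response, using the standard intraband (Drude-like) Kubo term for graphene's sheet conductivity rather than the authors' full MoM-GEC formulation; the relaxation time and frequency below are illustrative THz-range values:

```python
import numpy as np
from scipy.constants import e, hbar, k as kB

# sigma(w) = i * 2 e^2 kB T / (pi hbar^2 (w + i/tau)) * ln(2 cosh(mu_c / 2 kB T))
def sigma_intra(omega, mu_c, T=300.0, tau=1e-13):
    return (1j * 2 * e**2 * kB * T / (np.pi * hbar**2 * (omega + 1j / tau))
            * np.log(2 * np.cosh(mu_c / (2 * kB * T))))

omega = 2 * np.pi * 1e12                      # 1 THz
for mu_eV in (0.1, 0.3, 0.5):                 # bias-controlled chemical potential
    s = sigma_intra(omega, mu_eV * e)
    print(f"mu_c = {mu_eV} eV -> sigma = {s:.3e} S")
```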

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 151
34 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing

Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş

Abstract:

Thrust vectoring, especially in military aviation, is a concept that sees much use to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist for both 2D mechanical and 3D injection-based thrust vectoring; yet 3D mechanical thrust vectoring analyses, at this point in time, are lacking variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the numerical model utilized. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10 and 20 degrees, and the results are compared with experimental data. It is observed that the pressure data obtained on the inner surface of the nozzle at each specified pitch angle and under different flow conditions, with pressure ratios of 1.5, 2 and 4, as well as at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibit a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method, using a carefully calculated mathematical shape deformation model that reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range. The morphed vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match. This computational approach allowed for the creation of a comprehensive database of results, containing results at every 0.01° increment of nozzle pitch angle, without the need to generate separate solution domains. The unsteady analyses generated using the morphing method are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
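
A toy sketch of the morphing idea: instead of remeshing a new domain per angle, the nodes of the divergent section are rotated about the vectoring hinge by the current pitch angle. The coordinates, hinge location, and simple rigid rotation are illustrative, not the authors' actual deformation model:

```python
import numpy as np

def morph_nodes(nodes, hinge, pitch_deg):
    th = np.radians(pitch_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return (nodes - hinge) @ R.T + hinge   # rotate divergent-section nodes

hinge = np.array([0.0, 0.0])
divergent = np.array([[0.1, 0.05], [0.2, 0.08], [0.3, 0.10]])   # x, y of mesh nodes

for pitch in np.arange(0.0, 20.01, 0.01):  # the 0.01-degree sweep from 0 to 20
    moved = morph_nodes(divergent, hinge, pitch)
print(moved.round(4))                      # node positions at full deflection
```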

Keywords: thrust vectoring, computational fluid dynamics, 3d mesh morphing, mathematical shape deformation model

Procedia PDF Downloads 56
33 Effects of Global Validity of Predictive Cues upon L2 Discourse Comprehension: Evidence from Self-paced Reading

Authors: Binger Lu

Abstract:

It remains unclear whether second language (L2) speakers can use discourse context cues to predict upcoming information during online comprehension as native speakers do. Some researchers propose that L2 learners may have a reduced ability to generate predictions during discourse processing. At the same time, there is evidence that discourse-level cues are weighed more heavily in L2 processing than in L1. Previous studies have shown that L1 prediction is sensitive to the global validity of predictive cues. The current study explores whether and to what extent L2 learners can dynamically and strategically adjust their prediction according to the global validity of predictive cues in L2 discourse comprehension, as native speakers do. In a self-paced reading experiment, Chinese native speakers (N=128), C-E bilinguals (N=128), and English native speakers (N=128) read high-predictable (e.g., Jimmy felt thirsty after running. He wanted to get some water from the refrigerator.) and low-predictable (e.g., Jimmy felt sick this morning. He wanted to get some water from the refrigerator.) discourses in two-sentence frames. The global validity of predictive cues was manipulated by varying the ratio of predictable (e.g., Bill stood at the door. He opened it with the key.) to unpredictable fillers (e.g., Bill stood at the door. He opened it with the card.), such that across conditions the predictability of the final word of the fillers ranged from 100% to 0%. The dependent variable was the reading time on the critical region (the target word and the following word), analyzed with linear mixed-effects models in R. C-E bilinguals showed reliable prediction across all validity conditions (β = -35.6 ms, SE = 7.74, t = -4.601, p < .001), and Chinese native speakers showed a significant effect (β = -93.5 ms, SE = 7.82, t = -11.956, p < .001) in two of the four validity conditions (namely, the High-validity and MedLow conditions, where fillers ended with predictable words in 100% and 25% of cases, respectively), whereas English native speakers did not predict at all (β = -2.78 ms, SE = 7.60, t = -.365, p = .715). There was neither a main effect (χ²(3) = .256, p = .968) nor an interaction (Predictability:Background:Validity, χ²(3) = 1.229, p = .746; Predictability:Validity, χ²(3) = 2.520, p = .472; Background:Validity, χ²(3) = 1.281, p = .734) of Validity with speaker group. The results suggest that prediction occurs in L2 discourse processing, and to a lesser extent in the L1 groups, with a significant effect in some conditions for L1 Chinese and a null effect for L1 English processing, consistent with the view that L2 speakers are more sensitive to discourse cues than L1 speakers. Additionally, the pattern of L1 and L2 predictive processing was not affected by the global validity of predictive cues. C-E bilinguals’ predictive processing may be partly transferred from their L1, as prior research has shown that discourse information plays a more significant role in L1 Chinese processing.
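
A sketch of the kind of analysis named above, transposed from R's lmer to Python's statsmodels purely for illustration: reading time modelled with predictability and validity fixed effects and random intercepts by subject. The data frame here is synthetic, and the random-effects structure is a simplification of what such a study would typically fit:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "rt": rng.normal(420, 40, 400),                     # reading time in ms
    "predictability": np.tile(["high", "low"], 200),
    "validity": np.repeat(["100", "75", "25", "0"], 100),
    "subject": np.tile(np.arange(20), 20),
})

# random intercept per subject; fixed effects and their interaction
m = smf.mixedlm("rt ~ predictability * validity", df, groups=df["subject"]).fit()
print(m.summary())
```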

Keywords: bilingualism, discourse processing, global validity, prediction, self-paced reading

Procedia PDF Downloads 110
32 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal

Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota

Abstract:

This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning model, HBV-light 3.0. Future discharge scenarios were also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application to impact studies at a regional or local level. The dynamically downscaled precipitation and temperature data from the Coupled Global Climate Model 3 (CGCM3) were used for the climate projection, under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for this study, covering watershed and hydro-meteorological characteristics, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being used in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the GAP optimization approach and subsequent calibration were used to obtain several parameter sets that finally reproduced the observed stream flow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001-2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both the A1B and A2 scenarios. These warming trends are greater for maximum than for minimum temperatures throughout the year, indicating an increasing trend in the daily temperature range due to the recent global warming phenomenon. Furthermore, there are decreasing trends in summer discharge in the Langtang Khola (Langtang region), whereas summer discharge is increasing in the Modi Khola (Annapurna region) as well as the Dudh Koshi (Khumbu region) river basins. The changes in the flow regime are more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
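
A minimal sketch of one common bias-correction choice, linear scaling, in which GCM output is scaled (precipitation) or shifted (temperature) so its climatology matches observations; this stands in for whatever correction the study applied, and all numbers are illustrative:

```python
import numpy as np

obs_p, gcm_p = np.array([110., 95., 80.]), np.array([140., 120., 100.])  # mm/month
obs_t, gcm_t = np.array([12.1, 14.0, 15.2]), np.array([10.5, 12.6, 13.9])  # deg C

p_corrected = gcm_p * (obs_p.mean() / gcm_p.mean())   # multiplicative for precip
t_corrected = gcm_t + (obs_t.mean() - gcm_t.mean())   # additive for temperature
print(p_corrected.round(1), t_corrected.round(1))
```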

Keywords: temperature, precipitation, water discharge, water balance, global warming

Procedia PDF Downloads 316
31 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers to solve these drawbacks of SL approaches by combining the financial assets' price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent-environment interaction, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and proves its credibility and its advantages for strategic decision-making.
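
A toy sketch of the environment step such a formulation implies: a continuous action re-weights the portfolio each period, transaction costs penalise turnover, and the state concatenates prices with indicator and sentiment features. The cost rate, feature count, and reward shape are our assumptions for illustration; the TD3 agent itself is not shown:

```python
import numpy as np

N_ASSETS, COST = 3, 0.001

def step(weights_old, action, prices_t, prices_t1, features):
    w = np.clip(action, 0, None)
    w = w / w.sum() if w.sum() > 0 else np.ones(N_ASSETS) / N_ASSETS
    turnover = np.abs(w - weights_old).sum()           # re-allocation traded
    asset_ret = prices_t1 / prices_t - 1.0
    reward = float(w @ asset_ret - COST * turnover)    # net portfolio return
    state = np.concatenate([prices_t1, features])      # prices + indicators + sentiment
    return state, reward, w

s, r, w = step(np.array([1/3] * 3), np.array([0.5, 0.2, 0.3]),
               np.array([10., 20., 30.]), np.array([10.1, 19.8, 30.9]),
               features=np.random.randn(11))           # 10 indicators + 1 sentiment score
print(round(r, 4), w.round(2))
```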

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 151
30 Data Mining in Healthcare for Predictive Analytics

Authors: Ruzanna Muradyan

Abstract:

Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have dynamically evolved, producing a rich tapestry of different, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques inside this vast library, a variety of prospects for precision medicine, predictive analytics, and insight production become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluations are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the cause of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can get practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to properly use these approaches, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.
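
A toy illustration of the risk-stratification workflow sketched above: a model scores patients from record-derived features, and high-risk cases are flagged for early intervention. The features, threshold, and data are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))          # e.g., age, BMI, lab marker, lifestyle index
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 200) > 1).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]    # per-patient predicted risk score
print("high-risk patients flagged:", int((risk > 0.7).sum()))
```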

Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health

Procedia PDF Downloads 31
29 Particle Separation Using Individually-Controlled Magnetic Soft Artificial Cilia

Authors: Yau-Luen Ng, Nathan Banka, Santosh Devasia

Abstract:

In this paper, a method based on soft artificial cilia is introduced to separate particles based on size and mass. In nature, cilia are used for fluid propulsion in the mammalian circulatory system, as well as for swimming and size-selective particle entrainment for feeding in microorganisms. Inspired by biological cilia, an array of artificial cilia was fabricated using polydimethylsiloxane (PDMS) to reproduce the actual motion. A row of four individually-controlled magnetic artificial cilia in a semi-circular channel is actuated by the magnetic fields of four permanent magnets. Each cilium is a slender rectangular cantilever approximately 13 mm long made from a composite of PDMS and carbonyl iron particles. A time-varying magnetic force is achieved by periodically varying the out-of-plane distance from the permanent magnets to the cilia, resulting in large-amplitude deflections of the cilia that can be used to drive fluid motion. Previous results have shown that this system of individually-controlled magnetic cilia can generate effective mixing flows; the purpose of the present work is to apply the individual cilia control to a particle separation task. Based on the observed beating patterns of cilia arrays in nature, the experimental beating pattern was chosen to be a metachronal wave, in which a fixed phase lead or lag is imposed between adjacent cilia; the beating frequency was also varied. For each set of experimental parameters, the channel was filled with water and polyethylene microspheres were introduced at the center of the cilia array. Two types of particles were used: large red microspheres with density 0.9971 g/cm³ and 850-1000 μm average diameter, and small blue microspheres with density 1.06 g/cm³ and 30 μm diameter. At low beating frequencies, all particles were propelled in the mean flow direction; however, the large particles were observed to reverse direction above about 4.8 Hz, whereas the transport direction of the small particles did not reverse until 6 Hz. Between these two transition frequencies, the large and small particles can be separated as they move in opposite directions. The experimental results show that selecting an appropriate cilia beating pattern can lead to selective transport of neutrally-buoyant particles based on their size. Importantly, the separation threshold can be chosen dynamically by adjusting the actuation frequency. However, further study is required to determine the range of particle sizes that can be effectively separated for a given system geometry.
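
A minimal sketch of the actuation schedule behind a metachronal wave: each of the four cilia is driven at the same frequency with a fixed inter-cilium phase lag. The magnet-travel amplitude and the quarter-period lag are illustrative, not the experimental values:

```python
import numpy as np

N_CILIA, AMPLITUDE = 4, 5e-3          # 5 mm out-of-plane magnet travel (assumed)

def magnet_offsets(t, freq_hz, phase_lag_rad):
    i = np.arange(N_CILIA)
    # same frequency per cilium, fixed phase lag between neighbours
    return AMPLITUDE * np.sin(2 * np.pi * freq_hz * t + i * phase_lag_rad)

for t in (0.0, 0.05, 0.10):           # sample the schedule at three instants
    print(t, magnet_offsets(t, freq_hz=4.8, phase_lag_rad=np.pi / 2).round(4))
```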

Keywords: magnetic cilia, particle separation, tunable separation, soft actuators

Procedia PDF Downloads 177
28 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model

Authors: Yew Mun Yip, Dawei Zhang

Abstract:

Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery will malfunction and lead to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of the folding process will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds formed between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process to the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom will attract greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary with the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for this effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen-bond atoms according to equations relating the amount of charge transferred to the atom to the length of the hydrogen bond, derived from quantum-mechanical calculations. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step of the simulation, yet maintains the dynamic update of atomic charges, thereby reducing computational cost and time while still accounting for the polarization effect. The PSBC model is applied to two different β-peptides, namely the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folded structure has been studied in vitro by NMR, and the trpzip peptides, double-stranded β-sheets in which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
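The core of the PSBC idea, updating a backbone atomic charge from the current hydrogen-bond length rather than keeping it fixed, can be sketched as below. The linear form, slope, and reference length are illustrative placeholders; the actual charge-transfer relations in the model are fitted to quantum-mechanical calculations.

```python
# PSBC-style charge update sketch: adjust a hydrogen-bond acceptor's partial
# charge from the bond geometry (coefficients are hypothetical placeholders).
def polarized_charge(q_fixed: float, r_hb: float,
                     slope: float = 0.10, r_ref: float = 2.0) -> float:
    """Return an adjusted partial charge for a hydrogen-bonded backbone atom.

    q_fixed : fixed force-field charge (e)
    r_hb    : current hydrogen-bond length (angstrom)
    slope   : assumed charge transfer per angstrom (e/A), hypothetical
    r_ref   : assumed bond length with zero extra transfer, hypothetical
    """
    # Shorter bond -> stronger polarization -> dq < 0, so an acceptor atom
    # such as the carbonyl O becomes more negative.
    dq = slope * (r_hb - r_ref)
    return q_fixed + dq

# During an MD run, the charges of the backbone C, O, N, H atoms in each
# hydrogen bond would be refreshed from the geometry every step (or every
# few steps), e.g. for a carbonyl oxygen at a 1.9 A bond:
q_O = polarized_charge(q_fixed=-0.51, r_hb=1.9)
print(round(q_O, 3))  # -0.52: slightly more negative than the fixed charge
```

Because the update is an analytic function of geometry, it costs essentially nothing per step compared with re-running quantum-mechanical calculations, which is the computational advantage the abstract claims.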

Keywords: hydrogen bond, polarization effect, protein folding, PSBC

Procedia PDF Downloads 230
27 Floating Populations, Rooted Networks: Tracing the Evolution of Russeifa City in Relation to Marka Refugee Camp

Authors: Dina Dahood Dabash

Abstract:

Refugee camps are habitually defined as receptive sites, transient spaces of exile and nondescript depoliticized places of exception. However, such arguments form partial sides of reality, especially in countries that are geopolitically challenged and rely immensely on international aid. In Jordan, the dynamics brought with the floating population of refugees (Palestinian amongst others) have resulted in spatial after-effects that cannot be easily overlooked. For instance, Palestine refugee camps have turned over time into socioeconomic centers of gravity and cores of spatial evolution. Yet such a position did not emerge instantaneously; amongst various reasons, this paper relates it to a distinctive institutional climate that has been co-produced by the refugees, the host community and the state. This paper aims to investigate the evolution of urban and spatial regulations in Jordan between 1948 and 1995, more specifically state regulations, community regulations and refugee self-regulation, which all interacted dynamically during that period. The paper aims to unpack the relations between refugee camps and their environs to further explore the agency of such floating populations in establishing rooted networks that extend beyond the boundaries of time and place. The paper's argument stems from the fact that the spatial configuration of urban systems is not only an outcome of a historical evolutionary process but also a result of interactions between the actors. The research operationalizes Marka camp in Jordan as a case study. Marka Camp is one of the six "emergency" camps erected in 1968 to shelter 15,000 Palestine refugees and displaced persons who left the West Bank and Gaza Strip. Nowadays, the camp shelters more than 50,000 refugees in the same area of land. The camp is located in Russeifa, a city in Zarqa Governorate in Jordan. Together with Amman and Zarqa, Russeifa is part of a larger metropolitan area that is home to more than half of Jordan's businesses. The paper aspires to further understand the post-conflict strategies historically applied in Jordan, which can be employed to handle more recent geopolitical challenges such as the Syrian refugee crisis. Methodological framework: the paper traces the evolution of the refugee-camp regulating norms in Jordan in parallel with the horizontal and vertical evolution of the Marka camp and its surroundings. Consequently, the main methods employed are historical and mental tracing and interviews, in addition to the use of available aerial and archival photos of the Marka camp and its surroundings.

Keywords: forced migration, Palestine refugee camps, spatial agency, urban regulations

Procedia PDF Downloads 158
26 Geospatial Analysis of Spatio-Temporal Dynamics and Environmental Impact of Informal Settlement: A Case of Adama City, Ethiopia

Authors: Zenebu Adere Tola

Abstract:

Informal settlements behave dynamically over space and time, and the number of people living in such housing areas is growing worldwide. In the cities of developing countries, especially in sub-Saharan Africa, poverty, high unemployment, poor living conditions, lack of transparency and accountability, and weak governance are the major factors that drive people to hold land informally and build houses for residential or other purposes. In most Ethiopian cities, informal settlement is concentrated in peripheral areas because people can easily acquire land for housing from local farmers, brokers, and speculators without permission from the concerned authorities. In Adama, informal settlement has created risky living conditions and led to environmental problems in natural areas; the main reason for this is the lack of sufficient knowledge about informal settlement development. At the same time, there is a strong need to transform informal into formal settlements and to gain more control over the actual spatial development of informal settlements. To tackle the issue, it is essential to understand the scale of the problem, and doing so requires up-to-date technology; high-resolution imagery is therefore well suited to detecting informal settlement in Adama city. The main objective of this study is to assess the spatio-temporal dynamics and environmental impacts of informal settlement using object-based image analysis (OBIA). Specifically, the objectives are to identify informal settlement in the study area, to determine the change in the extent and pattern of informal settlement, and to assess the environmental and social impacts of informal settlement in the study area. Reliable procedures for detecting the spatial behavior of informal settlements are required in order to react at an early stage to changing housing situations, and obtaining up-to-date spatial information about informal settlement areas is vital for any enhancement actions in urban or regional planning. The study uses aerial photography to trace the growth and change of informal settlements in Adama city, and eCognition software to classify built-up and non-built-up areas.
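The object-based workflow (segment the image into objects, describe each object with features, classify objects as built-up or not) can be sketched in open-source Python as below. The study itself uses eCognition; the segmentation parameters, feature set, and placeholder labels here are illustrative assumptions, with a synthetic array standing in for a real aerial tile.

```python
# Open-source OBIA sketch: SLIC segmentation + per-object features +
# a classifier (a stand-in for an eCognition rule set; all values synthetic).
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in for an aerial RGB tile; a real run would load imagery instead.
image = rng.random((256, 256, 3))

# Over-segment the image into spectrally homogeneous objects.
segments = slic(image, n_segments=500, compactness=10)

# Per-object features: mean color plus a simple texture proxy (std dev).
feats = []
for seg_id in np.unique(segments):
    pixels = image[segments == seg_id]
    feats.append(np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)]))
feats = np.asarray(feats)

# Placeholder labels; in practice these come from digitized training objects
# (built-up = 1, non-built-up = 0) interpreted from the aerial photos.
labels_train = rng.integers(0, 2, len(feats))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, labels_train)
pred = clf.predict(feats)
print("objects classified as built-up:", int(pred.sum()))
```

Change detection then amounts to running the same classification on imagery from two dates and differencing the built-up masks per object.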

Keywords: informal settlement, change detection, environmental impact, object based analysis

Procedia PDF Downloads 31
25 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops

Authors: Simon Komesker, Achim Wagner, Martin Ruskowski

Abstract:

In times of volatile markets with fluctuating demand and the uncertainty of global supply chains, flexible production systems are the key to an efficient implementation of a desired production program. In this publication, the authors present a holistic information concept that takes into account various influencing factors for operating towards the global optimum. To this end, a strategy is developed for implementing multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry. The main contribution of this work is a system structure mixing central and decentral planning and control, evaluated in a simulation framework. The information system structure in current production systems in the automotive industry is rigidly and hierarchically organized in monolithic systems. The production program is created on a rule basis with the premise of achieving a uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution level. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feeding results back from the process execution level (resources) and the process-supporting (quality and logistics) systems for reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure, which results, for example, in material-intensive processes (buffers and safety stocks; the two-container principle is applied even across variants). The limited degrees of freedom of line production have produced the principle of progress figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems with more degrees of freedom, such as modular production according to known approaches, are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of the quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent system. The result is an alternative system structure that combines centralized process planning with decentralized processing. Agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems that lead to a near-global optimum for hybrid planning. This allows robust, near-to-plan execution with integrated quality control and intralogistics.
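A toy version of the hybrid control loop described here, a central step aggregating scores from decentralized quality, logistics, and process agents, is sketched below. The agent names, weights, and scoring rules are illustrative assumptions, not the authors' implementation.

```python
# Minimal hybrid planning/control sketch: decentralized agents score orders,
# a central step aggregates the scores into a release priority (toy model).
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    due_in: float        # hours until due date
    rework_risk: float   # 0..1, estimated quality risk
    parts_ready: bool    # material staged by intralogistics?

class Agent:
    weight = 1.0
    def score(self, order: Order) -> float:
        raise NotImplementedError

class LogisticsAgent(Agent):
    def score(self, order):  # favor orders whose material is already staged
        return 1.0 if order.parts_ready else 0.0

class QualityAgent(Agent):
    def score(self, order):  # penalize high rework risk
        return 1.0 - order.rework_risk

class ProcessAgent(Agent):
    weight = 2.0             # due dates dominate in this sketch
    def score(self, order):  # urgency grows as the due date approaches
        return 1.0 / (1.0 + max(order.due_in, 0.0))

def prioritize(orders, agents):
    """Central step of the loop: aggregate decentralized agent scores."""
    def total(order):
        return sum(a.weight * a.score(order) for a in agents)
    return sorted(orders, key=total, reverse=True)

agents = [LogisticsAgent(), QualityAgent(), ProcessAgent()]
orders = [Order(1, due_in=8, rework_risk=0.1, parts_ready=True),
          Order(2, due_in=1, rework_risk=0.6, parts_ready=False)]
print([o.order_id for o in prioritize(orders, agents)])
```

Re-running the aggregation every cycle with fresh agent scores is what replaces the one-time sequencing of classical line control: priorities shift dynamically as quality and logistics feedback arrives.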

Keywords: holonic manufacturing system, modular production system, planning and control, system structure

Procedia PDF Downloads 145
24 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of web components. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported for integrating a web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service behavior can be verified through type checking, one of the popular techniques for improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language rather than a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML), adding support for type checking in HTML for JSON-based REST services. STML can be used for defining the expected data types of responses from JSON-based REST services, which are used for populating the content within the HTML elements of a web component. Although JSON has the data types string, number, boolean, object and array, STML supports only string, number and boolean; this is because both object and array are rendered as strings when populated in HTML elements. To define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number and st-boolean for string, number and boolean, respectively. These annotations are written by the developer of a web component and enable other developers to use automated type checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers using STML-based web components. The first performs automated type checking during the development phase, using the browser console to show an error description if an integrated web service does not return a response with the expected data type. The second is a Gulp-based command-line utility that removes the STML attributes before deployment, ensuring the delivery of STML-free web pages in the production environment. Both utilities have been tested for type checking of REST services through STML-based web components, and the results confirm the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it could be extended into a complete HTML-only service testing suite, transforming STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
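To make the checking logic concrete, the sketch below re-expresses it in Python for illustration (the actual utilities run in the browser and in a Gulp pipeline). The sample markup, the `data-field` attribute pairing an annotation with a JSON field, and the service payload are all hypothetical, not part of the STML specification.

```python
# Illustrative re-implementation of STML-style type checking in Python:
# pair each st-* annotation with a JSON field and verify its type.
import json
import re

# Hypothetical STML-annotated markup (data-field is an invented helper
# attribute for this sketch, not defined by STML itself).
html = '''
<user-card>
  <span st-string data-field="name"></span>
  <span st-number data-field="age"></span>
  <span st-boolean data-field="active"></span>
</user-card>
'''

# Hypothetical JSON response from the integrated REST service.
response = json.loads('{"name": "Ada", "age": 36, "active": true}')

EXPECTED = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

for attr, field in re.findall(r'(st-\w+)\s+data-field="(\w+)"', html):
    value = response.get(field)
    # Note: Python treats bool as a subclass of int, so a production
    # checker would special-case booleans for st-number.
    if not isinstance(value, EXPECTED[attr]):
        print(f"type error: '{field}' is {type(value).__name__}, expected {attr}")
    else:
        print(f"ok: '{field}' matches {attr}")
```

The browser-side utility performs essentially this comparison at integration time and reports mismatches on the console, while the Gulp utility's job is simply to strip the st-* attributes before production.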

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 224
23 Artificial Intelligence and Governance in Relevance to Satellites in Space

Authors: Anwesha Pathak

Abstract:

With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process vast amounts of data received from satellites. AI algorithms can analyze satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices. Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data-sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.
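The collision-screening step at the heart of STM can be illustrated with a drastically simplified geometry: given the relative state of two objects over a short window, find the time and distance of closest approach. Real pipelines propagate full orbits (e.g., with SGP4) and model state uncertainty; the straight-line motion, state values, and screening threshold below are illustrative assumptions only.

```python
# Toy conjunction screening: closest approach under straight-line
# relative motion (real STM uses full orbit propagation and covariances).
import numpy as np

# Hypothetical relative position (km) and velocity (km/s) of object B
# with respect to object A at the start of the screening window.
r0 = np.array([12.0, -4.0, 3.0])
v = np.array([-0.08, 0.02, -0.01])

# Minimize |r0 + v*t|^2  ->  t* = -(r0 . v) / (v . v)
t_star = max(-np.dot(r0, v) / np.dot(v, v), 0.0)
miss = np.linalg.norm(r0 + v * t_star)

print(f"closest approach in {t_star:.0f} s, miss distance {miss:.2f} km")
if miss < 5.0:  # hypothetical screening threshold
    print("flag for collision-avoidance review")
```

An AI layer would sit on top of screenings like this, prioritizing which of thousands of daily conjunctions merit operator attention and proposing avoidance maneuvers.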

Keywords: satellite, space debris, traffic, threats, cyber security

Procedia PDF Downloads 38
22 Reinventing Smart Tourism via the Use of Smart Gamified and Gaming Applications in Greece

Authors: Sofia Maria Poulimenou, Ioannis Deliyannis, Elisavet Filippidou, Stamatella Laboura

Abstract:

Smart technologies are being actively used to improve the travel experience and to promote or demote a destination's reputation via a wide variety of social media applications and platforms. This paper conceptualises the design and deployment of smart management apps to promote culture, sustainability and accessibility within two destinations in Greece that represent the extremes of visiting scale. One is the densely visited Corfu, whose old town is a UNESCO World Heritage site. The problems caused by the lack of organisation of the visiting experience and infrastructure affect all parties interacting within the site: visitors, citizens, and the public and private sectors. The second is Kilkis, a low-tourism destination with high seasonality and mostly inbound tourism. Here the issue is that traditional approaches to informing and motivating locals and visitors to explore and taste the culture have not flourished. The problem is apprehended via the design and development of two systems, named "Hologrammatic Corfu" for Corfu old town and "BRENDA" for the area of Kilkis. Although each system is designed independently and features different solutions to the problems, both have been designed by the same team with a novel gaming and gamification methodology. The "Hologrammatic Corfu" application has been designed for the exploration of the site, covering user requirements before, during and after the trip, with the use of transmedia content such as photos, 360-degree videos, augmented reality and hologrammatic videos. A statistical analysis of travellers' visits to specific points of interest is also actively utilized, enabling visitors to be dynamically re-routed during their visit and safeguarding sustainability, accessibility and inclusivity along the entire tourism cycle. "BRENDA" is designed specifically to promote gastronomic and historical tourism. This serious game implements and combines gaming and gamification elements in order to connect local businesses with cultural points of interest. As the project environment has a strong touristic orientation, "BRENDA" supports food-related gamified processes and historical games involving the active participation of both local communities (content providers) and visitors (players), which are more likely to be performed successfully in the informal environment of travelling and to promote sustainable tourism experiences. Finally, the paper presents the ability to re-use existing gaming components within new areas of interest via minimal adaptation, and the use of transmedia aspects that enable destinations to be rebranded into smart destinations.

Keywords: smart tourism, gamification, user experience, transmedia content

Procedia PDF Downloads 133
21 Effect of Two Types of Shoe Insole on the Dynamics of Lower Extremity Joints in Individuals with Leg Length Discrepancy during the Stance Phase of Walking

Authors: Mansour Eslami, Fereshte Habibi

Abstract:

Limb length discrepancy (LLD), or anisomelia, is defined as a condition in which paired limbs are noticeably unequal. During walking, individuals with LLD use compensatory mechanisms to dynamically lengthen the short limb and shorten the long limb to minimize the displacement of the body center of mass and consequently reduce body energy expenditure. Due to the compensatory movements created, an LLD greater than 1 cm increases the odds of lumbar problems and hip and knee osteoarthritis. Insoles are non-surgical therapies that are recommended to improve the walking pattern and pain and to create greater symmetry between the two lower limbs. However, it is not yet clear what effect insoles have on injury-related variables during walking. The aim of the present study was to evaluate the effect of internal and external heel lift insoles on pelvic kinematics in the sagittal and frontal planes and on lower extremity joint moments in individuals with mild leg length discrepancy during the stance phase of walking. Biomechanical data of twenty-eight men with a structural leg length discrepancy of 10-25 mm were collected while they walked under three conditions: shoes without insoles (SH), with internal heel lift insoles (IHLI), and with external heel lift insoles (EHLI). The tests were performed for both the short and the long leg. Pelvic kinematics and joint moments were measured with a motion capture system and a force plate. Five walking trials were performed for each condition, and the average of the five successful trials was used for further statistical analysis. Repeated measures ANCOVA with Bonferroni post hoc tests was used for the comparisons (p ≤ 0.05). In both the internal and external heel lift insole conditions (IHLI, EHLI), there was a significant decrease in the peak values of the lateral and anterior pelvic tilt of the long leg, the hip and knee moments of the long leg, and the ankle moment of the short leg (p ≤ 0.05). Furthermore, significant increases in the peak lateral and anterior pelvic tilt of the short leg were observed in the IHLI and EHLI conditions compared to the shoe-only (SH) condition (p ≤ 0.01). In addition, a significant difference was observed between the IHLI and EHLI conditions in the peak anterior pelvic tilt of the long leg and the plantar flexor moment of the short leg (p = 0.04 and p = 0.04, respectively). Our findings indicate that both IHLI and EHLI can play an important role in controlling excessive pelvic movement in the sagittal and frontal planes in individuals with mild LLD during walking, and that EHLI may be more effective than IHLI in preventing musculoskeletal injuries.
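The pairwise comparisons with Bonferroni correction reported above can be illustrated as follows; the sketch uses paired t-tests on synthetic stand-in data (the study itself used repeated measures ANCOVA with Bonferroni post hoc tests), and all values are invented for illustration.

```python
# Illustrative Bonferroni-corrected paired comparisons across the three
# walking conditions (SH, IHLI, EHLI) on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 28  # participants, matching the study's sample size
# Hypothetical peak anterior pelvic tilt (deg), paired by subject.
sh = rng.normal(12.0, 2.0, n)
ihli = sh - rng.normal(1.0, 0.8, n)
ehli = sh - rng.normal(1.5, 0.8, n)

pairs = {"SH vs IHLI": (sh, ihli),
         "SH vs EHLI": (sh, ehli),
         "IHLI vs EHLI": (ihli, ehli)}

alpha = 0.05 / len(pairs)  # Bonferroni-adjusted threshold
for name, (a, b) in pairs.items():
    t, p = stats.ttest_rel(a, b)
    verdict = "significant" if p < alpha else "n.s."
    print(f"{name}: t={t:.2f}, p={p:.4f}, {verdict} at Bonferroni alpha")
```

Dividing alpha by the number of comparisons is what keeps the family-wise error rate at 0.05 across the three condition contrasts.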

Keywords: kinematic, leg length discrepancy, shoe insole, walking

Procedia PDF Downloads 93
20 Basics of Gamma-Ray Bursts and Their Afterglows

Authors: Swapnil Kumar Singh

Abstract:

Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The delayed emission detected at X-ray, optical, and radio wavelengths, or "afterglow", following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived afterglow is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio): a slowly fading emission created by collisions between the burst ejecta and interstellar gas. At X-ray wavelengths, the GRB afterglow fades quickly at first and then transitions to a less steep decline (further structure appears at later times, which we set aside here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is the energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta. In many ways, "reverse" can be misleading: this shock is still moving outward at relativistic velocity in the rest frame of the star, but it ploughs backward through the ejecta in their frame and slows the expansion. The reverse shock can be dynamically important, as it can carry energy comparable to that of the forward shock. The early-phase description of the GRB afterglow remains good even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow-cooling regime, with the cooling break lying between the optical and the X-ray. Numerous observations support this broad picture, for example the spectral energy distribution of the afterglows of very bright GRBs: the bluer light (optical and X-ray) follows the typical synchrotron forward-shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRB physics and particle physics is needed to unfold the mysteries of the afterglow.
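The power-law spectrum F ∝ E^β mentioned above becomes a straight line in log-log space, so the spectral index can be recovered by a linear fit. The sketch below does this on synthetic fluxes; the amplitude, index, and noise level are assumed values, not GRB data.

```python
# Recover a spectral index beta from synthetic power-law fluxes
# F = A * E**beta via a straight-line fit in log-log space.
import numpy as np

rng = np.random.default_rng(1)
E = np.logspace(0.0, 2.0, 20)   # energies (arbitrary units)
beta_true = -1.2                # assumed spectral index
F = 3.0 * E**beta_true * rng.lognormal(0.0, 0.05, E.size)  # noisy fluxes

# log F = log A + beta * log E  ->  linear least squares in log space
beta_fit, logA = np.polyfit(np.log10(E), np.log10(F), 1)
print(f"fitted spectral index beta = {beta_fit:.2f}")
```

Fits of exactly this kind, applied per epoch and per band, are how the slow-cooling regime and the location of the cooling break between the optical and X-ray are identified in practice.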

Keywords: GRB, synchrotron, X-ray, isotropic energy

Procedia PDF Downloads 63