Search results for: IT engineers
16 Exposing The Invisible
Authors: Kimberley Adamek
Abstract:
According to the Council on Tall Buildings and Urban Habitat, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s; the eastern expansion of the USA's infamous Tornado Alley is just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian level, would have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified in wind tunnels, and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is considered only for basic loading. This typically results in increased structural expense and unwanted mitigation strategies proposed late in a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (Walkie-Talkie Tower, London, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and created wind-tunnel effects in streets, among many other issues. Alternatively, there exist towers which have used their form to naturally draw in air and ventilate entire facilities, eliminating the need for costly HVAC systems (The Met, Thailand), or to increase wind speeds to generate electricity (Bahrain World Trade Center, Bahrain). Wind and weather affect all parts of the world through science, health, war, infrastructure, catastrophes, tourism, shopping, media and materials. Working in partnership with RWDI, a leading wind engineering company, a series of tests, images and animations documenting discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested in the wind tunnel and with computational fluid dynamics (CFD) systems to expose, utilize and manipulate wind flows and create new forms, technologies and experiences. Ultimately, this thesis aims to question the extent to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers and the general public the ability to broaden their scope and productively utilize this living phenomenon that everyone constantly feels but cannot see.
Keywords: wind engineering, climate, visualization, architectural aerodynamics
Procedia PDF Downloads 358
15 Microsimulation of Potential Crashes as a Road Safety Indicator
Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale
Abstract:
Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performance indicators, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced at intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have since been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, the most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes involve roadside objects or obstacles; only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of DeltaV, impact energy, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators which can be considered “simulation-based”. The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes is mapped for the test network after applying a threshold to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values were used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that the results do not depend strongly on the chosen energy thresholds or procedure parameters. This paper introduces the procedure and its implementation as a software package able to assess road safety while also considering potential conflicts with roadside objects, and discusses some of the principles at the base of this specific model. The procedure can be applied in common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
Keywords: road safety, traffic, traffic safety, traffic simulation
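A minimal sketch of the perturbation-based procedure described above, assuming simplified point-mass trajectories on a common time base; the perturbation magnitude, vehicle mass, collision distance, grid cell size and energy threshold are illustrative stand-ins, not the paper's calibrated values:

```python
import numpy as np

def potential_crashes(trajs, barriers, n_runs=200, sigma=0.5,
                      d_collide=2.0, mass=1500.0):
    """trajs: dict id -> (T, 4) array of [x, y, vx, vy] on a common time base.
    barriers: (B, 2) array of points sampled along roadside objects.
    Returns potential crash events as (actor, partner, xy, deltaV, energy)."""
    events, ids = [], list(trajs)
    for _ in range(n_runs):
        # Randomly perturb each trajectory's positions to generate a
        # plausible alternative path (velocities left unchanged here).
        p = {i: trajs[i] + np.hstack([np.random.normal(0, sigma, (len(trajs[i]), 2)),
                                      np.zeros((len(trajs[i]), 2))]) for i in ids}
        for k, a in enumerate(ids):
            for b in ids[k + 1:]:  # vehicle-to-vehicle conflicts
                d = np.linalg.norm(p[a][:, :2] - p[b][:, :2], axis=1)
                t = d.argmin()
                if d[t] < d_collide:
                    dv = np.linalg.norm(p[a][t, 2:] - p[b][t, 2:])
                    # Energy dissipated in a plastic, equal-mass collision.
                    events.append((a, b, p[a][t, :2], dv, 0.25 * mass * dv ** 2))
            # Vehicle-to-roadside-object conflicts.
            db = np.linalg.norm(p[a][:, None, :2] - barriers[None], axis=2).min(axis=1)
            t = db.argmin()
            if db[t] < d_collide:
                v = np.linalg.norm(p[a][t, 2:])  # DeltaV equals impact speed here
                events.append((a, 'barrier', p[a][t, :2], v, 0.5 * mass * v ** 2))
    return events

def danger_map(events, cell=10.0, threshold=1e5):
    """Total impact energy per grid cell; keep cells above the threshold."""
    grid = {}
    for _, _, xy, _, e in events:
        key = (int(xy[0] // cell), int(xy[1] // cell))
        grid[key] = grid.get(key, 0.0) + e
    return {k: v for k, v in grid.items() if v > threshold}
```

Dividing each cell's total by its area (cell squared) gives the energy-per-square-meter quantity used for the danger maps described in the abstract.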
Procedia PDF Downloads 135
14 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector
Authors: Sanaz Moayer, Fang Huang, Scott Gardner
Abstract:
In the highly leveraged business world of today, an organisation’s success depends on how it can manage and organise its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of knowledge management (KM) is recognized in the strategy literature, and increasingly by senior decision-makers (particularly in large firms which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimization with economies of scale, process efficiencies and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital-intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, and legal, finance, accounting, ICT and contracts specialists, working in projects or functions and representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk. It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management incorporating contemporary ICT platforms and data mining practices is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. We also propose, based on a review of the relevant literature, that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.
Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management
Procedia PDF Downloads 415
13 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies for medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of various electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modelling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made which became de facto standards for harmonic analysis guidelines. Typical assumptions include, among others, balanced study conditions and a negligible impact of the impedance-frequency characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modelled based on theoretical equations. Further simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices proved reasonably effective at medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems. Given modern conditions, the massive increase in the use of residential electronic devices, the recent and ongoing boom of electric vehicles, and the large-scale installation of distributed solar power, harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate considerable diversity, leading to a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previous assumptions have to be adopted. In this work, a simulation approach aimed at dealing with some of these typical assumptions is proposed. A practical low-voltage feeder is modelled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script supplying a varying background voltage distortion profile and the associated harmonic current response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of uncertainty in the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates substantial differences from cases where all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
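A minimal sketch of the phase-angle diversity effect the abstract emphasizes: summing two loads' harmonic current injections with and without their phase angles. The magnitudes and angles below are invented for illustration, not measured inverter or charger data:

```python
import numpy as np

def thd(harmonic_mags, fundamental):
    """Total harmonic distortion from harmonic magnitudes (orders >= 2)."""
    return np.sqrt(np.sum(np.abs(harmonic_mags) ** 2)) / abs(fundamental)

# Two loads injecting 3rd/5th/7th harmonic currents (amps, illustrative).
orders = np.array([3, 5, 7])
mag_pv, mag_ev = np.array([1.2, 0.8, 0.5]), np.array([1.0, 0.9, 0.4])

# Classic simplification: angles ignored, magnitudes add arithmetically.
i_no_diversity = mag_pv + mag_ev

# Phase angles retained: complex summation allows partial cancellation.
ang_pv, ang_ev = np.radians([20, 160, -60]), np.radians([-150, 10, 100])
i_diversity = mag_pv * np.exp(1j * ang_pv) + mag_ev * np.exp(1j * ang_ev)

i1 = 10.0  # fundamental current, A
print(f"THD, no diversity:   {thd(i_no_diversity, i1):.1%}")
print(f"THD, with diversity: {thd(np.abs(i_diversity), i1):.1%}")
```

With the angles retained, partially opposing injections cancel, so the angle-free arithmetic summation systematically overestimates the distortion level.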
Procedia PDF Downloads 158
12 Identification of Tangible and Intangible Heritage and Preparation of Conservation Proposal for the Historic City of Karanja Laad
Authors: Prachi Buche Marathe
Abstract:
Karanja Laad is a city located in the Vidarbha region of the state of Maharashtra, India. It has a wealth of tangible and intangible heritage in the form of monuments, precincts, groups of structures, festivals and a procession route, which is being neglected and lost with time. Three religions, Hinduism, Islam and Jainism, together with the town's association as the birthplace of Swami Nrusinha Saraswati, an exponent of the Datta Sampradaya sect, and the British colonial layer, have shaped the culture and society of the place over time. The architecture of Karanja Laad, combining all these historic layers, has enhanced its unique historic and cultural value. Karanja Laad is also a traditional trading historic town with a unique hybrid architectural style and has good potential for development as a tourist place alongside its present image as a pilgrim destination of the Datta Sampradaya. The aim of the research is to prepare a conservation proposal for the historic town along with a management framework. The objectives of the research are to study the evolution of the town, to identify the cultural resources along with the issues of the historic core of the city, and to understand the Datta Sampradaya, the contribution of Saint Nrusinha Saraswati to the religious sect, and his association with Karanja as an important personality. The methodology comprises site visits to Karanja, field surveys for documentation, and discussions and questionnaires with residents to establish the heritage and identify the potential and issues within the historic core, thereby establishing a case for conservation. Field surveys were conducted for a town-level study of land use, open spaces, occupancy, ownership, traditional commodities and communities, infrastructure, streetscapes, and precinct activities during festival and non-festival periods. The building-level study establishes various typologies, such as residential, institutional, commercial and religious buildings, and traditional infrastructure known from mythological references, such as water bodies (kunds), lakes and wells. One of the main issues is the loss of the traditional footprint and traditional open spaces due to new illegal encroachments and the lack of guidelines for new additions that would conserve the original fabric of the structures. Traditional commodities are also being lost, since skills like pottery and painting receive no promotion. Lavish bungalows like the Kannava mansion and the main temple Wada (birthplace of the saint) have huge potential to be developed as museums through adaptive re-use, which would attract many visitors during festivals and boost the economy. Festival procession routes can be identified and a heritage walk developed to highlight the traditional features of the town. The overall study has resulted in a heritage map with 137 heritage structures identified as having potential. The conservation proposal is worked out at the town, precinct and building levels, with interventions such as developing construction guidelines for further development and establishing a heritage cell consisting of architects and engineers for the upliftment of the existing rich heritage of Karanja.
Keywords: built heritage, conservation, Datta Sampradaya, Karanja Laad, Swami Nrusinha Saraswati, procession route
Procedia PDF Downloads 161
11 Challenges for Reconstruction: A Case Study from 2015 Gorkha, Nepal Earthquake
Authors: Hari K. Adhikari, Keshab Sharma, K. C. Apil
Abstract:
The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 hit the central region of Nepal on April 25, 2015, with its epicenter about 77 km northwest of the Kathmandu Valley. This paper explores the challenges of reconstruction in the rural earthquake-stricken areas of Nepal. The earthquake significantly affected people's livelihoods and the overall economy, causing severe damage and destruction in central Nepal, including the nation's capital. A large part of the affected area is difficult to access, with rugged terrain and scattered settlements, which posed unique challenges for reconstruction and rehabilitation efforts on a massive scale. Some 800 thousand buildings were affected, leaving 8 million people homeless. The challenge of reconstructing up to 800 thousand houses is arduous for Nepal against the background of its turbulent political scenario and weak governance. Although significant actors are involved in the reconstruction process, no appreciable relief has reached the ground, which is reflected in the frustration of the affected people. The 2015 Gorkha earthquake is one of the most devastating disasters in the modern history of Nepal, yet to the best of our knowledge there is no comprehensive study of post-disaster reconstruction in modern Nepal that integrates the information necessary to deal with the challenges and opportunities of reconstruction. The study was conducted using the qualitative content analysis method. Thirty engineers and ten social mobilizers working on reconstruction, together with more than a hundred local social workers, local party leaders, and earthquake victims, were selected arbitrarily. Information was collected through semi-structured interviews with open-ended questions, focus group discussions, and field notes, with no prior assumptions. The authors also reviewed the literature and documents covering academic and practitioner studies on the challenges of post-earthquake reconstruction in developing countries, such as the 2001 Gujarat earthquake, the 2005 Kashmir earthquake, the 2003 Bam earthquake and the 2010 Haiti earthquake, which involved building typologies and economic, political, geographical, and geological conditions very similar to Nepal's. Secondary data were collected from reports, action plans, and reflection papers of governmental entities, non-governmental organizations, private sector businesses, and online news. This study concludes that inaccessibility, the absence of local government, weak governance, weak infrastructure, lack of preparedness, knowledge gaps and manpower shortages, among others, are the key challenges of reconstruction after the 2015 earthquake in Nepal. After scrutinizing the different challenges and issues, the study finds that good governance, integrated information, public participation, and short-term and long-term strategies to tackle technical issues are crucial factors for timely, quality reconstruction in the context of Nepal. The sample collected for this study is relatively small and may not be fully representative of the stakeholders involved in reconstruction. However, the key findings are ones that need to be recognized by academics, governments, and implementation agencies, and considered in the implementation of post-disaster reconstruction programs in developing countries.
Keywords: Gorkha earthquake, reconstruction, challenges, policy
Procedia PDF Downloads 408
10 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool
Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad
Abstract:
Present-day aero-engine fan blades, turboprop propellers, and gas or steam turbine low-pressure blades are getting bigger and lighter, and thus become more flexible. Flutter, forced blade response and vibration-related failure of high-aspect-ratio blades are therefore of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, finite element method, finite volume method, or coupled schemes. These numerical schemes solve the coupled fluid-structure equations based on the full Navier-Stokes (NS) equations together with structural mechanics equations. Such schemes provide very accurate results if modelled properly; however, they are computationally very expensive and require large computing resources along with considerable expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase, where a reduced order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as flexible wings. A hybrid flow solver based on a viscous-inviscid coupled 3D panel method (PM) and a 3D discrete vortex particle method (DVM) is developed, with viscous parameters estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD. In the second phase of the research, the flow solver (PM) will be coupled with a reduced-order non-linear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform complete aeroelastic simulations of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases; for the blade cascade test case, the experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers or fan blades will provide researchers and engineers a fast, cost-effective and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduced order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling
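For a feel of what a low-order classical flutter check involves, the sketch below runs an eigenvalue sweep for a two-degree-of-freedom typical section (plunge and pitch) with quasi-steady aerodynamics. All structural and aerodynamic parameters are hypothetical placeholders; the abstract's PM/DVM-based solver is considerably richer than this:

```python
import numpy as np

# Two-DOF typical section (plunge h, pitch alpha) with quasi-steady
# aerodynamics. All parameters are hypothetical (per unit span).
m, S, Ia = 50.0, 5.0, 4.0            # mass, static unbalance, pitch inertia
kh, ka = 2.0e5, 4.0e4                # plunge / pitch stiffness
c, e, CLa = 0.5, 0.125, 2 * np.pi    # chord, aero-center offset, lift slope
rho = 1.225                          # air density (kg/m^3)

Ms = np.array([[m, S], [S, Ia]])     # structural mass matrix
Ks = np.diag([kh, ka])               # structural stiffness matrix

def max_growth_rate(U):
    """Largest real part of the aeroelastic eigenvalues at airspeed U.
    Quasi-steady loads: L = q c CLa (alpha + h_dot / U), M = e L."""
    q = 0.5 * rho * U ** 2
    Ka = q * c * CLa * np.array([[0.0, 1.0], [0.0, -e]])        # aero stiffness
    Ca = (q * c * CLa / U) * np.array([[1.0, 0.0], [-e, 0.0]])  # aero damping
    Minv = np.linalg.inv(Ms)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-Minv @ (Ks + Ka), -Minv @ Ca]])
    return np.linalg.eigvals(A).real.max()

# Sweep airspeed: the flutter boundary is where the growth rate turns positive.
for U in (20, 60, 100, 140, 180):
    print(f"U = {U:3d} m/s   max growth rate = {max_growth_rate(U):+8.3f} 1/s")
```

A ROAM essentially automates a much richer version of this sweep, with panel-method aerodynamics in place of the quasi-steady terms, for each candidate design.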
Procedia PDF Downloads 265
9 In-Depth Investigations on the Sequences of Accidents of Powered Two Wheelers Based on Police Crash Reports of Medan, North Sumatera Province Indonesia, Using Decision Aiding Processes
Authors: Bangun F., Crevits B., Bellet T., Banet A., Boy G. A., Katili I.
Abstract:
This paper seeks out the incoherencies in the cognitive process during accidents of Powered Two-Wheelers (PTW) by establishing the factual sequence of events and causal relations for each accident case. The principle of the approach is to undertake case-by-case in-depth investigations of PTW accidents, based on elaborate data acquisition at accident sites, officially recorded in the 2012 Police Crash Reports (PCRs) of Medan, with the criteria that each case involved at least one PTW and resulted in serious injury or fatalities. The analysis takes into account four modules: accident chronologies; perpetrators and victims; injury surveillance; and vehicles and road infrastructure, comprising traffic facilities, road geometry, road alignments and weather. Proposals for improvement could then favorably influence the chain of functional processes and events leading to collision. Decision Aiding Processes (DAP) assist in structuring different entities at different decisional levels, as each of these entities has its own objectives and constraints. The entities (A) are classified into six groups of accidents: solo PTW accidents; PTW vs. PTW; PTW vs. pedestrian; PTW vs. motor-trishaw; PTW vs. other vehicles; and consecutive crashes. The entities are also distinguished into four decisional levels: the level of road users and street systems; the operational level (crash-attending police officers, or CAPO, and road engineers); the tactical level (Regional Traffic Police, Department of Transportation, and Department of Public Works); and the strategic level (Traffic Police Headquarters (TCPHI), parliament, Ministry of Transportation and Ministry of Public Works). These classifications lead to the conceptualization of Problem Situations (P) and Problem Formulations (I) in the DAP context. The DAP concerns the sequence of incidents up to the moment the accident occurs, which can be modelled in terms of five activities of procedural rationality: identification of initial human features (IHF); investigation of proponent attributes (PrAT); investigation of injury surveillance (IS); analysis of the interaction between IHF, PrAT and IS (intercorrelation); and then unravelling of the sequence of incidents, with filtering and disclosure covering what needs to be activated, modified, changed or removed, what is new, and what is a priority. These can relate to the activation, modification or new establishment of laws. PrAT encompasses the problems of the environment, road infrastructure, road and traffic facilities, and road geometry. The evaluation model (MP) is generated to bridge P and I, since MP is produced by the intercorrelations among the IHF, PrAT and IS extracted from the 2012 PCRs of Medan. There are seven findings of incoherencies: lack of knowledge and awareness of traffic regulations and accident risks, especially when riding within 10 km of home or between 10 p.m. and 5.30 a.m.; lack of engagement by CAPO in procuring IHF data; lack of CAPO competency in on-site data procurement; no intercorrelation among IHF, PrAT and IS in the database systems of the 2012 PCRs of Medan; lack of maintenance and supervision of the availability and capacity of traffic facilities and road infrastructure; instrumental bias with wash-back impacts on the TCPHI; and technical robustness with wash-back impacts on the CAPO and TCPHI.
Keywords: decision aiding processes, evaluation model, PTW accidents, police crash reports
Procedia PDF Downloads 158
8 Empowering and Educating Young People Against Cybercrime by Playing: The Rayuela Method
Authors: Jose L. Diego, Antonio Berlanga, Gregorio López, Diana López
Abstract:
The Rayuela method is a success story: it is part of a project selected by the European Commission to address the Commission's own challenge of achieving a better understanding of the human factors, as well as the social and organisational aspects, that can solve issues in fighting crime. The Rayuela method focuses specifically on the drivers of cyber criminality, including approaches to prevent, investigate, and mitigate cybercriminal behavior. As the internet has become an integral part of young people’s lives, they are the key target of the Rayuela method because, whether as victims or as perpetrators, they are the most vulnerable link in the chain. Considering their increased time spent online, the limited control of their internet usage, and the low level of awareness of cyber threats and their potential impact, the proliferation of incidents due to human mistakes is understandable. 51% of Europeans feel they are not well informed about cyber threats, and 86% believe that the risk of becoming a victim of cybercrime is rapidly increasing. At the same time, law enforcement has noted that more and more young people are committing cybercrimes. This is an international problem with considerable cost implications: it is estimated that crimes in cyberspace will cost the global economy $445B annually. Understanding these phenomena leads to the necessity of a shift in focus from sanctions to deterrence and prevention. As a research project, Rayuela aims to bring together law enforcement agencies (LEAs), sociologists, psychologists, anthropologists, legal experts, computer scientists, and engineers to develop novel methodologies that allow a better understanding of the factors affecting online behavior related to new forms of cyber criminality, as well as promoting the potential of young talents for cybersecurity and technologies. Rayuela’s main goal is to better understand the drivers and human factors affecting certain relevant forms of cyber criminality, and to empower and educate young people in the benefits, risks, and threats intrinsically linked to the use of the Internet by playing, thus preventing and mitigating cybercriminal behavior. To reach that goal, an interdisciplinary consortium (formed by 17 international partners) carries out research and actions such as profiling and case studies of cybercriminals and victims, risk assessments, studies on the Internet of Things and its vulnerabilities, development of a serious gaming environment, training activities, data analysis and interpretation using artificial intelligence, and testing and piloting. To facilitate the real-world implementation of the Rayuela method as a community policing strategy, it is crucial to count on a police force with a solid background in trust-building and community policing to carry out the piloting, specifically with young people. In this sense, the Valencia Local Police is a pioneering police force working with young people in conflict resolution, providing police mediation and peer mediation services and advice. It is an official mediation institution, so agreements drawn up by its police mediators, once signed by the parties, have the value of a judicial decision.
Keywords: fight against crime and insecurity, avert and prepare young people against aggression, ICT, serious gaming and artificial intelligence against cybercrime, conflict solving and mediation with young people
Procedia PDF Downloads 128
7 A Systemic Review and Comparison of Non-Isolated Bi-Directional Converters
Authors: Rahil Bahrami, Kaveh Ashenayi
Abstract:
This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters. The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes, inverting, non-inverting, and interleaved, each characterized by its own operational characteristics and benefits. A practical comparison is also conducted by evaluating simulation results for each bi-directional converter. Bi-directional converters (BDCs) can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective, but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly, but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems. The classification of non-isolated bi-directional converters can be based on several criteria: common factors include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary with contextual, application, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved. In the inverting category, converters produce an output voltage with reversed polarity relative to the input voltage, achieved through specific circuit configurations and control strategies; this is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters maintaining the same voltage polarity, useful in scenarios like battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances the comprehension and analysis of non-isolated bi-directional DC-DC converters, and the findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements, fostering advancements in efficient power management and utilization. The simulation process uses PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insights into the dynamic response, energy efficiency, output stability, and overall precision of the converters. The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals.
Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion
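A small sketch of how two of the named performance indicators, rise time and ripple factor, can be computed from an exported output-voltage waveform. The waveform here is a synthetic first-order response with superimposed switching ripple standing in for a PSIM export; the time constant, ripple amplitude and switching frequency are invented:

```python
import numpy as np

def rise_time(t, v, lo=0.1, hi=0.9):
    """10%-90% rise time of a step response (t in seconds, v in volts)."""
    v_final = v[-1]
    t_lo = t[np.argmax(v >= lo * v_final)]  # first crossing of 10% level
    t_hi = t[np.argmax(v >= hi * v_final)]  # first crossing of 90% level
    return t_hi - t_lo

def ripple_factor(v_ss):
    """Peak-to-peak ripple relative to the mean over a steady-state window."""
    return (v_ss.max() - v_ss.min()) / v_ss.mean()

# Synthetic 48 V output: first-order rise (tau = 0.4 ms) plus 100 kHz ripple.
t = np.linspace(0, 5e-3, 50_000)
v = 48 * (1 - np.exp(-t / 4e-4)) + 0.2 * np.sin(2 * np.pi * 100e3 * t)

print(f"rise time     : {rise_time(t, v) * 1e6:.1f} us")
print(f"ripple factor : {ripple_factor(v[-5_000:]):.4f}")
```

The same post-processing applies unchanged to waveforms from either converter category, which is what makes the indicators comparable across topologies.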
Procedia PDF Downloads 100
6 Laying the Proto-Ontological Conditions for Floating Architecture as a Climate Adaptation Solution for Rising Sea Levels: Conceptual Framework and Definition of a Performance Based Design
Authors: L. Calcagni, A. Battisti, M. Hensel, D. S. Hensel
Abstract:
Since the beginning of the 21st century, we have seen dynamic growth in water-based (WB) architecture, mainly due to the increasing threat of floods caused by sea level rise and heavy rains, both correlated with climate change. At the same time, the shortage of land available for urban development has also led architects, engineers, and policymakers to reclaim the seabed or to build floating structures. Furthermore, the drive to produce energy from renewable resources has expanded the sector of offshore research, mining, and the energy industry, which seeks new types of WB structures. In light of these considerations, the time is ripe to consider floating architecture as a full-fledged building typology. Currently, there is no universally recognized academic definition of a floating building, and research on floating architecture lacks a proper, commonly shared vocabulary and typology distinction. Moreover, there is no global international legal framework for urban development on water, and there is no structured performance-based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends, first, to overcome the semantic and typological issues through the conceptualization of floating architecture, laying the proto-ontological conditions for floating development, and secondly, to identify the parameters to be considered in the definition of a specific PBBD framework, setting the scene for national planning strategies. The theoretical overview and re-semanticization process involve the attribution of a new meaning to the term floating architecture. This terminological work of semantic redetermination is carried out through a systematic literature review and involves quantitative and historical research as well as logical argumentation methods. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are far more similar to those of the urban environment than to those of the offshore industry. Therefore, the identification and categorization of parameters (looking towards the potential formation of a PBBD framework for floating development) takes urban and architectural guidelines and regulations as the starting point, drawing the missing aspects, such as hydrodynamics (i.e., stability and buoyancy), from the offshore and shipping regulatory frameworks. This study is carried out through an evidence-based assessment of regulatory systems in effect in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries, and involves evidence-based research and logical argumentation methods. Overall, inhabiting water is proposed not only as a viable response to the problem of rising sea levels, and thus a resilient frontier for urban development, but also as a response to energy insecurity, clean water and food shortages, environmental concerns, and urbanization, in line with Blue Economy principles and Agenda 2030. The review shows how floating architecture is, to all intents and purposes, an urban adaptation measure and a solution towards self-sufficiency and energy-saving objectives. Moreover, the adopted methodology remains open to further improvement and integration rather than being rigid and fully determined. Along with the new designs and functions that will come into play in practice, life on water will eventually seem no more unusual than life on land, especially by virtue of the multiple advantages it provides, not only to users but also to the environment.
Keywords: adaptation measures, building typology, floating architecture, performance based building design, rising sea levels
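One concrete example of the stability and buoyancy parameters that a floating-architecture PBBD framework would borrow from shipping rules is the metacentric height GM. The sketch below computes it for a box-shaped pontoon; all dimensions are hypothetical:

```python
# Minimal hydrostatic stability check for a box-shaped floating platform,
# using the small-angle criterion GM = KB + BM - KG from naval architecture.
def metacentric_height(L, B, draft, KG):
    """L, B: waterplane length and beam (m); draft: immersion depth (m);
    KG: height of the center of gravity above the keel (m)."""
    KB = draft / 2                # center of buoyancy height for a box
    I = L * B ** 3 / 12           # waterplane second moment of area, m^4
    V = L * B * draft             # displaced volume, m^3
    BM = I / V                    # metacentric radius
    return KB + BM - KG

# 20 m x 12 m platform floating at 1.5 m draft, CG 2.2 m above the keel
# (illustrative numbers, not from any actual project).
GM = metacentric_height(L=20.0, B=12.0, draft=1.5, KG=2.2)
print(f"GM = {GM:.2f} m -> {'stable' if GM > 0 else 'unstable'} (small-angle check)")
```

A positive GM means the platform rights itself after a small heel; a PBBD framework would set a minimum GM alongside the urban-planning parameters discussed above.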
Procedia PDF Downloads 97
5 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks
Authors: Christina Kirsch, Adam Hatzigiannis
Abstract:
Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs reviewing captured data on screens in the review facility. Defects in rail infrastructure can have catastrophic consequences, so reviewer performance regarding the accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully, and therefore need to know which workload assessment methodologies will provide reliable and valid data for resourcing on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session, and a short employee engagement survey at the end of the study period captured impacts on job satisfaction and motivation. The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects rather than false positives). Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of detected existing defects. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates, in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures, to ensure that recommendations for work system optimization are evidence-based and reliable.
Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis
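A minimal sketch of the accuracy measures and correlations the study reports. The sensitivity and precision formulas are the standard signal-detection definitions; the per-session records and the use of a simple Pearson correlation are illustrative assumptions, not the study's data or statistical protocol:

```python
import numpy as np

def session_metrics(true_defects, detected, false_pos):
    """sensitivity = hits / existing defects; precision = hits / all flags."""
    sensitivity = detected / true_defects
    precision = detected / (detected + false_pos)
    return sensitivity, precision

# Hypothetical per-session rows: existing defects, detected defects,
# false positives, NASA TLX mental demand (0-100), review speed (km/h).
sessions = np.array([
    [12, 11, 1, 70, 8.0],
    [15, 12, 3, 55, 11.5],
    [10, 10, 0, 80, 7.2],
    [14, 9, 5, 40, 13.0],
])
sens, prec = zip(*(session_metrics(d, h, fp) for d, h, fp, _, _ in sessions))

# Correlations mirroring the reported pattern: mental demand vs precision,
# review speed vs sensitivity.
print("r(mental demand, precision) =", np.corrcoef(sessions[:, 3], prec)[0, 1].round(2))
print("r(review speed, sensitivity) =", np.corrcoef(sessions[:, 4], sens)[0, 1].round(2))
```

Computed this way, each 90-minute session yields one point per measure, so the nine-day study design gives each reviewer a usable sample for the workload-performance analysis.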
Procedia PDF Downloads 121
4 Optical Coherence Tomography in Differentiation of Acute and Non-Healing Wounds
Authors: Ananya Barui, Provas Banerjee, Jyotirmoy Chatterjee
Abstract:
The application of optical technology in medicine and biology has a long track record. In this endeavor, optical coherence tomography (OCT) has attracted both engineers and biologists to work together in the field of photonics to establish a striking non-invasive imaging technology. In contrast to other in vivo imaging modalities like Raman imaging, confocal imaging and two-photon microscopy, which can image only up to 100-200 microns in depth due to limitations in numerical aperture or scattering, OCT can achieve high-resolution imaging up to a few millimeters into tissue structures, depending on their refractive index at different anatomical locations. This tomographic system depends on the interference of two light waves in an interferometer to produce a depth profile of the specimen. In wound healing, frequent collection of biopsies for follow-up of the repair process could be avoided by such an imaging technique. Real-time skin OCT (the 'optical biopsy') can illuminate cutaneous tissue deeper and faster to acquire high-resolution cross-sectional images of its internal micro-structure. Swept-Source OCT (SS-OCT), a novel imaging technique, can generate high-speed depth profiles (~2 mm) of a wound at the laser sweeping rate, with micron-level resolution and an optimal coherence length of 5-6 mm. Multi-layered skin tissue normally exhibits different optical properties along with variations in thickness, refractive index and composition (keratin layer, water, fat, etc.) according to anatomical location. For instance, the stratum corneum, the uppermost and relatively dehydrated layer of the epidermis, reflects more light and produces a lucid, sharp demarcation line against the rest of the hydrated epidermal region. During wound healing or regeneration, the optical properties of cutaneous tissue continuously alter with the maturation of the wound bed. More mature and less hydrated tissue components reflect more light and appear as brighter areas, whereas immature regions containing more water or fat appear as darker areas in OCT images. Non-healing wounds exhibit prolonged inflammation and an inhibited nascent proliferative stage, and the accumulation of necrotic tissue further prevents their repair. Owing to its high resolution and its potential to reflect the compositional aspects of tissues in terms of their optical properties, this tomographic method may facilitate differentiating non-healing from acute wounds in addition to clinical observations. Non-invasive OCT offers better insight into the specific biological status of tissue in health and pathological conditions, and OCT images can be associated with the histo-pathological 'gold standard'. Correlated SS-OCT and microscopic evaluation of the wound edges can provide information regarding the progressive healing and maturation of the epithelial components. In the context of searching for analogies between the two imaging modalities, their relative performance in imaging the healing bed was estimated to probe an alternative approach. The present study validated the utility of SS-OCT in revealing the micro-anatomic structure of the healing bed with new information. Exploring the precise correspondence of OCT image features with histo-chemical findings related to the epithelial integrity of regenerated tissue could have great implications and could establish the 'optical biopsy' as a potent non-invasive diagnostic tool for cutaneous pathology.
Keywords: histo-pathology, non invasive imaging, OCT, wound healing
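In SS-OCT, the depth profile (A-scan) is recovered by Fourier transforming the interference fringes recorded across the laser's wavelength sweep: each reflector depth maps to one fringe frequency. A minimal numerical sketch of that reconstruction, with hypothetical sweep parameters and reflector depths:

```python
import numpy as np
from scipy.signal import find_peaks

# Wavenumber sweep of the laser over a 1.00-1.10 um band (illustrative).
n = 2048
k = np.linspace(2 * np.pi / 1.10e-6, 2 * np.pi / 1.00e-6, n)  # rad/m

depths = np.array([0.3e-3, 0.8e-3])  # reflectors at 0.3 mm and 0.8 mm
refl = np.array([1.0, 0.4])          # relative reflectivities

# Detected interferogram: one cosine fringe per reflector, cos(2*k*z).
fringes = sum(r * np.cos(2 * k * z) for r, z in zip(refl, depths))

# A-scan = FFT magnitude. With sample spacing dk, a reflector at depth z
# produces a fringe frequency of z/pi cycles per rad/m, so scaling the
# FFT frequency axis by pi gives depth directly.
dk = k[1] - k[0]
a_scan = np.abs(np.fft.rfft(fringes * np.hanning(n)))
depth_axis = np.pi * np.fft.rfftfreq(n, d=dk)  # optical depth, m

peaks, _ = find_peaks(a_scan, height=0.1 * a_scan.max())
print("recovered reflector depths (mm):", np.round(depth_axis[peaks] * 1e3, 2))
```

In real tissue each depth bin carries a continuum of backscatter rather than discrete peaks, and it is the brightness of that continuum that distinguishes mature, dehydrated tissue from immature, hydrated regions as described above.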
Procedia PDF Downloads 279
3 Damages of Highway Bridges in Thailand during the 2014-Chiang Rai Earthquake
Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom
Abstract:
On May 5, 2014, an earthquake of Richter magnitude 6.3 hit the northern part of Thailand. The epicenter was in Phan District, Chiang Rai Province. This earthquake, the so-called 2014 Chiang Rai Earthquake, produced the strongest ground shaking Thailand has ever experienced in its modern history. It confirms the geological evidence, previously ignored by most engineers, that earthquakes of considerable magnitude, 6 to 7 on the Richter scale, can occur within the country. This has promptly stimulated the authorized agencies to pay more attention to the safety of their assets and prompted a comprehensive review of the seismic resistance design of their building structures. The focus of this paper is to summarize the damage to highway bridges resulting from the 2014 Chiang Rai ground shaking, the remedial actions, and the research needs. The earthquake caused considerable damage to nearby structures such as houses, schools, and temples, yet it damaged only one highway bridge, Mae Laos Bridge, located several kilometers from the epicenter. The damage took the form of concrete spalling caused by the pounding of the cap beam on the deck structure, and it occurred only at the end (abutment) span. Damage caused by pounding is not a surprise, but pounding at only one bridge requires further investigation and discussion. Mae Laos Bridge is a river-crossing bridge with a relatively large approach structure confined by strong retaining walls. The result is a rigid approach structure which vibrates at an acceleration approximately equal to the ground acceleration during the earthquake and exerts a huge force on the abutment, causing the cap beam to pound on the deck structure. Other bridges nearby have relatively small approach structures and therefore cannot generate pounding. The effect of the mass of the approach structure on pounding is also evident in the damage to a pedestrian bridge in front of Thanthong Wittaya School, located 50 meters from Mae Laos Bridge. The approach stair of this bridge is wider than the typical one, to accommodate the stream of students before and after school hours. The resulting relatively large mass of the approach stair exerted a huge force on the pier, causing the cap beam to pound on the deck structure during the ground shaking. No sign of pounding was observed on a typical pedestrian bridge located at the other end of Mae Laos Bridge. Although the pounding of the cap beam on the deck structure of these bridges did not cause serious structural damage, the incident prompted a comprehensive review of the seismic resistance design of highway bridges in Thailand. Given sufficient mass and confinement of the approach structure, pounding of the cap beam on the deck structure can easily be excited even by low to moderate ground shaking, and stronger shaking will certainly produce more powerful pounding. This may cause the deck structure to be unseated and fall off in the case of an unrestrained bridge. For a bridge with a restrainer between the cap beam and the deck structure, the restrainer may prevent the deck structure from falling off; however, preventing free movement of the pier may damage the pier itself. Most highway bridges in Thailand have embedded dowel bars connecting the cap beam and the deck structure, but these were never intended for seismic resistance. Their ability to prevent the deck structure from unseating, and their effect on potential damage to the pier, should be evaluated. In response, the Thailand Department of Highways (DOH) has set up a team to revise the standard practices for the seismic resistance design of highway bridges in Thailand. The DOH has also funded the research project 'Seismic Resistance Evaluation of Pre- and Post-Design Modifications of DOH’s Bridges', whose scope covers full-scale tests of single-span bridges under reversed cyclic static loading in both longitudinal and transverse directions, and computer simulations to evaluate the seismic performance of the existing and design-modified bridges. The research is expected to start in October 2015.
Keywords: earthquake, highway bridge, Thailand, damage, pounding, seismic resistance
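To make the pounding mechanism concrete: idealize the approach structure as an elastic single-degree-of-freedom oscillator and check whether its displacement relative to the deck closes the expansion-joint gap during shaking. Everything here (natural frequency, damping, gap width, the synthetic ground motion) is a hypothetical stand-in, not data from Mae Laos Bridge:

```python
import numpy as np

def pounding_check(ag, dt, f_n=2.0, zeta=0.05, gap=0.05):
    """Does an approach structure, idealized as an elastic SDOF oscillator,
    close the expansion-joint gap to the deck during ground shaking?
    ag: ground acceleration record (m/s^2); gap: joint gap (m)."""
    wn = 2 * np.pi * f_n
    u = v = u_max = 0.0
    for a_g in ag:
        # Semi-implicit Euler step of u'' + 2*zeta*wn*u' + wn^2*u = -a_g
        acc = -a_g - 2 * zeta * wn * v - wn ** 2 * u
        v += acc * dt
        u += v * dt
        u_max = max(u_max, abs(u))
    return u_max, u_max > gap

# Illustrative 10 s synthetic ground motion: 0.15 g decaying sinusoid.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = 0.15 * 9.81 * np.sin(2 * np.pi * 1.8 * t) * np.exp(-0.2 * t)

u_max, pounds = pounding_check(ag, dt)
print(f"peak relative displacement = {u_max * 1e3:.0f} mm; gap closed: {pounds}")
```

A heavier, more confined approach structure corresponds to a stiffer, more ground-following oscillator, which is why the large approach structures described above closed their gaps while the light ones did not.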
Procedia PDF Downloads 290
2 Developing a Framework for Sustainable Social Housing Delivery in Greater Port Harcourt City Rivers State, Nigeria
Authors: Enwin Anthony Dornubari, Visigah Kpobari Peter
Abstract:
This research developed a framework for the provision of sustainable and affordable housing to accommodate the low-income population of Greater Port Harcourt City (GPHC). The objectives of the study, among others, were to: examine UN-Habitat guidelines for acceptable and sustainable social housing provision; describe past efforts of the Rivers State Government and the Federal Government of Nigeria to provide housing for the poor in the Greater Port Harcourt City area; obtain a profile of prospective beneficiaries of the social housing proposed by this research, as well as their perceptions of their present living conditions and of living in the proposed self-sustaining social housing development, based on an initial simulation of the proposal; describe the nature of the framework, guidelines and management of the proposed social housing development; and explain the modalities for its implementation. The study utilized a mixed-methods research approach aimed at triangulating findings from the quantitative and qualitative paradigms. The opinions of professionals of the built environment; the Director of Development Control, Greater Port Harcourt City Development Authority; the Directors of the Ministry of Urban Development and Physical Planning and of the Housing and Property Development Authority; and managers of selected primary mortgage institutions were sought and analyzed. There were four target populations for the study: members of occupational sub-groups for focus group discussions (FGDs); development professionals for key informant interviews (KIIs); household heads in selected communities of GPHC; and relevant public officials for individual depth interviews (IDIs). Focus group discussions were held with members of occupational sub-groups (fisherfolk) in each of the eight selected communities. There were forty (40) members across all occupational sub-groups in each selected community, yielding a total of 320 in the eight (8) communities of Mgbundukwu (Mile 2 Diobu), Rumuodomaya, Abara (Etche), Igwuruta-Ali (Ikwerre), Wakama (Ogu-Bolo), Okujagu (Okrika), Akpajo (Eleme), and Okoloma (Oyigbo). For the key informant interviews, two (2) members were judgmentally selected from each of the following development professions: urban and regional planners; architects; estate surveyors; land surveyors; quantity surveyors; and engineers. For the third population, household heads in selected communities of GPHC, a stratified multi-stage sampling procedure was adopted. Stage 1 obtained a 10% (a priori decision) sample of the component communities of GPHC in each stratum, with the number in each stratum rounded up to a whole number to ensure that every stratum was represented. Stage 2 obtained the number of households to be studied by applying the Taro Yamane formula, which determined the appropriate number of cases to study at a precision level of 5%. Findings revealed, among others, that poor implementation of the UN-Habitat global shelter strategy, lack of stakeholder engagement, inappropriate locations, undue bureaucracy, lack of housing fairness and equity, and the high cost of land and building materials were the reasons for the failure of past efforts at social housing provision in the Greater Port Harcourt City area. The study recommended a public-private partnership approach for the implementation and management of the framework. It also recommended a robust and sustained relationship between the management of the framework, the UN-Habitat office, other relevant government agencies responsible for housing development, and all investment partners, to create trust and efficiency.
Keywords: development, framework, low-income, sustainable, social housing
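The Taro Yamane formula mentioned in the sampling procedure is n = N / (1 + N e^2), where N is the population of household heads in a stratum and e the precision level (0.05 here). A quick sketch with invented stratum sizes, since the abstract does not give the actual household counts:

```python
import math

def yamane_sample_size(N, e=0.05):
    """Taro Yamane formula n = N / (1 + N * e^2) for a finite population N
    at precision level e, rounded up to a whole number of households."""
    return math.ceil(N / (1 + N * e ** 2))

# Hypothetical stratum populations of household heads.
for N in (500, 2000, 10000):
    print(f"households N = {N:>6} -> sample n = {yamane_sample_size(N)}")
```

Note how the required sample saturates as N grows (it approaches 1/e^2 = 400 at a 5% precision level), which keeps multi-stratum field surveys like this one tractable.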
Procedia PDF Downloads 249
1 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc. in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although a vast literature exists on time-series data processing using machine learning, the challenges faced by industry that lead to unplanned downtimes are: current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequent financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values containing subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of GNNs, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal context, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a Process Data Historian, SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, aiding asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in other manufacturing industries.
Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning
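A minimal sketch of the entropy and spectral-change features the hybrid model is described as computing alongside the GNN. The window length, bin count, z-score threshold and synthetic flow signal are all invented for illustration, and the GNN itself is omitted:

```python
import numpy as np

def window_features(x, fs=1.0):
    """Entropy and spectral-centroid features of one sensor window."""
    # Shannon entropy of the value histogram: drops when the signal
    # flatlines, rises when behaviour becomes erratic.
    counts, _ = np.histogram(x, bins=32)
    p = counts[counts > 0] / counts.sum()
    entropy = -(p * np.log2(p)).sum()
    # Spectral centroid: shifts when the machine's dominant cycle changes.
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
    return entropy, centroid

def flag_changes(series, win=256, z_thresh=4.0, n_baseline=20):
    """Flag windows whose features drift beyond z_thresh standard
    deviations from a baseline learned on the first n_baseline windows."""
    feats = np.array([window_features(series[i:i + win])
                      for i in range(0, len(series) - win + 1, win)])
    base = feats[:n_baseline]
    z = (feats - base.mean(axis=0)) / (base.std(axis=0) + 1e-12)
    return np.where((np.abs(z) > z_thresh).any(axis=1))[0]

# Synthetic flow signal whose dominant cycle changes halfway through.
rng = np.random.default_rng(0)
t = np.arange(20_000)
flow = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(len(t))
flow[10_000:] = np.sin(2 * np.pi * t[10_000:] / 15) + 0.1 * rng.standard_normal(10_000)

print("flagged windows:", flag_changes(flow))
```

In the full model, per-window features like these would be attached as node attributes on the sensor graph the GNN consumes, letting the network reason over both the raw trajectory and these summaries of subtle behavioural drift.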
Procedia PDF Downloads 150