Search results for: Direct approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5780

230 Potential Climate Change Impacts on the Hydrological System of the Harvey River Catchment

Authors: Hashim Isam Jameel Al-Safi, P. Ranjan Sarukkalige

Abstract:

Climate change is likely to affect the Australian continent by altering rainfall trends, increasing temperature, and reducing the availability and quality of water. This study investigates the possible impacts of future climate change on the hydrological system of the Harvey River catchment in Western Australia using a conceptual modelling approach (the HBV model). Daily observations of rainfall and temperature and the long-term monthly mean potential evapotranspiration from six weather stations were available for the period 1961-2015. The observed streamflow at the Clifton Park gauging station for 33 years (1983-2015), together with the observed climate variables, was used to run, calibrate and validate the HBV model prior to the simulation process. The calibrated model was then forced with downscaled future climate signals from a multi-model ensemble of fifteen GCMs of CMIP3 under three emission scenarios (A2, A1B and B1) to simulate the future runoff at the catchment outlet. Two periods were selected to represent future climate conditions: the mid (2046-2065) and late (2080-2099) 21st century. A control run with the reference climate period (1981-2000) was used to represent the current climate status. The modelling outcomes show an evident reduction in the mean annual streamflow by mid-century, particularly for the A1B scenario, relative to the control run. Toward the end of the century, all scenarios show relatively strong reductions in the mean annual streamflow, especially the A1B scenario, compared to the control run. The decline in the mean annual streamflow ranges between 4-15% by mid-century and 9-42% by the end of the century.
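
As a back-of-the-envelope illustration of the final comparison step, here is a minimal Python sketch of computing the percentage reduction in mean annual streamflow for each scenario relative to the control run; the streamflow values are invented placeholders, not the Harvey catchment results:

```python
import numpy as np

# Hypothetical mean annual streamflow series (GL/year); values are
# illustrative only, not the Harvey catchment data.
control = np.array([120, 135, 110, 128, 140])       # 1981-2000 control run
scenarios = {
    "A2":  np.array([112, 121, 101, 115, 126]),     # mid-century runs
    "A1B": np.array([104, 113,  95, 108, 119]),
    "B1":  np.array([116, 127, 106, 120, 133]),
}

control_mean = control.mean()
for name, flow in scenarios.items():
    reduction = 100.0 * (control_mean - flow.mean()) / control_mean
    print(f"{name}: {reduction:.1f}% reduction vs. control run")
```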

Keywords: Climate change impact, Harvey catchment, HBV model, hydrological modelling, GCMs, LARS-WG, Australia.

229 Failure to React Positively to Flood Early Warning Systems: Lessons Learned by Flood Victims from Flash Flood Disasters: The Malaysia Experience

Authors: Mohamad Sukeri Khalid, Che Su Mustaffa, Mohd Najib Marzuki, Mohd Fo’ad Sakdan, Sapora Sipon, Mohd Taib Ariffin, Shazwani Shafiai

Abstract:

This paper describes issues relating to the role of the flash flood early warning system provided by the Malaysian Government to communities in Malaysia, specifically during the flash flood disaster in the Cameron Highlands. Flash flood disasters typically occur as a result of heavy rainfall in an area, where water may cause flooding via streams or narrow channels. The focus of this study is the flash flood disaster of 23 October 2013 in the Cameron Highlands, in which the Sungai Bertam overflowed after the release of water from the Sultan Abu Bakar Dam. This release of water caused flash flooding that damaged properties and killed residents and livestock in the area. The effort of this study is therefore to identify the perceptions of the flash flood victims of the role of the flash flood early warning system. For the purposes of this study, data were gathered through face-to-face interviews with those flood victims who were willing to participate. This approach helped the researcher to glean in-depth information about their feelings and perceptions of the role of the flash flood early warning system offered by the government. The data were analysed descriptively, and the findings show that the 22 flood victim respondents strongly believe that the flash flood early warning system was confusing and dysfunctional, and that communities had failed to respond positively to it. Most of the communities were therefore not well prepared for the release of water from the dam, which caused property damage, and 3 people were killed in the Cameron Highlands flash flood disaster.

Keywords: Communities affected, disaster management, early warning system, flash flood disaster.

228 A Settlement Strategy for Health Facilities in Emerging Countries: A Case Study in Brazil

Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Piero Favino, Luca Preis

Abstract:

A settlement strategy anticipates and responds to the needs of existing and future communities through the provision of primary health care facilities in marginalized areas. Access to a health care network is important to improving healthcare coverage, which is often lacking in developing countries. The study shows that a sound health system strategy for rural contexts brings advantages to an existing settlement by improving transport, communication, water and social facilities. The objective of this paper is to define a possible methodology to implement primary health care facilities in disadvantaged areas of emerging countries. In this research, we analyze the case study of Lauro de Freitas, a municipality in the Brazilian state of Bahia, part of the Metropolitan Region of Salvador, with an area of 57.66 km² and 194,641 inhabitants. The health localization system in Lauro de Freitas is an integrated process that involves not only geographical aspects but also a set of factors: population density, epidemiological data, allocation of services, road networks, and more. Data were also collected using semi-structured interviews and questionnaires administered to the local population. The synthesized data suggest that, moving away from the coast where the concentration of population and services is greatest, a network of primary health care facilities is able to improve the living conditions of small, dispersed communities. Based on the health service needs of populations, we have developed a methodological approach that is particularly useful in rural and remote contexts in emerging countries.

Keywords: Primary health care, developing countries, policy health planning, settlement strategy.

227 Military Court’s Jurisdiction over Military Members Who Commit General Crimes under Indonesian Military Judiciary System in Comparison with Other Countries

Authors: Dini Dewi Heniarti

Abstract:

The importance of this study is to understand how the Indonesian military court asserts its jurisdiction over military members who commit general crimes within the Indonesian military judiciary system, in comparison to other countries. This research employs a normative-juridical approach in combination with historical and comparative-juridical approaches. The research specification is analytical-descriptive in nature, i.e. describing or outlining the principles, basic concepts and norms related to the military judiciary system, which are further analyzed within the context of implementation and as inputs for military justice regulation under the Indonesian legal system. The main data used in this research are secondary data, including primary, secondary and tertiary legal sources. The research focuses on secondary data, while primary data are supplementary in nature. The validity of the data is checked using multiple methods, commonly known as triangulation, reflecting the effort to gain an in-depth understanding of the phenomena being studied. Here, the military element is kept intact in the judiciary process, with due observance of the Military Criminal Justice System and the Military Command Development Principle. The Indonesian military judiciary's jurisdiction over military members committing general crimes is based on the national legal system and global developments, while taking into account the structure, composition and position of the military forces within the state structure. Jurisdiction is formulated by setting forth the substantive norm of crimes that are military in nature. At the level of adjudication jurisdiction, the military court has jurisdiction to adjudicate military personnel who commit general offences. At the level of execution jurisdiction, the military court has jurisdiction to execute the sentence against military members who have been convicted with a final and binding judgement. The military court's jurisdiction needs to be expanded when the country is in a state of war.

Keywords: Military courts, Jurisdiction, Military members, Military justice system.

226 Liability Aspects Related to Genetically Modified Food under the Food Safety Legislation in India

Authors: S. K. Balashanmugam, Padmavati Manchikanti, S. R. Subramanian

Abstract:

The question of legal liability for injury arising out of the import and introduction of GM food emerges as a crucial issue confronting efforts to promote GM food and its derivatives. There is a strong possibility that commercialized GM food from an exporting country will enter an importing country where the approval status is not the same. This underlines the importance of establishing a liability mechanism to address any damage that occurs at the level of transboundary movement or in the market. At the international level, there was widespread consensus to develop the Cartagena Protocol on Biosafety and to provide a dedicated regime on liability and redress in the form of the Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress ('N-KL Protocol'). National legal frameworks based on this protocol are not adequately established in the prevailing food legislation of developing countries. A developing economy like India is willing to import GM food and its derivatives after the successful commercialization of Bt cotton in 2002. As a party to the N-KL Protocol, it is indispensable for India to formulate a legal framework and to address the safety, liability and regulatory issues surrounding GM foods in conformity with the provisions of the Protocol. The liability mechanism is also important where risk assessment and risk management are still at the implementation stage. Moreover, the country is facing GM infiltration issues with its neighbor Bangladesh. As a precautionary approach, there is a need to formulate rules and procedures of legal liability to address any kind of damage occurring in transboundary trade. In this context, the proposed work will attempt to analyze the liability regime in the existing Food Safety and Standards Act, 2006 in terms of applicability and domestic compliance, and to suggest legal and policy options for regulatory authorities.

Keywords: Commercialisation, food safety, FSSAI, genetically modified foods, India, liability.

225 Appropriate Technology: Revisiting the Movement in Developing Countries for Sustainability

Authors: Jayshree Patnaik, Bhaskar Bhowmick

Abstract:

The economic growth of any nation is steered by, and dependent on, innovation in technology. It can be plausibly argued that technology has enhanced the quality of life. Technology is linked with both an economic and a social structure. But there are some parts of the world, or communities, which are yet to reap the benefits of technological innovation. Businesses and organizations are now well equipped with cutting-edge innovations that improve firm performance and provide them with a competitive edge, but rarely do these have a positive impact on communities which are weak and marginalized. In recent times, it has been observed that communities are actively handling social or ecological issues with the help of indigenous technologies. Thus "appropriate technology" comes into the discussion, which is quite prevalent in the rural third world. Appropriate technology grew as a movement in the mid-1970s during the energy crisis, but it lost its standing in the following years when people started to describe it as an inferior or dead technology. Fundamentally, no technology is inherently inferior or overly sophisticated for a particular region. The relevance of appropriate technology lies in bringing technology to a larger and weaker section of the community, where the "bottom of the pyramid" can pay for technology if they find the price affordable. This is a theoretical paper which primarily revolves around how appropriate technology has faded and re-emerged in both developed and developing countries. The paper focuses on the various concepts, the history and the challenges faced by appropriate technology over the years. Appropriate technology follows a documented approach but lags in overall design and diffusion. How to diffuse technology into the poorer sections of a community remains an open question to the present time. Appropriate technology is multi-disciplinary in nature; this openness therefore allows varied working models for different problems. Appropriate technology is a friendly technology that seeks to improve the lives of people in constrained environments by providing affordable and sustainable solutions. Appropriate technology needs to be redefined in the era of modern technological advancement for sustainability.

Keywords: Appropriate technology, community, developing country, sustainability.

224 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

An intelligent transportation system is essential to build smarter cities. Machine learning based transportation prediction is a highly promising approach because it makes invisible aspects of the system visible. In this context, this research aims to build a prototype model that predicts transportation network behavior using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions; thus, bus delays often occur. To overcome this problem, a prediction model is presented to find patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built on real-time bus data gathered through public data portals and real-time Application Programming Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations from traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model was designed with a machine learning tool (RapidMiner Studio) and tested for bus delay prediction. This research presents experiments to increase the prediction accuracy of bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Based on this analysis method, this research therefore demonstrates an effective use of machine learning and urban big data to understand urban dynamics.
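
The paper builds its model in RapidMiner Studio; as a rough, hedged equivalent of the same supervised-learning setup, here is a Python sketch with scikit-learn in which the feature names (road speed, rainfall, boardings, scheduled headway) and all data are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Invented feature matrix: [road_speed_kmh, rain_mm_per_h,
# passengers_boarding, scheduled_headway_min]; target is delay in minutes.
rng = np.random.default_rng(0)
X = rng.uniform([10, 0, 0, 5], [60, 20, 40, 15], size=(500, 4))
# Synthetic rule: slower roads and heavier rain increase delay.
y = 0.15 * (60 - X[:, 0]) + 0.3 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (min):", mean_absolute_error(y_te, model.predict(X_te)))
```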

Keywords: Big data, bus headway prediction, machine learning, public transportation.

223 Carbamazepine Co-crystal Screening with Dicarboxylic Acids Co-Crystal Formers

Authors: Syarifah Abd Rahim, Fatinah Ab Rahman, Engku N. E. M. Nasir, Noor A. Ramle

Abstract:

Co-crystals are believed to improve the solubility and dissolution rates, and thus enhance the bioavailability, of poorly water-soluble drugs, particularly for the oral route of administration. Given the prevalence of poorly soluble drugs in the pharmaceutical industry, the screening of co-crystal formation using carbamazepine (CBZ) as a model drug compound with the dicarboxylic acid co-crystal formers (CCF) fumaric acid (FA) and succinic acid (SA) in ethanol has been studied. Co-crystal formation was studied by varying the mole ratio of CCF to CBZ to assess the effect of CCF concentration on co-crystal formation. Solvent evaporation, slurry and cooling crystallization, representing solution-based co-crystal screening methods, were used. Based on differential scanning calorimetry (DSC) analysis, the melting point of CBZ-SA at the different ratios was in the range 188-189 °C. For CBZ-FA form A and CBZ-FA form B, the melting points at the different ratios were in the ranges 174-175 °C and 185-186 °C, respectively. The product crystals from the screening were also characterized using X-ray powder diffraction (XRPD). The XRPD pattern profile analysis showed that CBZ co-crystals with FA and SA were successfully formed at all ratios studied. The findings revealed that the CBZ-FA co-crystal formed in two different polymorphs. CBZ-FA form A and form B were formed by the evaporation and slurry crystallization methods, respectively. In the cooling crystallization method, CBZ-FA form A formed at lower mole ratios of CCF to CBZ, and form B at higher ratios. This study showed that different methods and mole ratios during co-crystal screening can affect the outcome, such as the polymorphic form of the co-crystal produced. Careful attention is therefore needed during screening, since co-crystal formation is currently one of the promising approaches in pharmaceutical research and development for improving poorly soluble drugs.

Keywords: Carbamazepine, co-crystal, co-crystal former, dicarboxylic acid.

222 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, the security of identification systems can be easily attacked by various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model; this way, we can compare performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to our final model.
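
A hedged sketch of the kind of loss-function/optimizer sweep the paper describes, written with Keras; the tiny stand-in network and synthetic data are placeholders (the paper uses AlexNet/VGGNet/ResNet on a LivDet 2017 subset), and Center Loss is omitted because it requires a custom implementation:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Synthetic stand-in data: 64x64 grayscale "fingerprint" patches with
# one-hot live/spoof labels. Replace with a LivDet 2017 subset.
rng = np.random.default_rng(0)
x = rng.random((200, 64, 64, 1)).astype("float32")
y = tf.keras.utils.to_categorical(rng.integers(0, 2, 200), 2)
x_train, x_val, y_train, y_val = x[:160], x[160:], y[:160], y[160:]

def small_cnn():
    # Small stand-in network; the paper compares AlexNet, VGGNet, ResNet.
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(2, activation="softmax"),
    ])

losses = ["categorical_crossentropy", "categorical_hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

results = {}
for loss in losses:
    for opt in optimizers:
        model = small_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                         epochs=3, verbose=0)
        results[(loss, opt)] = max(hist.history["val_accuracy"])

best = max(results, key=results.get)
print("best (loss, optimizer):", best, f"val_acc={results[best]:.3f}")
```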

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

221 A Face-to-Face Education Support System Capable of Lecture Adaptation and Q&A Assistance Based On Probabilistic Inference

Authors: Yoshitaka Fujiwara, Jun-ichirou Fukushima, Yasunari Maeda

Abstract:

Keys to high-quality face-to-face education are ensuring flexibility in the way lectures are given, and providing care and responsiveness to learners. This paper describes a face-to-face education support system that is designed to raise the satisfaction of learners and reduce the workload on instructors. This system consists of a lecture adaptation assistance part, which assists instructors in adapting teaching content and strategy, and a Q&A assistance part, which provides learners with answers to their questions. The core component of the former part is a "learning achievement map", which is composed of a Bayesian network (BN). From learners' performance in exercises on relevant past lectures, the lecture adaptation assistance part obtains the information required to adapt appropriately the presentation of the next lecture. The core component of the Q&A assistance part is a case base, which accumulates cases consisting of questions expected from learners and answers to them. The Q&A assistance part is a case-based search system equipped with a search index which performs probabilistic inference. A prototype face-to-face education support system has been built, intended for the teaching of Java programming, and the approach was evaluated using this system. The expected degree of understanding of each learner for a future lecture was derived from his or her performance in exercises on past lectures, and this expected degree of understanding was used to select one of three adaptation levels. A model for determining the adaptation level most suitable for the individual learner has been identified. An experimental case base was built to examine the search performance of the Q&A assistance part, and it was found that the rate of successfully finding an appropriate case was 56%.
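
A toy sketch of the kind of probabilistic inference a learning achievement map performs; the two-node network, its states and the probability values below are invented for illustration, not the paper's BN:

```python
# Two-node BN: Mastery of a past topic -> Understanding of the next lecture.
# P(mastery) is estimated from exercise scores; CPT values are illustrative.
p_mastery = 0.7                       # learner solved 70% of past exercises
p_understand_given_mastery = 0.9      # P(understand next lecture | mastered)
p_understand_given_no_mastery = 0.3   # P(understand next lecture | not mastered)

# Marginalize over the parent node (law of total probability).
p_understand = (p_mastery * p_understand_given_mastery
                + (1 - p_mastery) * p_understand_given_no_mastery)

# Map the expected degree of understanding onto three adaptation levels.
if p_understand >= 0.75:
    level = "advanced presentation"
elif p_understand >= 0.5:
    level = "standard presentation"
else:
    level = "remedial presentation"

print(f"P(understanding) = {p_understand:.2f} -> {level}")
```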

Keywords: Bayesian network, face-to-face education, lecture adaptation, Q&A assistance.

220 The Social Dynamics of Pandemics: A Clinical Sociological Analysis of Precautions and Risks

Authors: C. Ardil

Abstract:

The COVID-19 pandemic has revealed the complex and multifaceted relationship between societal structures and public health, emphasizing the need for a holistic approach to understanding pandemic responses. This study utilizes a clinical sociological perspective to analyze the social impacts of pandemics, with a particular focus on how social determinants such as income, education, race, and geographical location influence vulnerability and resilience. It explores the critical role of risk perception, communication strategies, and community dynamics in shaping public adherence to precautionary measures like mask-wearing, social distancing, and vaccination. By examining the ways in which social norms, structural inequalities, and trust in institutions affect public behavior, this study provides insights into the challenges of managing health crises in diverse communities. Comparative case studies and policy analysis are employed to highlight the variations in pandemic responses across different countries and regions, illustrating the importance of coordinated strategies and community-based interventions. The findings underscore that effective pandemic response requires addressing underlying social inequities, fostering community cohesion, and ensuring equitable access to healthcare and information. This study contributes to a deeper understanding of the broader societal implications of pandemics and offers recommendations for building more resilient, inclusive public health systems capable of mitigating the impact of future global health emergencies.

Keywords: Behavioral medicine, clinical sociology, community health, COVID-19, COVID-19 pandemic, epidemiology, infectious diseases, pandemics, precautions, psychology, public health, risks, social determinants, social dynamics, social psychiatry, social psychology, socioeconomic status, structural functionalism.

219 Turbine Follower Control Strategy Design Based on Developed FFPP Model

Authors: Ali Ghaffari, Mansour Nikkhah Bahrami, Hesam Parsa

Abstract:

In this paper a comprehensive model of a fossil-fuelled power plant (FFPP) is developed in order to evaluate the performance of a newly designed turbine follower controller. Considering the drawbacks of previous works, an overall model is developed to minimize the error between each subsystem model output and the experimental data obtained at the actual power plant. The developed model is organized in two main subsystems, namely the boiler and the turbine. Considering the characteristics of each FFPP subsystem, different modeling approaches are developed. For the economizer, evaporator, superheater and reheater, first-order models are determined based on the principles of mass and energy conservation. Simulations verify the accuracy of the developed models. Due to the nonlinear characteristics of the attemperator, a new model based on a genetic-fuzzy system utilizing the Pittsburgh approach is developed, showing promising performance vis-à-vis models derived with other methods such as ANFIS. The optimization constraints are handled utilizing penalty functions. The effect of increasing the number of rules and membership functions on the performance of the proposed model is also studied and evaluated. The turbine model is developed based on the equation of adiabatic expansion. The parameters of all evaluated models are tuned by means of evolutionary algorithms. Based on the developed model, a fuzzy PI controller is developed and successfully implemented in the turbine follower control strategy of the plant. In this control strategy, instead of keeping the control parameters constant, they are adjusted on-line with regard to the error and the error rate. It is shown that the response of the system improves significantly and that fuel consumption decreases considerably.
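
A minimal sketch of the on-line gain adjustment idea: a PI controller whose gains are scheduled by crisp rules on the error and error rate (a crude stand-in for the paper's fuzzy rule base; the breakpoints, gain values and toy plant are all invented):

```python
def scheduled_gains(error, d_error):
    """Crude stand-in for a fuzzy rule base: larger gains when the
    error is large, gentler gains near the setpoint to limit overshoot."""
    e, de = abs(error), abs(d_error)
    if e > 0.5 or de > 0.2:
        return 2.0, 0.5   # Kp, Ki: aggressive region
    elif e > 0.1:
        return 1.0, 0.2   # moderate region
    return 0.4, 0.05      # fine-regulation region

def simulate(setpoint=1.0, dt=0.1, steps=200):
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        kp, ki = scheduled_gains(err, (err - prev_err) / dt)
        integ += err * dt
        u = kp * err + ki * integ
        y += dt * (-y + u)      # toy first-order plant: dy/dt = -y + u
        prev_err = err
    return y

print(f"final output: {simulate():.3f}")
```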

Keywords: Attemperator, Evolutionary algorithms, Fossil fuelled power plant (FFPP), Fuzzy set theory, Gain scheduling.

218 A Post Keynesian Environmental Macroeconomic Model for Agricultural Water Sustainability under Climate Change in the Murray-Darling Basin, Australia

Authors: Ke Zhao, Colin Richardson, Jerry Courvisanos, John Crawford

Abstract:

Climate change has profound consequences for the agriculture of south-eastern Australia and for climate-induced water shortage in the Murray-Darling Basin. Post Keynesian Economics (PKE) macro-dynamics, along with Kaleckian investment and growth theory, are used to develop an ecological-economic system dynamics model of this complex nonlinear river basin system. The Murray-Darling Basin Simulation Model (MDB-SM) uses the principles of PKE to incorporate the fundamental uncertainty of the economic behavior of farmers regarding the investments they make and the climate change they face, particularly as regards water ecosystem services. MDB-SM provides a framework for macroeconomic policies, especially for long-term fiscal policy and for policy directed at the sustainability of agricultural water, as measured by socio-economic well-being considerations, which include sustainable consumption and investment in the river basin. The model can also reproduce other ecological and economic aspects and, for certain parameters and initial values, exhibits endogenous business cycles and ecological sustainability with realistic characteristics. Most importantly, MDB-SM provides a platform for the analysis of alternative economic policy scenarios. These results reveal the importance of understanding water ecosystem adaptation under climate change by integrating a PKE macroeconomic analytical framework with the system dynamics modelling approach. Once parameterised and supplied with historical initial values, MDB-SM should prove to be a practical tool for providing alternative long-term policy simulations of agricultural water and socio-economic well-being.
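
To give a flavour of the stock-flow, system dynamics style of modelling described, here is a deliberately tiny Euler-integration sketch with two interacting stocks; the equations and parameter values are invented placeholders and bear no relation to the actual MDB-SM equations:

```python
# Toy stock-flow model: water storage and productive capital interact.
# dW/dt = inflow - irrigation_use;  dK/dt = investment - depreciation,
# with investment rising when water is plentiful (a loose Kaleckian
# flavour: investment responds to expected activity, proxied by water).
dt, T = 0.25, 40.0
W, K = 100.0, 50.0          # initial stocks (arbitrary units)
inflow, dep = 8.0, 0.05

t = 0.0
while t < T:
    irrigation = 0.06 * K               # water use scales with capital
    investment = 0.02 * W               # easier finance when water is ample
    W += dt * (inflow - irrigation)     # Euler step for each stock
    K += dt * (investment - dep * K)
    t += dt

print(f"W = {W:.1f}, K = {K:.1f} after {T} years")
```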

Keywords: Agricultural water, Macroeconomic dynamics, Modeling, Investment dynamics, Sustainability, Unemployment, Economics, Keynesian, Kaleckian.

217 A Temporal QoS Ontology for ERTMS/ETCS

Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien

Abstract:

Ontologies offer a means for representing and sharing information in many domains, particularly complex domains. For example, they can be used for representing and sharing the information of a System Requirement Specification (SRS) of a complex system, such as the SRS of ERTMS/ETCS written in natural language. Since this system is a real-time and critical system, generic ontologies such as OWL and generic ERTMS ontologies provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable the representation of dynamic features evolving in time within a generic ontology, with minimal redesign of it. The separation of temporal information from other information can help to predict system runtime operation and to properly design and implement it. In addition, it is helpful to provide reasoning and querying techniques over the temporal information represented in the ontology, in order to detect potential temporal inconsistencies. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning about and querying both temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers clarifies the distinction between non-QoS entities and QoS entities in the ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, are presented.
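
A small sketch of querying a temporal relation over ontology individuals, using Python's rdflib; the namespace, the event instances and the startsAt property are invented for illustration, and a real implementation would reason over the proposed 3-layer QoS ontology instead:

```python
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/ertms#")   # hypothetical namespace
g = Graph()

# Two handover-related events with start times (invented instances).
for name, start in [("RequestHandover", 10.0), ("AckHandover", 12.5)]:
    ev = EX[name]
    g.add((ev, RDF.type, EX.TemporalEvent))
    g.add((ev, EX.startsAt, Literal(start, datatype=XSD.double)))

# Query: event pairs where the first starts before the second, i.e. a
# minimal temporal "before" relation checked at query time.
q = """
PREFIX ex: <http://example.org/ertms#>
SELECT ?a ?b WHERE {
  ?a ex:startsAt ?ta . ?b ex:startsAt ?tb .
  FILTER (?ta < ?tb)
}"""
for a, b in g.query(q):
    print(f"{a} before {b}")
```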

Keywords: System Requirement Specification, ERTMS/ETCS, Temporal Ontologies, Domain Ontologies.

216 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, Heritage Building Information Modeling (HBIM) is considered an efficient tool to represent and manage information on Cultural Heritage (CH). The basis of this tool relies on a 3D model generally obtained from a Cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired Level of Development (LOD), Level of Information (LOI) and Grade of Generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural elements (e.g., regular walls, columns, floors, wall openings) and architectural elements (e.g., cornices, moldings and other minor details) using the point cloud as reference. Then, Dynamo is used for the generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added within the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method, and the fact that only one BIM software package with its respective plugin is used for the scan-to-BIM modeling process, mean that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
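
As a flavour of the generative (Dynamo-style) second step, here is a small Python/numpy sketch that generates a parametric point grid for a circular barrel vault from its span, length and rise; the dimensions are invented, and in practice such points would feed a Dynamo surface or solid node rather than be printed:

```python
import numpy as np

def barrel_vault_points(span=6.0, length=10.0, rise=2.5, nu=21, nv=11):
    """Return an (nu*nv, 3) array of points on a circular barrel vault.
    The arc radius follows from span s and rise r: R = (s^2/4 + r^2)/(2r)."""
    R = (span**2 / 4 + rise**2) / (2 * rise)
    half_angle = np.arcsin((span / 2) / R)
    theta = np.linspace(-half_angle, half_angle, nu)   # across the span
    y = np.linspace(0.0, length, nv)                   # along the vault
    # z is measured from the springing line: crown at z = rise, edges at 0.
    pts = [(R * np.sin(t), yy, R * np.cos(t) - (R - rise))
           for t in theta for yy in y]
    return np.array(pts)

pts = barrel_vault_points()
print(pts.shape, pts[:2])
```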

Keywords: Cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit.

215 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with lung cancer (LC) have an average life expectancy of five years; early diagnosis, detection and prediction widen the treatment options beyond risky invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter together with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. The region properties area, perimeter, diameter, centroid and eccentricity are measured for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient's condition as normal or abnormal, while Artificial Neural Networks (ANN) are used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and on-line detection in future research.
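
A hedged sketch of the feature-extraction and first-level classification chain (DWT features plus GLCM texture into KNN), using PyWavelets and scikit-image in Python; the "slices" are random stand-ins, and note that scikit-image ≥ 0.19 names the GLCM functions graycomatrix/graycoprops (older releases spell them greycomatrix/greycoprops):

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def features(img8):
    """img8: 2-D uint8 slice. DWT approximation stats + GLCM texture."""
    cA, (cH, cV, cD) = pywt.dwt2(img8.astype(float), "haar")
    glcm = graycomatrix(img8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [cA.mean(), cA.std(), np.abs(cD).mean(),
            graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0]]

# Random stand-in "CT slices" and labels (0 = normal, 1 = abnormal).
rng = np.random.default_rng(1)
imgs = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.array([features(im) for im in imgs])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:30], labels[:30])
print("level-1 predictions:", knn.predict(X[30:]))
```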

Keywords: ANN, DWT, GLCM, KNN, ROI, artificial neural networks, discrete wavelet transform, gray-level co-occurrence matrix, k-nearest neighbor, region of interest.

214 Design and Development of Constant Stress Composite Cantilever Beam

Authors: Vinod B. Suryawanshi, Ajit D. Kelkar

Abstract:

Composite materials, due to their unique properties such as high strength-to-weight ratio, corrosion resistance, and impact resistance, have huge potential as structural materials in automotive, construction and transportation applications. However, these properties often come at higher cost, owing to complex design methods, difficult manufacturing processes and raw material cost. Traditionally, tapered laminated composite structures are manufactured using an autoclave manufacturing process with the ply drop-off technique. Autoclave manufacturing, though very powerful, suffers from high capital investment and high energy consumption. As per current trends in composite manufacturing, Out-of-Autoclave (OoA) processes are seen as emerging technologies for manufacturing structural composite components for aerospace and defense applications. However, these processes need improvement to make them reliable and consistent. In this paper, the feasibility of using an out-of-autoclave process to manufacture a variable-thickness cantilever beam is discussed. The minimum-weight design for the composite beam is obtained using the constant stress beam concept, by tailoring the thickness of the beam. The ply drop-off technique was used to fabricate the variable-thickness beam from glass/epoxy prepregs. Experiments were conducted to measure bending stresses along the span of the cantilever beam at different intervals by applying a concentrated load at the free end. The experimental results showed that the stresses in the beam at the different intervals were constant. This demonstrates the ability of the OoA process to manufacture a constant stress beam. A finite element model of the constant stress beam was developed using commercial finite element simulation software. The simulation results agreed very well with the experimental results, validating the design and manufacturing approach used.
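
For a rectangular cross-section of constant width b under a tip load P, the bending stress is σ = 6M(x)/(b·h(x)²) with M(x) = P(L − x), so holding σ constant gives the thickness profile h(x) = √(6P(L − x)/(bσ)). A quick sketch with illustrative numbers (not the paper's beam dimensions):

```python
import math

P = 500.0        # tip load, N (illustrative)
L = 0.5          # beam length, m
b = 0.05         # beam width, m
sigma = 40e6     # target design stress, Pa

def thickness(x):
    """Thickness h(x) that keeps the bending stress constant at sigma."""
    return math.sqrt(6 * P * (L - x) / (b * sigma))

for x in [0.0, 0.125, 0.25, 0.375]:
    h = thickness(x)
    # Verify: 6*M/(b*h^2) recovers the target stress at every station.
    check = 6 * P * (L - x) / (b * h**2)
    print(f"x = {x:.3f} m  h = {1e3*h:.2f} mm  stress = {check/1e6:.1f} MPa")
```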

Keywords: Beams, Composites, Constant Stress, Structures.

213 Knowledge Management Strategies within a Corporate Environment

Authors: Daniel J. Glauber

Abstract:

Knowledge transfer between personnel could benefit an organization's competitive advantage in the marketplace through a strategic approach to knowledge management. A lack of information sharing between personnel can create knowledge transfer gaps while restricting decision-making processes. Knowledge transfer between personnel can potentially improve information sharing based on an implemented knowledge management strategy. An organization's capacity to gain more knowledge is aligned with the organization's prior or existing captured knowledge. This case study attempted to understand the overall influence of a knowledge management strategy (KMS) within the corporate environment and the knowledge exchange between personnel. The significance of this study was to help understand how organizations can improve the return on investment (ROI) of a knowledge management strategy within a knowledge-centric organization. A qualitative descriptive case study was the research design selected for this study. Developing a knowledge management strategy acceptable at all levels of the organization requires cooperation in support of a common organizational goal, working with management and executive members to develop a protocol in which knowledge transfer becomes standard practice in multiple tiers of the organization. The knowledge transfer process can be made measurable by focusing on specific elements of the organizational process, including personnel transitions, to help reduce the time required to understand the job. The organization studied in this research acknowledged the need for improved knowledge management activities within the organization to help organize, retain, and distribute information throughout the workforce. Data produced from the study indicate three main themes identified by the participants: information management, organizational culture, and knowledge sharing within the workforce. These themes indicate a possible connection between an organization's KMS, the organization's culture, knowledge sharing, and knowledge transfer.

Keywords: Knowledge management strategies, knowledge transfer, knowledge management, knowledge capacity.

212 Multi-Objective Optimization of Gas Turbine Power Cycle

Authors: Mohsen Nikaein

Abstract:

Because of the importance of energy, the optimization of power generation systems is necessary. Gas turbine cycles are a suitable means of fast power generation, but their efficiency is relatively low. In order to achieve higher efficiencies, several measures are proposed, such as recovery of heat from the exhaust gases in a regenerator, utilization of an intercooler in a multistage compressor, and steam injection into the combustion chamber. Even with these components, however, thermodynamic optimization of the gas turbine cycle is necessary. In this article, multi-objective genetic algorithms are employed for Pareto-approach optimization of the Regenerative-Intercooling-Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are optimized simultaneously. The important objective functions considered for optimization are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency, and the net output power of the RIGT cycle. These objectives usually conflict with each other. The design variables consist of thermodynamic parameters such as the compressor pressure ratio (Rp), excess air in combustion (EA), turbine inlet temperature (TIT) and inlet air temperature (T0). At the first stage, single-objective optimization is investigated, and the Non-dominated Sorting Genetic Algorithm (NSGA-II) is then used for multi-objective optimization. Optimization procedures are performed for two and three objective functions, and the results are compared for the RIGT cycle. In order to investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the output objectives, are considered individually, and the Pareto front is depicted for each set. The sets of decision variables selected from this Pareto front yield the best possible combinations of the corresponding objective functions. No point on the Pareto front is superior to another, but all are superior to any other point. For three-objective optimization, the results are given in tables.
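
To make the Pareto idea concrete, here is a minimal sketch on an ideal air-standard Brayton cycle without regeneration or intercooling (a deliberate simplification of the RIGT model, with no NSGA-II): sweep the pressure ratio, evaluate two conflicting objectives, and keep the non-dominated points:

```python
import numpy as np

gamma, cp = 1.4, 1.005          # air-standard properties (cp in kJ/kg.K)
T1, T3 = 300.0, 1400.0          # compressor inlet and turbine inlet T (K)
k = (gamma - 1) / gamma

rp = np.linspace(2, 40, 200)    # compressor pressure ratio sweep
eta = 1.0 - rp**(-k)                                       # thermal efficiency
w_net = cp * T3 * (1 - rp**(-k)) - cp * T1 * (rp**k - 1)   # net work, kJ/kg

# Keep points not dominated in (efficiency, net work), both maximized.
pts = np.column_stack([eta, w_net])
pareto = [i for i, p in enumerate(pts)
          if not any((q >= p).all() and (q > p).any() for q in pts)]
print(f"{len(pareto)} Pareto points, rp from "
      f"{rp[pareto[0]]:.1f} to {rp[pareto[-1]]:.1f}")
```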

Keywords: Exergy, Entropy Generation, Brayton Cycle, Design Parameters, Optimization, Genetic Algorithm, Multi-Objective.

211 Personnel Selection Based on Step-Wise Weight Assessment Ratio Analysis and Multi-Objective Optimization on the Basis of Ratio Analysis Methods

Authors: Emre Ipekci Cetin, Ebru Tarcan Icigen

Abstract:

The personnel selection process is considered one of the most important and most difficult issues in human resources management. At the personnel selection stage, applicants are assessed according to certain criteria and efforts are made to select the most appropriate candidate. However, this process can be complicated for the managers who carry out the selection. Candidates should be evaluated according to different criteria such as work experience, education, foreign language level, etc. It is crucial that a rational selection process is carried out by considering all the criteria in an integrated structure. In this study, the problem of choosing the front office manager of a five-star accommodation enterprise operating in Antalya is addressed by using multi-criteria decision-making methods. In this context, the SWARA (Step-wise Weight Assessment Ratio Analysis) and MOORA (Multi-Objective Optimization on the basis of Ratio Analysis) methods, which have relatively few applications compared with other methods, have been used together. First, the SWARA method was used to calculate the weights of the criteria and sub-criteria determined by the business. After the weights of the criteria were obtained, the MOORA method was used to rank the candidates using the ratio system and the reference point approach. Recruitment processes differ from sector to sector and from operation to operation, and there are a number of criteria that must be taken into consideration by businesses in accordance with the structure of each sector. Once these criteria have been carefully selected, it is of utmost importance that all candidates are evaluated objectively within their framework. In this study, the staff selection process was handled by using the SWARA and MOORA methods together.
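
Both methods reduce to simple arithmetic, so a compact sketch is possible; the comparative-importance judgments, criteria and candidate scores below are invented for illustration:

```python
import numpy as np

# --- SWARA: criteria already ranked by importance; s_j is the expert's
# comparative importance of each criterion vs. the one ranked above it
# (s_1 = 0 for the most important criterion).
s = np.array([0.0, 0.30, 0.20, 0.15])        # illustrative judgments
k = 1.0 + s                                  # k_j = s_j + 1
q = np.cumprod(1.0 / k)                      # q_1 = 1, q_j = q_{j-1} / k_j
w = q / q.sum()                              # normalized criterion weights

# --- MOORA ratio system: rows = candidates, cols = criteria (invented
# scores for experience, education, language, interview; all benefit-type).
X = np.array([[8., 7., 6., 9.],
              [6., 9., 8., 7.],
              [9., 6., 7., 8.]])
N = X / np.sqrt((X**2).sum(axis=0))          # vector normalization
y = (N * w).sum(axis=1)                      # benefits minus costs (no costs here)
print("weights:", w.round(3), "ranking (best first):", np.argsort(-y) + 1)
```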

Keywords: Accommodation establishments, human resource management, MOORA, multi criteria decision making, SWARA.

210 Meta Model Based EA for Complex Optimization

Authors: Maumita Bhattacharya

Abstract:

Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, or approximations of the actual fitness functions, to be evaluated. These meta models are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time through the controlled use of meta models (in this case an approximate model generated by Support Vector Machine regression) to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
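
A minimal sketch of the surrogate-assisted idea (in the spirit of DAFHEA, not a reimplementation of it): an SVM regressor is trained on the true evaluations seen so far, ranks the population cheaply, and only the most promising individuals receive expensive true evaluations:

```python
import numpy as np
from sklearn.svm import SVR

def expensive_fitness(x):
    """Stand-in for a costly simulation (sphere benchmark function)."""
    return float((x**2).sum())

rng = np.random.default_rng(0)
dim, pop_size = 5, 30
pop = rng.uniform(-5, 5, (pop_size, dim))

# Seed the surrogate with a handful of true evaluations.
X_seen = pop[:10].copy()
y_seen = np.array([expensive_fitness(x) for x in X_seen])

for gen in range(20):
    surrogate = SVR(kernel="rbf", C=10.0).fit(X_seen, y_seen)
    # Rank the whole population cheaply with the surrogate...
    approx = surrogate.predict(pop)
    elite = pop[np.argsort(approx)[:5]]
    # ...but spend true evaluations only on the promising few.
    y_true = np.array([expensive_fitness(x) for x in elite])
    X_seen = np.vstack([X_seen, elite])
    y_seen = np.concatenate([y_seen, y_true])
    # Simple mutation-based offspring around the elite.
    pop = elite[rng.integers(0, 5, pop_size)] + rng.normal(0, 0.5, (pop_size, dim))

print("best true fitness found:", y_seen.min().round(4))
```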

Keywords: Meta model, Evolutionary algorithm, Stochastic technique, Fitness function, Optimization, Support vector machine.

209 Assessment of Multi-Domain Energy Systems Modelling Methods

Authors: M. Stewart, Ameer Al-Khaykan, J. M. Counsell

Abstract:

Emissions are a consequence of electricity generation. As a major option for low-carbon generation, local energy systems (LES) featuring Combined Heat and Power with solar PV (CHPV) have significant potential to increase energy performance, increase resilience, and offer greater control of local energy prices, while complementing the UK's emissions standards and targets. Recent advances in the dynamic modelling and simulation of buildings and clusters of buildings using the IDEAS framework have successfully validated a novel multi-vector approach (simultaneous control of both heat and electricity) to integrating the wide range of primary and secondary plant typical of local energy system designs, including CHP, solar PV, gas boilers, absorption chillers and thermal energy storage, with the associated electrical and hot water networks all operating under a single unified control strategy. Results from this work indicate through simulation that the integrated control of thermal storage can play a pivotal role in optimizing system performance, well beyond present expectations. Environmental impact analysis and reporting for all energy systems, including CHPV LES, presently employ a static annual average carbon emissions intensity for grid-supplied electricity. This paper focuses on establishing and validating CHPV environmental performance against conventional emissions values and assessment benchmarks, analyzing emissions performance without and with an active thermal store in a notional group of non-domestic buildings. The results of this analysis are presented and discussed in the context of performance validation and of quantifying the reduced environmental impact of CHPV systems with active energy storage in comparison with conventional LES designs.
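
The gap between static and time-resolved emissions accounting that the paper highlights can be shown in a few lines; the half-hourly grid carbon intensities and the import profile below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
halfhours = 48
# Invented grid carbon intensity (kgCO2/kWh): dirtier toward the peak.
ci = 0.18 + 0.12 * np.sin(np.linspace(0, np.pi, halfhours))
# Invented grid import profile (kWh per half-hour) for the building group.
grid_import = np.clip(rng.normal(35, 10, halfhours), 0, None)

static_ci = ci.mean()                           # annual-average style figure
emis_static = (grid_import * static_ci).sum()   # conventional reporting
emis_dynamic = (grid_import * ci).sum()         # time-resolved accounting

print(f"static:  {emis_static:.1f} kgCO2/day")
print(f"dynamic: {emis_dynamic:.1f} kgCO2/day "
      f"({100*(emis_dynamic/emis_static - 1):+.1f}% vs static)")
```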

Keywords: CHPV, thermal storage, control, dynamic simulation.

208 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka

Authors: H. M. N. L. Handagiripathira

Abstract:

The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 different locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm to 55.4 mS/cm near the southern and northern ends of the lagoon, respectively, and salinity likewise varied from 7.2 psu to 32.1 psu. The average pH of the water was 7.6 and the average water temperature was 28.7 °C. The grain size analysis gave the mass fractions of the samples as sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%) across the sampling locations. The surface sediment samples, of 1 kg wet weight each from the upper 5-10 cm layer, were oven-dried at 105 °C for 24 hours to constant weight, homogenized and sieved through a 2 mm sieve (IAEA Technical Report Series No. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An ultra-low-background broad energy high-purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements, with Canberra Industries' Laboratory Sourceless Calibration Software (LabSOCS) mathematical efficiency calibration approach and the Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 µSv/year and 74.6 µSv/year, respectively. The results of this study provide baseline information on natural and artificial radioactive isotopes and on environmental pollution, together with information on radiological risk.
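
The derived radiological quantities follow from the activity concentrations via standard UNSCEAR-style formulas; the sketch below applies the commonly used literature coefficients to the reported means, so small differences from the paper's figures are expected since the exact coefficients it used are not stated:

```python
# Mean activity concentrations from the study (Bq/kg).
C_U, C_Th, C_K = 24.0, 67.0, 181.0   # 238U (as 226Ra proxy), 232Th, 40K

# Absorbed dose rate in air, nGy/h (UNSCEAR 2000 coefficients).
D = 0.462 * C_U + 0.604 * C_Th + 0.0417 * C_K

# Radium equivalent activity, Bq/kg.
Ra_eq = C_U + 1.43 * C_Th + 0.077 * C_K

# External hazard index (dimensionless; should stay below 1).
H_ex = C_U / 370.0 + C_Th / 259.0 + C_K / 4810.0

# Annual effective dose equivalent, outdoor (uSv/y):
# 0.7 Sv/Gy conversion factor, 0.2 outdoor occupancy, 8760 h/y.
AEDE = D * 8760 * 0.2 * 0.7 * 1e-3

print(f"D = {D:.1f} nGy/h, Ra_eq = {Ra_eq:.1f} Bq/kg, "
      f"H_ex = {H_ex:.2f}, AEDE = {AEDE:.1f} uSv/y")
```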

Keywords: Gamma spectrometry, lagoon, radioactivity, sediments.

207 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia

Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman

Abstract:

Through progress in pavement design, a method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG) was developed. Nowadays, the road and highway network in Saudi Arabia is evolving as a result of increasing traffic volume, and the MEPDG is currently implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires the calibration of distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for calibration of the MEPDG in central Saudi Arabia. The first goal is thus data collection for the design of flexible pavement under the local conditions of the Riyadh region. Since the collected data must be converted into model inputs, the main goal of this paper is the analysis of the collected data. The data analysis in this paper covers: truck classification, traffic growth factor, Annual Average Daily Truck Traffic (AADTT), Monthly Adjustment Factors (MAFi), Vehicle Class Distribution (VCD), truck hourly distribution factors, Axle Load Distribution Factors (ALDF), the number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, providing an approach for the successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
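
A small sketch of how some of these traffic inputs are assembled (compound growth projection of AADTT, monthly adjustment factors that average to 1.0, and a vehicle class distribution); all counts are invented placeholders, not Riyadh data:

```python
# Toy traffic-input preparation in the MEPDG style; all counts invented.
base_aadtt = 1200            # two-way AADTT in the base year (trucks/day)
growth_rate = 0.04           # compound annual traffic growth factor

# Projected AADTT over a 20-year design life (compound growth).
aadtt = {yr: base_aadtt * (1 + growth_rate) ** yr for yr in range(0, 21, 5)}

# Monthly adjustment factors: monthly truck volume relative to the
# annual average; the 12 factors average to 1.0 by construction.
monthly_counts = [980, 1010, 1150, 1230, 1300, 1350,
                  1380, 1340, 1250, 1180, 1050, 1000]
avg = sum(monthly_counts) / 12.0
maf = [round(c / avg, 3) for c in monthly_counts]

# Vehicle class distribution: percentage share of each FHWA truck class.
class_counts = {4: 60, 5: 300, 6: 180, 9: 540, 12: 120}
total = sum(class_counts.values())
vcd = {cls: round(100 * n / total, 1) for cls, n in class_counts.items()}

print(round(aadtt[20]), maf[:3], vcd)
```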

Keywords: Mechanistic-empirical pavement design guide, traffic characteristics, materials properties, climate, Riyadh.

206 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with specific focus on Infrared (IR) and Visible Image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observed several challenges with popular image fusion methods: their high computational cost and complex processing steps provide accurate fused results, but also make them hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper therefore aim to offer good results with minimal time complexity.
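
The paper's implementation is in MATLAB; as a hedged Python analogue of MST fusion, here is a wavelet-domain sketch using PyWavelets with a simple average/max-absolute coefficient rule (the paper's region-based rule with consistency verification is richer):

```python
import numpy as np
import pywt

def fuse_mst(visible, infrared, wavelet="db1", levels=3):
    """Wavelet-domain fusion: average the approximation bands, keep the
    max-absolute detail coefficient at each position (a common rule;
    the paper's region-based rule with consistency checks is richer)."""
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
    ci = pywt.wavedec2(infrared.astype(float), wavelet, level=levels)
    fused = [(cv[0] + ci[0]) / 2.0]                     # approximation band
    for dv, di in zip(cv[1:], ci[1:]):                  # detail triplets
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)

# Random stand-ins for registered VI/IR frames of identical size.
rng = np.random.default_rng(3)
vis = rng.uniform(0, 255, (128, 128))
ir = rng.uniform(0, 255, (128, 128))
print(fuse_mst(vis, ir).shape)
```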

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

205 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-for-use components or products made of FRP composites are seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and the exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite-fiber-reinforced/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. This was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where the columns indicate varying feed rate and the rows indicate varying cutting speed. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that minimizes the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
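
The paper runs its ANOVA in Minitab; an equivalent two-factor analysis can be sketched in Python with statsmodels, using an invented damage matrix laid out exactly as described (columns vary feed rate, rows vary cutting speed):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented damage matrix (%), rows = cutting speeds, cols = feed rates.
feeds = [0.05, 0.10, 0.15]            # mm/rev
speeds = [20, 40, 60]                 # m/min
damage = np.array([[0.8, 1.1, 1.6],
                   [1.0, 1.4, 1.9],
                   [1.3, 1.8, 2.4]])

rows = [{"speed": s, "feed": f, "damage": damage[i, j]}
        for i, s in enumerate(speeds) for j, f in enumerate(feeds)]
df = pd.DataFrame(rows)

# Two-factor ANOVA without interaction (one observation per cell).
model = ols("damage ~ C(feed) + C(speed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```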

Keywords: Drilling of Composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites.

204 Parameters Influencing Human-Machine Interaction in Hospitals

Authors: Hind Bouami, Patrick Millot

Abstract:

Handling the complexity of life-critical systems requires appropriate technology and human agents with the right functions, such as knowledge, experience, and competence in problem prevention and solving. Human agents are involved in the management and control of human-machine system performance. Documenting human agents' situation awareness is crucial to support human-machine designers' decision-making. Knowledge about the risks, critical parameters, and factors that can impact and threaten an automation system's performance should be collected using preventive and retrospective approaches. This paper aims to document operators' situation awareness through the analysis of feedback from automated organizations. The analysis of feedback from automated hospital pharmacies helps identify and control the critical parameters influencing human-machine interaction in order to enhance system performance and security. Our human-machine system evaluation approach has been deployed in the Macon hospital center's pharmacy, which has been equipped with automated drug dispensing systems since 2015. The automation specifications relate to technical aspects, human-machine interaction, and human aspects. The evaluation of drug delivery automation performance in the Macon hospital center has shown that the performance of the automated activity depends both on the performance of the automated solution chosen and on the control of systemic factors. In fact, 80.95% of the automation specifications related to the chosen Sinteco automated solution are met. The performance of the chosen automated solution accounts for 28.38% of the automation specifications' performance in the Macon hospital center. The remaining systemic parameters involved in the automation specifications' performance still need to be controlled.

Keywords: Life-critical systems, situation awareness, human-machine interaction, decision-making.

203 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral Imaging (HSI) is an attractive tool for researchers inspecting food quality and safety in applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection in apples, quality analysis and grading of citrus fruits, bruise detection in strawberries, visualization of sugar distribution in melons, measuring the ripening of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can concurrently collect large amounts of spatial and spectral data on the objects being observed. The technique yields exceptional detection capability that cannot otherwise be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detection of fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving problems of a two-pattern nature. The conventional FKT method has been improved with kernel machines to increase the nonlinear discrimination ability and to capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of ground meat by treating fat as the target class to be separated from the remaining classes (the clutter). We have applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for fat ratio in ground meat.
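
For readers unfamiliar with the transform, the sketch below illustrates the linear FKT on which the KFKT is built; the kernel variant used in the paper additionally maps pixels through a kernel function, which this illustration omits, and the array shapes and top-k scoring rule are assumptions for demonstration only.

```python
# A minimal sketch of the linear Fukunaga-Koontz transform on two pixel
# classes (target vs. clutter); the paper's kernel variant (KFKT) is not
# reproduced here.
import numpy as np

def fkt(X_target, X_clutter, eps=1e-10):
    # Class covariance matrices (rows are pixels, columns are spectral bands).
    R1 = np.cov(X_target, rowvar=False)
    R2 = np.cov(X_clutter, rowvar=False)
    # Whitening operator P such that P (R1 + R2) P.T = I.
    w, E = np.linalg.eigh(R1 + R2)
    P = np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ E.T
    # Eigendecomposition of the whitened target covariance. The eigenvalues
    # of the two whitened covariances sum to one, so directions that are
    # most energetic for the target are least energetic for the clutter.
    lam, V = np.linalg.eigh(P @ R1 @ P.T)
    return P, lam, V

def target_score(pixel, P, V, k=3):
    # Energy of a pixel along the k dominant target eigenvectors.
    y = V[:, -k:].T @ (P @ pixel)
    return float(y @ y)
```

A pixel whose energy concentrates along the dominant target eigenvectors would then be labelled fat rather than clutter, which is what makes the transform suited to two-pattern problems.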

Keywords: Food (Ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods.

202 Partnering with Stakeholders to Secure Digitization of Water

Authors: Sindhu Govardhan, Kenneth G. Crowther

Abstract:

Modernisation of the water sector is leading to increased connectivity and integration of emerging technologies with traditional ones, creating new security risks. The convergence of Information Technology (IT) with Operational Technology (OT) results in solutions that are spread across larger geographic areas, increasingly consist of interconnected Industrial Internet of Things (IIOT) devices and software, rely on the integration of legacy with modern technologies, and use complex supply-chain components, leading to complex architectures and communication paths. The result is that multiple parties collectively own and operate these emergent technologies, threat actors find new paths to exploit, and traditional cybersecurity controls are inadequate. Our approach is to explicitly identify and draw the data flows that cross trust boundaries between the owners and operators of various aspects of these emerging and interconnected technologies. On these data flows, we layer potential attack vectors to create a frame of reference for evaluating possible risks against connected technologies. Finally, we identify where existing controls, mitigations, and other remediations exist across industry partners (e.g., suppliers, product vendors, integrators, water utilities, and regulators). From these, we are able to understand potential gaps in security, the roles in the supply chain that are most likely to effectively remediate those security gaps, and test cases to evaluate and strengthen security across these partners. This informs a “shared responsibility” solution that recognises that security is multi-layered and requires collaboration to be successful. This shared-responsibility security framework improves visibility, understanding, and control across the entire supply chain, particularly for those water utilities that are accountable for safe and continuous operations.
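
A minimal sketch of the data-flow inventory this approach implies is given below; the party names, flows, and controls are invented for illustration and do not come from the paper.

```python
# A hypothetical sketch of a cross-boundary data-flow inventory: flows that
# cross a trust boundary and have no mapped control are surfaced as gaps.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str            # owning party, e.g. a sensor vendor
    destination: str       # receiving party, e.g. a utility SCADA system
    crosses_boundary: bool # does this flow cross a trust boundary?
    controls: tuple = ()   # existing mitigations mapped to this flow

flows = [
    DataFlow("IIOT level sensor", "utility SCADA", True, ("TLS", "allow-list")),
    DataFlow("integrator laptop", "pump controller", True),
    DataFlow("plant historian", "utility dashboard", False),
]

# Gaps inform which supply-chain role should remediate and what to test.
for f in flows:
    if f.crosses_boundary and not f.controls:
        print(f"unmitigated cross-boundary flow: {f.source} -> {f.destination}")
```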

Keywords: Cyber security, shared responsibility, IIOT, threat modelling.

201 Millennial Teachers of Canada: Innovation within the Boxed-In Constraints of Tradition

Authors: Lena Shulyakovskaya

Abstract:

Every year, schools aim to develop and adopt new technology and pedagogy as a way to equip today's students with the needed 21st century skills. However, the field of primary and secondary education may not be as open to embracing change in reality. Despite the drive toward reform and innovation, the field of education in Canada is still very much steeped in tradition and uses many practices that came into effect over 50 years ago, among them employment and retention practices. Millennials are the youngest generation of professionals entering the workplace at this time, and the ones leaving their jobs within just a few years: almost half of new teachers leave Canadian schools within their first five years on the job. This paper discusses one of the contributing factors that leads Canadian millennial teachers either to leave or to stay in the profession: the standardized education system. Using an exploratory case study approach, in-depth interviews with former and current millennial teachers were conducted to learn about their experiences within the K-12 system. Among the findings were the young teachers' concerns about constant changes to teaching practices and technological implementations that claimed to advance teaching and learning, yet in reality only disguised and reiterated the same traditional, outdated, and standardized practices that already existed. Furthermore, while many millennial teachers aspired to be innovative with their curriculum and teaching practices, they felt trapped and helpless in the hands of school leaders who were very reluctant to change. While many new program ideas and technological advancements are made openly available to teachers on a regular basis, it is important to consider the education field as a whole and how it affects teachers' ability to realistically implement changes. By the year 2025, millennials will make up approximately 75% of the North American workforce. It is therefore important to examine generational differences among teachers and to understand how millennial teachers may be shaping the future of primary and secondary schools, whether by staying in or leaving the profession.

Keywords: 21st century skills, millennials, teacher attrition, tradition.
