Search results for: intelligent techniques
5706 Intelligent Chemistry Approach to Improvement of Oxygenates Analytical Method in Light Hydrocarbon by Multidimensional Gas Chromatography - FID and MS
Authors: Ahmed Aboforn
Abstract:
Butene-1 is an important raw material in polyethylene production; however, oxygenate impurities affect ethylene/butene-1 copolymers synthesized over titanium-magnesium-supported Ziegler-Natta catalysts. At the same time, petrochemical industries are challenged by poor quality of butene-1 and other C4-mix feedstocks, which translates into business impact and production losses. In addition, propylene product suffers from contamination by oxygenate components, causing lost production and plant upsets in polypropylene process plants. Multidimensional gas chromatography (MDGC) is an innovative analytical methodology used to separate complex samples, such as mixtures of different functional groups (hydrocarbons and oxygenates) with similar retention factors, by running the eluent through two or more columns instead of the customary single column. This analytical study strives to enhance the quality of the oxygenates analytical method, monitoring oxygenate concentrations with an accurate and precise method that utilizes multidimensional GC supported by a backflush technique and a flame ionization detector, which provide high-performance separation of hydrocarbons and oxygenates, and improving the minimum detection limit (MDL) to detect concentrations below 1.0 ppm. Different types of oxygenates (alcohols, aldehydes, ketones, esters and ethers) may be determined in other hydrocarbon streams, from C3 and C4-mix up to C12 mixtures, supported by a liquid-injection autosampler.
Keywords: analytical chemistry, gas chromatography, petrochemicals, oxygenates
Procedia PDF Downloads 83
5705 Enhancing Project Performance Forecasting using Machine Learning Techniques
Authors: Soheila Sadeghi
Abstract:
Accurate forecasting of project performance metrics is crucial for successfully managing and delivering urban road reconstruction projects. Traditional methods often rely on static baseline plans and fail to consider the dynamic nature of project progress and external factors. This research proposes a machine learning-based approach to forecast project performance metrics, such as cost variance and earned value, for each Work Breakdown Structure (WBS) category in an urban road reconstruction project. The proposed model utilizes time series forecasting techniques, including Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) networks, to predict future performance based on historical data and project progress. The model also incorporates external factors, such as weather patterns and resource availability, as features to enhance the accuracy of forecasts. By applying the predictive power of machine learning, the performance forecasting model enables proactive identification of potential deviations from the baseline plan, which allows project managers to take timely corrective actions. The research aims to validate the effectiveness of the proposed approach using a case study of an urban road reconstruction project, comparing the model's forecasts with actual project performance data. The findings of this research contribute to the advancement of project management practices in the construction industry, offering a data-driven solution for improving project performance monitoring and control.
Keywords: project performance forecasting, machine learning, time series forecasting, cost variance, earned value management
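For illustration, the time-series forecasting step can be sketched with a deliberately simplified AR(1) model, the simplest relative of ARIMA. The monthly cost-variance figures below are hypothetical, and the abstract's actual ARIMA/LSTM models with external features (weather, resource availability) are far richer than this sketch:

```python
# Minimal AR(1) forecaster for one WBS category's cost-variance series.
# Illustrative only -- not the paper's ARIMA/LSTM implementation.

def fit_ar1(series):
    """Least-squares fit of x[t] = a + b * x[t-1] on a univariate series."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var if var else 0.0
    return my - b * mx, b          # (intercept a, slope b)

def forecast(series, steps, model):
    a, b = model
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last        # roll the AR(1) recursion forward
        out.append(last)
    return out

# Hypothetical monthly cost-variance history (in $1000s) for one WBS category.
history = [2.0, 2.4, 2.9, 3.3, 3.8, 4.1, 4.5]
print(forecast(history, 3, fit_ar1(history)))
```

On this upward-trending toy series the fitted model extrapolates the trend; a real application would difference the series and add exogenous features before trusting any forecast.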
Procedia PDF Downloads 49
5704 Damage to Strawberries Caused by Simulated Transport
Authors: G. La Scalia, M. Enea, R. Micale, O. Corona, L. Settanni
Abstract:
The quality and condition of perishable products delivered to the market and their subsequent selling prices are directly affected by the care taken during harvesting and handling. Mechanical injury, in fact, occurs at all stages, from pre-harvest operations through post-harvest handling, packing and transport to the market. The main implications of this damage are a reduction in product quality and economic losses related to the shortened shelf life. For most perishable products, the shelf life is relatively short, and it is typically dictated by microbial growth related to the application of dynamic and static loads during transportation. This paper presents the correlation between vibration levels and microbiological growth on strawberries and woodland strawberries and detects the presence of volatile organic compounds (VOCs), in order to develop an intelligent logistic unit capable of monitoring VOCs using a specific sensor system. Fresh fruits were exposed to vibrations by means of a vibrating table in a temperature-controlled environment. Microbiological analyses were conducted on samples taken at different positions along the column of the crates. The values obtained were compared with control samples not exposed to vibrations, and the results show that different positions along the column influence the development of bacteria, yeasts and filamentous fungi.
Keywords: microbiological analysis, shelf life, transport damage, volatile organic compounds
Procedia PDF Downloads 425
5703 Intelligent Minimal Allocation of Capacitors in Distribution Networks Using Genetic Algorithm
Authors: S. Neelima, P. S. Subramanyam
Abstract:
A distribution system is an interface between the bulk power system and the consumers. Among these systems, the radial distribution system is popular because of its low cost and simple design. In distribution systems, the voltage at buses decreases with distance from the substation, and losses are high. The reason for the decrease in voltage and the high losses is an insufficient amount of reactive power, which can be provided by shunt capacitors. However, placing a capacitor of appropriate size is always a challenge. Thus, the optimal capacitor placement problem is to determine the location and size of capacitors to be placed in distribution networks in an efficient way to reduce power losses and improve the voltage profile of the system. For this purpose, a two-stage methodology is used in this paper. In the first stage, the load flow of the pre-compensated distribution system is carried out using the 'dimension reducing distribution load flow algorithm (DRDLFA)'. On the basis of this load flow, the potential locations for compensation are computed. In the second stage, a Genetic Algorithm (GA) technique is used to determine the optimal location and size of the capacitors such that the cost of the energy loss and the capacitor cost are minimized. The above method is tested on the IEEE 9 and 34 bus systems and compared with other methods in the literature.
Keywords: dimension reducing distribution load flow algorithm, DRDLFA, genetic algorithm, electrical distribution network, optimal capacitors placement, voltage profile improvement, loss reduction
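The second-stage search can be sketched as a minimal genetic algorithm: a chromosome assigns a capacitor bank size to each candidate bus, and fitness combines a loss cost with the capacitor investment cost. The bank sizes, reactive demands, and the toy quadratic loss model standing in for the DRDLFA load flow are all assumptions for illustration:

```python
# Toy GA for capacitor sizing at candidate buses. The cost() model is a
# stand-in for a full load-flow evaluation, not the paper's DRDLFA.
import random

random.seed(1)
BANKS = [0, 150, 300, 450, 600]     # available capacitor sizes (kvar), assumed
DEMAND = [520, 380, 610]            # hypothetical reactive demand per bus

def cost(chromosome):
    # Unmet reactive demand drives the loss cost; installed kvar drives the
    # capacitor cost. Weights are illustrative.
    loss = sum((d - c) ** 2 for d, c in zip(DEMAND, chromosome))
    return 0.001 * loss + 0.5 * sum(chromosome)

def evolve(generations=60, pop_size=20):
    pop = [[random.choice(BANKS) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DEMAND))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:          # mutation
                child[random.randrange(len(child))] = random.choice(BANKS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print(best, round(cost(best), 2))
```

The elitist survivor step guarantees the best solution found is never lost between generations, which is why even this tiny GA reliably beats the uncompensated case.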
Procedia PDF Downloads 390
5702 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties
Authors: P. T. Collett, M. Kershaw
Abstract:
Introduction: The discussion regarding which method of anesthesia provides better results for lower limb arthroplasty is a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) society has recommended that either technique can be used as part of a multimodal anesthetic regimen. A local study was performed to see if current anesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacements at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anesthetic technique, day-one opiate use, pain score, and length of stay. The data was collected from anesthetic charts and the pain team follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of those, 83% (n=75) received a spinal anesthetic and 17% (n=15) received a general anesthetic. For patients undergoing knee replacement under general anesthetic, the average day-one pain score was 2.29, versus 1.94 when a spinal anesthetic was performed. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistical significance between these scores. Day-one opiate usage was significantly higher in knee replacement patients who were given a general anesthetic (45.7 mg IV morphine equivalent) than in those who were operated on under spinal anesthetic (19.7 mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anesthetic techniques. Discussion: There was no significant difference in the day-one pain score between patients who received a general or spinal anesthetic for either knee or hip replacements. The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which makes it difficult to identify a significant difference between pain scores. Conclusion: There is currently little standardization between the different anesthetic approaches utilized in Nevill Hall Hospital. This is likely due to the lack of adherence to a standardized anesthetic regimen. In accordance with ERAS recommendations, a standard anesthetic protocol is a core component. The results of this study and the guidance from the ERAS society will support the implementation of a new health-board-wide ERAS protocol.
Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation
Procedia PDF Downloads 127
5701 Using Visualization Techniques to Support Common Clinical Tasks in Clinical Documentation
Authors: Jonah Kenei, Elisha Opiyo
Abstract:
Electronic health records, as a repository of patient information, are nowadays the most commonly used technology to record, store and review patient clinical records and perform other clinical tasks. However, the accurate identification and retrieval of relevant information from clinical records is a difficult task due to the unstructured nature of clinical documents, characterized in particular by a lack of clear structure. Medical practice therefore faces a challenge owing to the rapid growth of health information in electronic health records (EHRs), mostly in narrative text form, and it is becoming important to effectively manage the growing amount of data for a single patient. There is consequently a requirement to visualize electronic health records in a way that aids physicians in clinical tasks and medical decision-making. Applying text visualization techniques to unstructured clinical narrative texts is a new area of research that aims to provide better information extraction and retrieval to support clinical decision support in scenarios where the data generated continues to grow. Clinical datasets in EHRs offer a lot of potential for training accurate statistical models to classify facets of information, which can then be used to improve patient care and outcomes. However, in many clinical note datasets, the unstructured nature of clinical texts is a common problem. This paper examines the very issue of taking raw clinical texts and mapping them into meaningful structures that can support healthcare professionals utilizing narrative texts. Our work is the result of a collaborative design process that was aided by empirical data collected through formal usability testing.
Keywords: classification, electronic health records, narrative texts, visualization
Procedia PDF Downloads 118
5700 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping
Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting
Abstract:
Since vision system applications for autonomous purposes in industrial environments are intensely required, image recognition techniques have become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects and is integrated with a 7A6 Series Manipulator for automatic object-gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted as the convolutional neural network (CNN) structure for object classification and center-point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
Keywords: deep learning, image processing, convolutional neural network, YOLOv2, 7A6 series manipulator
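The coordinate-derivation step can be sketched as a pinhole back-projection: the bounding-box centre reported by the detector, together with the depth value at that pixel, yields a 3D point in the camera frame. The intrinsics below are placeholders, not the actual SR300 calibration or the paper's hand-eye transform:

```python
# Back-project a detected pixel (u, v) with depth (mm) into the camera frame.
# FX/FY/CX/CY are assumed placeholder intrinsics for illustration.

FX, FY = 615.0, 615.0      # assumed focal lengths in pixels
CX, CY = 320.0, 240.0      # assumed principal point

def pixel_to_camera(u, v, depth_mm):
    """Map pixel (u, v) at depth z (mm) to (x, y, z) in the camera frame."""
    z = depth_mm
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return x, y, z

# Hypothetical YOLO detection: box centre at (400, 300), depth 500 mm.
x, y, z = pixel_to_camera(400, 300, 500.0)
print(round(x, 1), round(y, 1), z)
```

A real grasping pipeline would then transform this camera-frame point into the manipulator's base frame using the calibrated camera-to-robot pose.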
Procedia PDF Downloads 250
5699 Intelligent Agent-Based Model for the 5G mmWave O2I Technology Adoption
Authors: Robert Joseph M. Licup
Abstract:
The deployment of the fifth-generation (5G) mobile system through mmWave frequencies is the new solution to the requirement of providing higher bandwidth readily available to all users. Usage patterns of mobile users have moved towards work-from-home or online-class set-ups because of the pandemic, and previous mobile technologies can no longer meet the high speed and bandwidth requirements, given the drastic shift of transactions to the home. The underutilized millimeter-wave (mmWave) frequencies are utilized by fifth-generation (5G) cellular networks to support multi-gigabit-per-second (Gbps) transmission. However, due to their short wavelengths, high path loss, directivity, blockage sensitivity, and narrow beamwidth are some of the technical challenges that need to be addressed. Different tools, technologies, and scenarios are explored to effectively support network design, accurate channel modeling, implementation, and deployment. However, there is a big challenge in how consumers will adopt this solution and maximize the benefits offered by 5G technology. This research proposes to study the intricacies of technology diffusion, individual attitudes, behaviors, and how technology adoption will be attained. An agent-based simulation model shaped by actual applications, the technology solution, and related literature was used to arrive at a computational model. The research examines the different attributes, factors, and intricacies that can affect each identified agent's move towards technology adoption.
Keywords: agent-based model, AnyLogic, 5G O2I, 5G mmWave solutions, technology adoption
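The core mechanism of an agent-based adoption model can be sketched very simply: each agent adopts the technology either spontaneously (innovation) or under peer influence (imitation), in the spirit of Bass-style diffusion. The parameters and population size below are assumptions for illustration, not values calibrated to the study's AnyLogic model:

```python
# Minimal agent-based diffusion sketch: each non-adopter flips to adopter
# with probability p + q * (current adopter share). Parameters are assumed.
import random

random.seed(7)
P_INNOVATE, Q_IMITATE, N, STEPS = 0.03, 0.4, 200, 30

adopted = [False] * N
history = []                       # adopter count after each step
for _ in range(STEPS):
    share = sum(adopted) / N       # peer influence is based on current share
    for i in range(N):
        if not adopted[i] and random.random() < P_INNOVATE + Q_IMITATE * share:
            adopted[i] = True
    history.append(sum(adopted))

print(history[-1], "adopters after", STEPS, "steps")
```

The adoption curve this produces is S-shaped: slow early uptake driven by innovators, then rapid imitation-driven growth, then saturation. A full model would additionally give each agent heterogeneous attitudes and network neighbours.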
Procedia PDF Downloads 108
5698 Assessment of Work-Related Stress and Its Predictors in Ethiopian Federal Bureau of Investigation in Addis Ababa
Authors: Zelalem Markos Borko
Abstract:
Work-related stress is a reaction that occurs when work pressures become excessive. Unless properly managed, stress leads to high employee turnover, decreased performance, illness and absenteeism. Yet little has been addressed regarding work-related stress and its predictors in the study area. Therefore, the objective of this study was to assess stress prevalence and its predictors in the study area. To that effect, a cross-sectional study was conducted on 281 employees of the Ethiopian Federal Bureau of Investigation using stratified random sampling techniques. Survey questionnaire scales were employed to collect data. Data were analyzed by percentage, Pearson correlation coefficients, simple linear regression, multiple linear regression, independent t-test and one-way ANOVA statistical techniques. In the present study, 13.9% of participants faced high stress, whereas 13.5% of participants faced low stress and the remaining 72.6% of officers experienced moderate stress. There is no significant group difference among workers due to age, gender, marital status, educational level, years of service or police rank. This study concludes that factors such as role conflict, performance over-utilization, role ambiguity, and qualitative and quantitative role overload together predict 39.6% of work-related stress. This indicates that 60.4% of the variation in stress is explained by other factors, so additional research should be done to identify further predictors of stress. To prevent occupational stress among police, the Ethiopian Federal Bureau of Investigation should develop strategies based on the factors that will help to develop stress-reduction management.
Keywords: work-related stress, Ethiopian federal bureau of investigation, predictors, Addis Ababa
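The "predict 39.6% of work-related stress" figure is a coefficient of determination (R²): the share of variance in stress scores that the regression on the five factors accounts for, with the complement (60.4%) left to other influences. A minimal sketch of that computation, using made-up observed and model-predicted stress scores rather than the study's data:

```python
# R-squared as the share of variance explained by a regression's predictions.
# The observed/predicted values below are hypothetical.

def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)        # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

obs = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5]       # hypothetical stress scores
pred = [2.4, 3.1, 2.9, 3.6, 3.3, 2.4]      # hypothetical regression output
r2 = r_squared(obs, pred)
print(f"R^2 = {r2:.3f}; unexplained share = {1 - r2:.3f}")
```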
Procedia PDF Downloads 70
5697 Characterization and Monitoring of the Yarn Faults Using Diametric Fault System
Authors: S. M. Ishtiaque, V. K. Yadav, S. D. Joshi, J. K. Chatterjee
Abstract:
The DIAMETRIC FAULTS system has been developed to capture bi-directional images of yarn continuously and sequentially, and to provide a detailed classification of faults. A novel mathematical framework developed on the acquired bi-directional images forms the basis of fault classification into four broad categories, namely Thick1, Thick2, Thin and Normal Yarn. A discretised version of the Radon transformation has been used to convert the bi-directional images into one-dimensional signals. Images were divided into training and test sample sets. A Karhunen-Loève Transformation (KLT) basis is computed for the signals from the images in the training set for each fault class, taking the six highest-energy eigenvectors. The fault class of a test image is identified by taking the Euclidean distance of its signal from its projection on the KLT basis for each sample realization and fault class in the training set. Euclidean distance applied using various techniques is used for classifying an unknown fault class, and an accuracy of about 90% is achieved in detecting the correct fault class. The four broad fault classes were further subdivided into four subgroups based on user-set boundary limits for fault length and fault volume; the fault cross-sectional area and the fault length define the total volume of a fault. A distinct distribution of faults is found in terms of their volume and physical dimensions, which can be used for monitoring yarn faults. It has been shown from the configuration-based characterization and classification that spun yarn faults arising out of mass variation exhibit distinct characteristics in terms of their contours, sizes and shapes, apart from their frequency of occurrence.
Keywords: Euclidean distance, fault classification, KLT, Radon transform
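The final decision step can be sketched as a nearest-template Euclidean classification: each yarn image, after the Radon transform, becomes a 1-D signal, and an unknown signal is assigned to the closest fault class. The KLT basis computation is omitted here; the short class "templates" below are hypothetical stand-ins for the projections onto the top-six eigenvectors:

```python
# Nearest-template classification by Euclidean distance over 1-D signals.
# TEMPLATES are hypothetical per-class mean signals, not real KLT projections.
import math

TEMPLATES = {
    "Thick1": [5.0, 4.0, 1.0],
    "Thick2": [8.0, 7.0, 2.0],
    "Thin":   [1.0, 0.5, 0.2],
    "Normal": [3.0, 3.0, 3.0],
}

def classify(signal):
    def dist(template):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(signal, template)))
    # Pick the fault class whose template is closest in Euclidean distance.
    return min(TEMPLATES, key=lambda c: dist(TEMPLATES[c]))

print(classify([4.8, 4.2, 1.1]))   # signal close to the Thick1 template
```

In the actual scheme the distance is measured between a signal and its reconstruction from each class's KLT basis, so a class-specific basis that reconstructs the signal well yields a small distance.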
Procedia PDF Downloads 265
5696 Spatial Assessment of Creek Habitats of Marine Fish Stock in Sindh Province
Authors: Syed Jamil H. Kazmi, Faiza Sarwar
Abstract:
The Indus delta of Sindh Province forms the largest creeks zone of Pakistan. The Sindh coast starts from the mouth of the Hab River and terminates at the Sir Creek area. In this paper, we have considered the major creeks from the site of Bin Qasim Port in Karachi to the jetty of Keti Bunder in Thatta District. A general decline in the mangrove forest has been observed over the span of the last 25 years. Unprecedented human interventions have badly damaged the creeks habitat; these include haphazard urban development, industrial and sewage disposal, illegal cutting of mangrove forests, and reduced and inconsistent freshwater flow, mainly from the Jhang and Indus rivers. These activities not only harm the creeks habitat but affect the fish stock substantially. Fishing is the main livelihood of coastal people, but under the above-mentioned threats it is also under enormous pressure, resulting in unchecked overutilization of fish resources. This pressure becomes almost unbearable when combined with deleterious fishing methods, uncontrolled fleet size, increased trash and by-catch of juveniles, and illegal mesh sizes. Along with these anthropogenic interventions, the study area lies in the red zone of tropical cyclones and active seismicity, causing floods and sea intrusion, damaging mangrove forests and devastating fish stock. In order to sustain the natural resources of the Indus creeks, this study was initiated with the support of FAO, WWF and NIO; the main purpose was to develop a geospatial dataset for fish stock assessment. The study was spread over a year (2013-14) on a monthly basis and mainly included a detailed fish stock survey, water analysis and a few other environmental analyses. The environmental analysis also included habitat classification of the study area, carried out through remote sensing techniques for a 22-year time series (1992-2014).
Furthermore, out of 252 species collected, fifteen species from estuarine and marine groups were short-listed to measure the weight, health and growth of fish species at each creek, with GIS data analyzed through the SPSS system. Habitat suitability analysis was also conducted by assessing surface topography and aspect derivation through different GIS techniques. The output variables were then overlaid in a GIS system to measure creek productivity, which provided results in the following classes: extremely productive, highly productive, productive, moderately productive and less productive. This study demonstrates the use of geospatial tools for the evaluation of fisheries resources and risk-zone mapping of creek habitats. It has also been identified that geospatial technologies are highly beneficial for identifying areas of high environmental risk in the Sindh creeks, and it has been clearly discovered that creeks with high rugosity are more productive than creeks with low levels of rugosity. The study area has immense potential to boost the economy of Pakistan in terms of fish export if geospatial techniques are implemented instead of conventional techniques.
Keywords: fish stock, geo-spatial, productivity analysis, risk
Procedia PDF Downloads 245
5695 Design of Smart Urban Lighting by Using Social Sustainability Approach
Authors: Mohsen Noroozi, Maryam Khalili
Abstract:
Creating cities, objects and spaces that are economically, environmentally and socially sustainable, and which meet the challenges of social interaction and generational change, will be one of the biggest tasks of designers. Social sustainability is about how individuals, communities and societies live with each other and set out to achieve the objectives of the development model they have chosen for themselves. Urban lighting, as one of the most important elements of urban furniture that people constantly interact with in public spaces, can be a significant object for designers. Making urban lighting intelligent through the Internet of Things makes it more interactive in public environments. It can encourage individuals to carry out appropriate behaviors and provides them with social awareness through new interactions. The greatest strength of this technology is its strong impact on many aspects of everyday life and users' behaviors. The analytical phase of the research is based on a multiple-method survey strategy. The smart lighting proposed in this paper is an urban lighting design based on results obtained from a collective point of view about social sustainability. Referring to behavioral design methods, the social behaviors of people have been studied. Data show that people demand a deeper experience of social participation, safety perception and energy saving with the meaningful use of interactive and colourful lighting effects. By using intelligent technology, some suggestions are provided in the field of future lighting to consider new forms of social sustainability.
Keywords: behavior pattern, internet of things, social sustainability, urban lighting
Procedia PDF Downloads 194
5694 Utility of Geospatial Techniques in Delineating Groundwater-Dependent Ecosystems in Arid Environments
Authors: Mangana B. Rampheri, Timothy Dube, Farai Dondofema, Tatenda Dalu
Abstract:
Identifying and delineating groundwater-dependent ecosystems (GDEs) is critical to a sound understanding of their spatial distribution as well as of groundwater allocation. However, this information is inadequately understood due to the limited data available for most areas of concern. Thus, this study aims to address this gap using remotely sensed data, the analytical hierarchy process (AHP) and in-situ data to identify and delineate GDEs in the Khakea-Bray Transboundary Aquifer. Our study developed a GDE index that integrates seven explanatory variables, namely the Normalized Difference Vegetation Index (NDVI), Modified Normalized Difference Water Index (MNDWI), land use and land cover (LULC), slope, Topographic Wetness Index (TWI), flow accumulation and curvature. The GDE map was delineated using the weighted overlay tool in the ArcGIS environment. The map was spatially classified into two classes, namely GDEs and non-GDEs. The results showed that only 1.34% (721.91 km²) of the area is characterised by GDEs. Finally, groundwater level (GWL) data were used for validation through correlation analysis. Our results indicated that: 1) GDEs are concentrated in the northern, central, and south-western parts of our study area, and 2) the validation showed that the GDE classes do not overlap with the GWLs recorded in the 22 boreholes found in the area. Nevertheless, the results show that delineation of GDEs in the study area is possible using remote sensing and GIS techniques along with AHP. The results of this study further contribute to identifying and delineating priority areas where appropriate water conservation programs, as well as strategies for sustainable groundwater development, can be implemented.
Keywords: analytical hierarchy process (AHP), explanatory variables, groundwater-dependent ecosystems (GDEs), Khakea-Bray transboundary aquifer, Sentinel-2
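The weighted-overlay step can be sketched as follows: every pixel receives a GDE index equal to the weighted sum of its reclassified explanatory layers, with weights derived from an AHP pairwise comparison, and a threshold splits the map into GDE and non-GDE. The weights, the 1-5 reclassified scores, and the tiny four-pixel "rasters" below are illustrative assumptions, not the study's actual values:

```python
# Weighted overlay of reclassified layers into a GDE / non-GDE map.
# WEIGHTS stand in for AHP-derived weights; LAYERS are toy 4-pixel rasters.

WEIGHTS = {"ndvi": 0.30, "mndwi": 0.25, "twi": 0.20,
           "slope": 0.15, "curvature": 0.10}

LAYERS = {                      # hypothetical reclassified scores (1-5)
    "ndvi":      [5, 2, 4, 1],
    "mndwi":     [4, 1, 5, 1],
    "twi":       [5, 2, 3, 2],
    "slope":     [4, 3, 4, 1],
    "curvature": [3, 2, 4, 2],
}

def gde_index(pixel):
    """Weighted sum of the layer scores at one pixel."""
    return sum(WEIGHTS[k] * LAYERS[k][pixel] for k in WEIGHTS)

def delineate(threshold=3.5):
    n = len(next(iter(LAYERS.values())))
    return ["GDE" if gde_index(p) >= threshold else "Non-GDE" for p in range(n)]

print(delineate())
```

The AHP's role, not shown, is to turn pairwise judgements of variable importance into the weight vector, which should sum to 1 before the overlay is applied.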
Procedia PDF Downloads 108
5693 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data-type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction, and IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart IoT applications, there is a necessity for intelligent processing and analysis of data, and our approach is to secure it. We train and compare several machine learning models to accurately predict attacks and anomalies on IoT systems, considering IoT applications, with ANOVA-based feature selection yielding smaller prediction models with which to evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, with satisfactory test accuracy and fast detection. The evaluation of ML metrics includes precision, recall, F1 score, FPR, NPV, geometric mean, MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with the least prediction time, with an accuracy of 99.98%.
Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
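The ANOVA-based selection step can be sketched directly: score each traffic feature by its one-way F-statistic between the classes and keep only the top-scoring features before training the classifiers. The two-feature dataset below is synthetic, and the downstream classifiers (KNN, SVM, NB, DT, RF) are not shown:

```python
# ANOVA F-statistic feature scoring on a toy two-class IoT traffic dataset.
# Feature names and values are synthetic, for illustration only.

def f_statistic(groups):
    """One-way ANOVA F-statistic for one feature split into per-class groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic features: "pkt_rate" separates normal from attack; "ttl" does not.
normal = {"pkt_rate": [10, 12, 11, 13], "ttl": [64, 63, 64, 65]}
attack = {"pkt_rate": [95, 90, 99, 97], "ttl": [64, 65, 63, 64]}

scores = {f: f_statistic([normal[f], attack[f]]) for f in normal}
top = max(scores, key=scores.get)
print(top, round(scores[top], 1))
```

A high F-statistic means between-class variance dwarfs within-class variance, so the feature discriminates well; features with near-zero F can be dropped to shrink the models, as the abstract describes.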
Procedia PDF Downloads 125
5692 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing these terms as random effects in the proposed models, to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states.
Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
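The contrast with fixed-form regression can be illustrated with the simplest possible nonparametric estimator: a k-nearest-neighbours lookup that predicts ln(PGA) directly from (magnitude, log distance) without any pre-defined coefficients. The six records below are made up, and the study's 4,528-record database and its ANN/RF/SVM models with random effects are far richer than this sketch:

```python
# k-NN prediction of ln(PGA) from (magnitude, log hypocentral distance).
# RECORDS are hypothetical, not drawn from the study's database.
import math

# (magnitude, hypocentral distance km, ln PGA in g)
RECORDS = [(3.0, 10, -5.5), (3.5, 20, -5.2), (4.0, 15, -4.1),
           (4.5, 30, -4.0), (5.0, 25, -3.2), (5.5, 60, -3.5)]

def predict_ln_pga(mag, dist_km, k=3):
    """Average ln(PGA) of the k records nearest in (M, ln R) feature space."""
    feats = [((r[0], math.log(r[1])), r[2]) for r in RECORDS]
    q = (mag, math.log(dist_km))
    feats.sort(key=lambda fr: (fr[0][0] - q[0]) ** 2 + (fr[0][1] - q[1]) ** 2)
    return sum(y for _, y in feats[:k]) / k

print(round(predict_ln_pga(4.8, 28), 2))
```

Even this crude estimator exhibits magnitude scaling and distance decay because those patterns are in the data, which is the point the abstract makes about data-driven models; a real implementation would also need the random-effects adjustment for event and site terms.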
Procedia PDF Downloads 122
5691 The Effect of Incorporating Animal Assisted Interventions with Trauma Focused Cognitive Behavioral Therapy
Authors: Kayla Renteria
Abstract:
This study explored the role animal-assisted psychotherapy (AAP) can play in treating Post-Traumatic Stress Disorder (PTSD) when incorporated into Trauma-informed cognitive behavioral therapy (TF-CBT). A review of the literature was performed to show how incorporating AAP could benefit TF-CBT since this treatment model often presents difficulties, such as client motivation and avoidance of the exposure element of the intervention. In addition, the fluidity of treatment goals during complex trauma cases was explored, as this issue arose in the case study. This study follows the course of treatment of a 12-year-old female presenting with symptoms of PTSD. Treatment consisted of traditional components of the TF-CBT model, with the added elements of AAP to address typical treatment obstacles in TF-CBT. A registered therapy dog worked with the subject in all sessions throughout her treatment. The therapy dog was incorporated into components such as relaxation and coping techniques, narrative therapy techniques, and psychoeducation on the cognitive triangle. Throughout the study, the client’s situation and clinical needs required the therapist to switch goals to focus on current safety and stability. The therapy dog provided support and neurophysiological benefits to the client through AAP during this shift in treatment. The client was assessed quantitatively using the Child PTSD Symptom Scale Self Report for DSM-5 (CPSS-SR-5) before and after therapy and qualitatively through a feedback form given after treatment. The participant showed improvement in CPSS-SR-V scores, and she reported that the incorporation of the therapy animal improved her therapy. The results of this study show how the use of AAP provided the client a solid, consistent relationship with the therapy dog that supported her through processing various types of traumas. 
Implications of the treatment results and directions for future research are discussed.
Keywords: animal-assisted therapy, trauma-focused cognitive behavioral therapy, PTSD in children, trauma treatment
Procedia PDF Downloads 218
5690 Understanding Cyber Kill Chains: Optimal Allocation of Monitoring Resources Using Cooperative Game Theory
Authors: Roy. H. A. Lindelauf
Abstract:
Cyberattacks are complex processes consisting of multiple interwoven tasks conducted by a set of agents. Interdiction of and defense against such attacks often rely on cyber kill chain (CKC) models. A CKC is a framework that tries to capture the actions taken by a cyber attacker. There exists a growing body of literature on CKCs. Most of this work either (a) describes the CKC with respect to one or more specific cyberattacks or (b) discusses the tools and technologies used by the attacker at each stage of the CKC. Defenders, facing scarce resources, have to decide where to allocate those resources given the CKC and partial knowledge of the tools and techniques attackers use. In this presentation, CKCs are analyzed through the lens of covert projects, i.e., interrelated tasks that have to be conducted by agents (human and/or computer) with the aim of going undetected. Various aspects of covert project models have been studied extensively in the operations research and game theory domains; think, for instance, of resource-limited interdiction actions that maximally delay the completion time of a weapons project. This presentation investigates both cooperative and non-cooperative game-theoretic covert project models and elucidates their relation to CKC modelling. To view a CKC as a covert project, each step in the CKC is broken down into tasks, and each player is capable of executing a subset of the tasks. Additionally, task inter-dependencies are represented by a schedule. Using multi-glove cooperative games, it is shown how a defender can optimize the allocation of his scarce resources (what, where and how to monitor) against an attacker scheduling a CKC. The study presents and compares several cooperative game-theoretic solution concepts as metrics for assigning resources to the monitoring of agents.
Keywords: cyber defense, cyber kill chain, game theory, information warfare techniques
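The multi-glove cooperative games mentioned in the abstract can be made concrete with a small sketch. The code below is not the presentation's model; it is a minimal pure-Python computation of the Shapley value (a standard cooperative solution concept) for a classic glove game with two left-glove holders and one right-glove holder, where a coalition's worth is the number of matched pairs it can assemble.

```python
import math
from itertools import permutations

def shapley_values(players, value):
    """Shapley value: average each player's marginal contribution
    over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            phi[p] += value(coalition) - before
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in phi.items()}

# Glove game: L1 and L2 each hold a left glove, R holds a right glove;
# a coalition is worth the number of complete pairs it can assemble.
def glove_value(coalition):
    return min(len(coalition & {"L1", "L2"}), len(coalition & {"R"}))

phi = shapley_values(["L1", "L2", "R"], glove_value)
```

The scarce right-glove holder receives phi["R"] = 2/3 while each left-glove holder receives 1/6, illustrating how such solution concepts can weight monitoring resources toward the agents whose capabilities are hardest to substitute.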
Procedia PDF Downloads 140
5689 Towards a Smart Irrigation System Based on Wireless Sensor Networks
Authors: Loubna Hamami, Bouchaib Nassereddine
Abstract:
Due to the evolution of technologies, the need to observe and manage hostile environments, and their reduction in size, wireless sensor networks (WSNs) are becoming essential in most fields of life. WSNs enable us to change the way we live, work and interact with the physical environment. The agricultural sector is one such sector, where WSNs are successfully used to obtain various benefits. For successful agricultural production, the irrigation system is one of the most important factors, and it plays a strategic role in the agricultural process. However, it is also considered the largest consumer of freshwater. Water scarcity, drought and the waste of the limited available water resources are among the critical issues that affect almost all sectors, notably agricultural services. These facts are leading governments around the world to rethink water saving and to reduce the volume of water used; this requires the development of irrigation practices towards a complete and independent system that manages irrigation more efficiently. Consequently, the adoption of WSNs in irrigation systems has benefited the development of the agricultural sector. In this work, we propose a prototype for a complete and intelligent irrigation system based on wireless sensor networks, and we present and discuss the design of this prototype. The latter aims at saving water, energy and time. The proposed prototype controls the irrigation water system by monitoring soil temperature, soil moisture and weather conditions to estimate the water requirements of each plant.
Keywords: precision irrigation, sensor, wireless sensor networks, water resources
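As a hypothetical illustration of the control logic such a prototype might run on each node (the thresholds, units and rules below are invented for illustration, not the authors' design), sensor readings can be mapped to an irrigation decision whose duration grows with the soil-moisture deficit:

```python
def irrigation_minutes(soil_moisture_pct, soil_temp_c, rain_expected,
                       target_moisture_pct=35.0, minutes_per_pct=2.0):
    """Rule-based sketch: irrigate only when the soil is dry, no rain is
    forecast, and the soil is above freezing; the duration is proportional
    to the moisture deficit below the target level."""
    if rain_expected or soil_temp_c <= 0.0:
        return 0.0
    deficit = target_moisture_pct - soil_moisture_pct
    if deficit <= 0.0:
        return 0.0
    return deficit * minutes_per_pct
```

For example, a node reading 25% moisture at 18 °C with no rain forecast would irrigate for 20 minutes, while the same reading with rain expected would skip the cycle and save that water.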
Procedia PDF Downloads 153
5688 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression
Authors: J. S. Saini, P. P. K. Sandhu
Abstract:
The multi-hop nature of wireless mesh networks and the rapid growth of throughput demands result in multi-channel and multi-radio structures in mesh networks, but co-channel interference reduces the total throughput, specifically in multi-hop networks. Quality of Service (QoS) refers to a vast collection of networking technologies and techniques that guarantee the ability of a network to make available the desired services with predictable results. QoS can be directed at a network interface, towards a specific server's or router's performance, or at specific applications. Due to interference among the various transmissions, QoS routing in multi-hop wireless networks is a formidable task; in a multi-channel wireless network, two transmissions using the same channel may interfere with each other. This paper considers the Destination Sequenced Distance Vector (DSDV) routing protocol to locate a secure and optimised path. The proposed technique also utilizes Lempel–Ziv–Welch (LZW) based lossless data compression and intra-cluster data aggregation to enhance communication between the source and the destination. Clustering makes it possible to aggregate multiple packets and to locate a single route using the clusters, improving intra-cluster data aggregation. LZW-based lossless data compression reduces the data packet size, so that less energy is consumed, thus increasing the network QoS. The MATLAB tool has been used to evaluate the effectiveness of the projected technique. The comparative analysis has shown that the proposed technique outperforms the existing techniques.
Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control
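The LZW stage described above can be sketched in a few lines. The version below is a textbook byte-oriented LZW coder, not the paper's MATLAB implementation, and it shows the mechanism the abstract relies on: repeated packet content shrinks before transmission because recurring byte sequences are replaced by dictionary codes.

```python
def lzw_compress(data):
    """Encode a byte string as a list of dictionary codes (classic LZW)."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = len(table)      # learn the new sequence
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    """Rebuild the byte string, reconstructing the dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        if k in table:
            entry = table[k]
        elif k == len(table):           # the special "cScSc" corner case
            entry = w + w[:1]
        else:
            raise ValueError("bad LZW code")
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

On repetitive payloads the code list is shorter than the input, which is exactly the property that lets a cluster head spend less transmission energy per aggregated packet.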
Procedia PDF Downloads 338
5687 E-Learning Recommender System Based on Collaborative Filtering and Ontology
Authors: John Tarus, Zhendong Niu, Bakhti Khadidja
Abstract:
In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments, providing relevant recommendations to online learners. E-learning recommenders continue to play an increasing educational role in aiding learners to find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level and study level. Conventional recommendation techniques such as collaborative filtering and content-based filtering deal with only two types of entities, namely users and items with their ratings. These conventional recommender systems do not take the learner characteristics into account in their recommendation process and therefore cannot make accurate and personalized recommendations in an e-learning environment. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate the learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system at the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that our proposed recommendation technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
Keywords: collaborative filtering, e-learning, ontology, recommender system
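The collaborative-filtering half of the proposal can be sketched as follows. This is a generic user-based predictor with mean-centred, Pearson-style similarity weighting, shown only to illustrate how ratings are predicted; the paper's actual technique additionally folds learner-ontology features into this process, which the sketch omits. The user and item names are invented.

```python
import math

def predict_rating(ratings, user, item):
    """User-based CF: the target user's mean rating plus a similarity-weighted
    average of other users' mean-centred ratings for the item."""
    means = {u: sum(r.values()) / len(r) for u, r in ratings.items()}
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        common = [i for i in ratings[user] if i in r]
        if not common:
            continue
        a = [ratings[user][i] - means[user] for i in common]
        b = [r[i] - means[other] for i in common]
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        if na == 0.0 or nb == 0.0:
            continue                       # no usable rating variance
        sim = sum(x * y for x, y in zip(a, b)) / (na * nb)
        num += sim * (r[item] - means[other])
        den += abs(sim)
    return means[user] + (num / den if den else 0.0)

# Toy data: ann's tastes track bob's, while carl rates everything low.
ratings = {
    "ann":  {"i1": 5, "i2": 4},
    "bob":  {"i1": 5, "i2": 4, "i3": 5},
    "carl": {"i1": 1, "i2": 1, "i3": 1},
}
p = predict_rating(ratings, "ann", "i3")
```

Because ann's ratings co-vary with bob's, the predicted rating for the unseen item i3 lands near bob's high rating rather than carl's low one; an ontology-aware variant would additionally boost neighbours whose learning style matches ann's.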
Procedia PDF Downloads 380
5686 Management of Cultural Heritage: Bologna Gates
Authors: Alfonso Ippolito, Cristiana Bartolomei
Abstract:
A growing demand is felt today for realistic 3D models enabling the cognition and popularization of historical-artistic heritage. Evaluation and preservation of cultural heritage are inextricably connected with the innovative processes of gaining, managing and using knowledge. The development and refinement of techniques for acquiring and elaborating photorealistic 3D models have made them pivotal elements for popularizing information about objects on the scale of architectonic structures.
Keywords: cultural heritage, databases, non-contact survey, 2D-3D models
Procedia PDF Downloads 423
5685 Sustainability in Retaining Wall Construction with Geosynthetics
Authors: Sateesh Kumar Pisini, Swetha Priya Darshini, Sanjay Kumar Shukla
Abstract:
This paper presents a research study on sustainability in the construction of retaining walls using geosynthetics. Sustainable construction is a way for the building and infrastructure industry to move towards achieving sustainable development, taking into account environmental, socioeconomic and cultural issues. Geotechnical engineering, being very resource intensive, warrants an environmental sustainability study, but a quantitative framework for assessing the sustainability of geotechnical practices, particularly at the planning and design stages, does not exist. In geotechnical projects, the major economic issues to be addressed are the design and construction of stable slopes and retaining structures within space constraints. In this paper, quantitative indicators for assessing the environmental sustainability of a retaining wall with geosynthetics are compared with those of a conventional concrete retaining wall through life cycle assessment (LCA). Geosynthetics can make a real difference in sustainable construction techniques and contribute to development, in developing countries in particular. Their imaginative application can result in considerable cost savings over the use of conventional designs and materials. The acceptance of geosynthetics in reinforced retaining wall construction has been triggered by a number of factors, including aesthetics, reliability, simple construction techniques, good seismic performance, and the ability to tolerate large deformations without structural distress. A reinforced retaining wall with geosynthetics is the most cost-effective and eco-friendly solution compared with traditional concrete retaining wall construction. This paper presents an analysis of the theme of sustainability applied to the design and construction of traditional concrete retaining walls, and it presents a cost-effective and environmental solution using geosynthetics.
Keywords: sustainability, retaining wall, geosynthetics, life cycle assessment
Procedia PDF Downloads 2060
5684 Methylene Blue Removal Using NiO nanoparticles-Sand Adsorption Packed Bed
Authors: Nedal N. Marei, Nashaat Nassar
Abstract:
Many treatment techniques have been used to remove soluble pollutants from wastewater, such as dyes and metal ions, which can be found in large amounts in the used water of the textile and tannery industries. The effluents from these industries are complex, containing a wide variety of dyes and other contaminants, such as dispersants, acids, bases, salts, detergents, humectants, oxidants, and others. These techniques can be divided into physical, chemical, and biological methods. Adsorption has been developed as an efficient method for the removal of heavy metals from contaminated water and soil, and it is now recognized as an effective method for the removal of both organic and inorganic pollutants from wastewater. Nanosized materials are new functional materials, which offer a high surface area and have emerged as effective adsorbents. Nano alumina is one of the most important ceramic materials, widely used as an electrical insulator, presenting exceptionally high resistance to chemical agents, as well as giving excellent performance as a catalyst for many chemical reactions, in microelectronics, membrane applications, and water and wastewater treatment. In this study, methylene blue (MB) dye has been used as a model dye of textile wastewater in order to prepare a synthetic MB wastewater. NiO nanoparticles were added in a small percentage to the sand packed-bed adsorption columns to remove the MB from the synthetic textile wastewater. Moreover, different parameters have been evaluated: the flow of the synthetic wastewater, pH, height of the bed, and the percentage of NiO relative to the sand in the packed material. Different mathematical models were employed to find the proper model that describes the experimental data and to help analyze the mechanism of MB adsorption. This study will provide a good understanding of dye adsorption using metal oxide nanoparticles in the classical sand bed.
Keywords: adsorption, column, nanoparticles, methylene
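As a sketch of the kind of model fitting the abstract mentions (the model choice, parameter values and data below are illustrative assumptions, not the study's results), equilibrium adsorption data are commonly screened against the Langmuir isotherm q_e = q_max K_L C_e / (1 + K_L C_e), whose linearised form C_e/q_e = C_e/q_max + 1/(K_L q_max) can be fitted by ordinary least squares:

```python
def fit_langmuir(ce, qe):
    """Least-squares fit of the linearised Langmuir isotherm.
    Regress y = Ce/qe on x = Ce: slope = 1/qmax, intercept = 1/(KL*qmax)."""
    xs = ce
    ys = [c / q for c, q in zip(ce, qe)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qmax = 1.0 / slope
    kl = slope / intercept          # since KL = 1 / (intercept * qmax)
    return qmax, kl

# Synthetic equilibrium data generated from qmax = 10 mg/g, KL = 0.5 L/mg
ce = [1.0, 2.0, 4.0, 6.0, 8.0, 10.0]
qe = [10.0 * 0.5 * c / (1.0 + 0.5 * c) for c in ce]
qmax, kl = fit_langmuir(ce, qe)
```

On noise-free synthetic data the fit recovers the generating parameters exactly; with real column data, candidate isotherm and breakthrough models would be compared by their residuals in the same way.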
Procedia PDF Downloads 269
5683 Analyzing Use of Figurativeness, Visual Elements, Allegory, Scenic Imagery as Support System in Punjabi Contemporary Theatre for Escaping Censorship
Authors: Shazia Anwer
Abstract:
This paper discusses an unusual form of resistance in theatre against the censorship board in Pakistan. An atypical approach to dramaturgy created massive space for performers and audiences to integrate and communicate. Social and religious absolutes create suffocation in Pakistani society, and strict control over all fine and performing arts has made art political; in response, contemporary dramatics has started an amalgamated theatre to avoid censorship. Contemporary Punjabi theatre techniques are directly dependent on human cognition. The idea of indirect thought processing is not unique, but it depends on the spectators. The paper gives an account of these techniques and their use for conveying specific messages to audiences. For the dramaturge of today, theatre space is an expression representing a linguistic formulation that includes qualities of experimental and non-traditional use of classical theatrical space, in the context of fulfilling the concept of open theatre. The paper explains the transformation of the theatrical experience into an event where the actor and the audience co-exist and co-experience the dramatic event. The denial of the existence of the fourth wall made two-way communication possible. The paper elaborates that previously marginalized genres such as naach, jugat and miras are extensively included to counter the censorship board. Figurativeness, visual elements, allegory and scenic imagery are the basic support system of contemporary Punjabi theatre. The body of the actor is used as a source of non-verbal communication and as an escape from traditional theatrical space, which by every means has every element that can be controlled and reprimanded by the controlling authority.
Keywords: communication, Punjabi theatre, figurativeness, censorship
Procedia PDF Downloads 134
5682 Cutting Plane Methods for Integer Programming: NAZ Cut and Its Variations
Authors: A. Bari
Abstract:
Integer programming is a branch of mathematical programming techniques in operations research in which some or all of the variables are required to be integer valued. Various cuts have been used to solve these problems. We have developed cuts known as the NAZ cut and the A-T cut to solve integer programming problems. These cuts are used to reduce the feasible region and then reach the optimal solution in a minimum number of steps.
Keywords: integer programming, NAZ cut, A-T cut, cutting plane method
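The NAZ and A-T cuts themselves are not reproduced here; as a generic illustration of the cutting-plane idea they belong to, the sketch below solves a tiny two-variable LP relaxation by brute-force vertex enumeration, observes a fractional optimum, and adds a Chvátal-Gomory rounding cut that removes the fractional point while keeping every integer-feasible solution.

```python
from itertools import combinations

def lp_max(c, constraints, eps=1e-9):
    """Brute-force 2-variable LP: enumerate intersections of constraint
    boundaries a.x = b and return the feasible vertex maximising c.x.
    Each constraint is ((a1, a2), b) meaning a1*x + a2*y <= b."""
    best_val, best_x = None, None
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue                     # parallel boundaries, no vertex
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-7 for a, b in constraints):
            val = c[0] * x[0] + c[1] * x[1]
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

# max y  s.t.  3x + 2y <= 6,  -3x + 2y <= 0,  x >= 0,  y >= 0
cons = [((3, 2), 6), ((-3, 2), 0), ((-1, 0), 0), ((0, -1), 0)]
val0, _ = lp_max((0, 1), cons)           # LP relaxation: fractional y = 1.5
# Summing the first two rows gives 4y <= 6, i.e. y <= 1.5; rounding the
# right-hand side down yields the valid cut y <= 1 (every integer point
# satisfies it, but the fractional vertex does not).
val1, _ = lp_max((0, 1), cons + [((0, 1), 1)])
```

One added inequality shrinks the feasible region enough that the LP bound drops from 1.5 to the integer optimum 1; cuts such as NAZ aim to reach that point in as few iterations as possible.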
Procedia PDF Downloads 364
5681 Improvement of the Robust Proportional–Integral–Derivative (PID) Controller Parameters for Controlling the Frequency in the Intelligent Multi-Zone System at the Present of Wind Generation Using the Seeker Optimization Algorithm
Authors: Roya Ahmadi Ahangar, Hamid Madadyari
Abstract:
The seeker optimization algorithm (SOA) is increasingly gaining popularity among researchers due to its effectiveness in solving some real-world optimization problems. This paper provides a load-frequency control method based on the SOA for removing oscillations in the power system. A three-zone power system comprises a thermal zone, a hydraulic zone and a wind zone, each equipped with robust proportional-integral-derivative (PID) controllers. The simulation results indicate that load-frequency changes in the wind zone of the multi-zone system are damped in a short period of time, and during the oscillation period the oscillation amplitude is not significant. The results emphasize that the PID controller designed using the seeker optimization algorithm is robust and damps oscillations better than the traditional PID controller. The proposed controller's performance has been compared to that of PID controllers tuned with the Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and Artificial Bee Colony (ABC) algorithms in order to show the superior capability of the proposed SOA in tuning the PID controller. The simulation results emphasize the better performance of the SOA-optimized PID controller compared to the PID controllers optimized with the PSO, GA and ABC algorithms.
Keywords: load-frequency control, multi zone, robust PID controller, wind generation
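To make the tuning loop concrete, here is a heavily simplified stand-in: a first-order plant instead of the paper's multi-zone power system, and plain random search instead of the seeker optimization algorithm. The structure is the same, though: each candidate PID gain set is scored by simulating the closed loop and integrating the absolute error, and the optimizer keeps the best-scoring gains.

```python
import random

def pid_cost(kp, ki, kd, setpoint=1.0, tau=0.5, dt=0.01, steps=1000):
    """Simulate a PID loop around the first-order plant dy/dt = (u - y)/tau
    with explicit Euler steps; return the integrated absolute error."""
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau
        prev_err = err
        cost += abs(err) * dt
    return cost

def tune_random(trials=100, seed=0):
    """Naive random search over a gain box; a placeholder for SOA/PSO/GA/ABC,
    which search the same space with smarter update rules."""
    rng = random.Random(seed)
    best_cost, best_gains = None, None
    for _ in range(trials):
        gains = (rng.uniform(0.1, 10.0),   # kp
                 rng.uniform(0.0, 5.0),    # ki
                 rng.uniform(0.0, 0.2))    # kd
        c = pid_cost(*gains)
        if best_cost is None or c < best_cost:
            best_cost, best_gains = c, gains
    return best_cost, best_gains

best_cost, best_gains = tune_random()
```

Any population-based optimizer, SOA included, slots into `tune_random`'s place; the tuned gains beat a weak hand-picked proportional-only controller by a wide margin on this toy plant.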
Procedia PDF Downloads 303
5680 TessPy – Spatial Tessellation Made Easy
Authors: Jonas Hamann, Siavash Saki, Tobias Hagen
Abstract:
Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps and gaps is called tessellation. It helps in understanding spatial structure and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular tessellations and irregular tessellations. While regular tessellation methods, like square grids or hexagonal grids, are suitable for addressing pure geometry problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, however, allow the borders between subareas to be defined more realistically, based on urban features like a road network or points of interest (POI). Even though Python is one of the most used programming languages for spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare different techniques. To close this gap, we propose TessPy, an open-source Python package, which combines all the above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy implement five different tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. With the regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. The irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used are open-source and provided by OpenStreetMap; they can easily be extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed.
All dependencies can be installed using conda or pip; the former is recommended.
Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies
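A regular square tessellation, the simplest of the five methods, can be sketched without any dependencies. The function below illustrates the idea only; it is not TessPy's actual API, which operates on OpenStreetMap geometries rather than plain bounding boxes.

```python
def square_tessellation(min_x, min_y, max_x, max_y, resolution):
    """Split a bounding box into resolution x resolution rectangular tiles,
    returned as (x0, y0, x1, y1) tuples: no gaps, no overlaps."""
    dx = (max_x - min_x) / resolution
    dy = (max_y - min_y) / resolution
    tiles = []
    for i in range(resolution):
        for j in range(resolution):
            tiles.append((min_x + i * dx, min_y + j * dy,
                          min_x + (i + 1) * dx, min_y + (j + 1) * dy))
    return tiles

tiles = square_tessellation(0.0, 0.0, 1.0, 1.0, 4)
```

Raising `resolution` refines the discretization exactly as described for the regular methods above; the irregular methods replace the uniform grid with boundaries derived from road networks or POI clusters.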
Procedia PDF Downloads 128
5679 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks
Authors: Tugba Bayoglu
Abstract:
Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, the literature contains few missile aerodynamic parameter identification studies, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) the number and duration of missile flight tests are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with respect to Mach number is higher than that of fixed-wing aircraft. In addition to these challenges, identifying aerodynamic parameters at high wind angles with classical estimation techniques introduces a further difficulty: most estimation techniques employ polynomials or splines to model the behavior of the aerodynamics, and for missiles with a large variation of aerodynamic parameters with respect to the flight variables, the order of the required model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using artificial neural networks. The proposed method will be tested on simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, the parameters will be estimated from the flight variables and measurements. Finally, the prediction accuracy will be investigated.
Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification
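The appeal of the neural-network route over fixed polynomial or spline models is that the nonlinearity is learned from data rather than chosen in advance. The heavily simplified, hypothetical sketch below (pure Python, one input, one tanh hidden layer; a real setup would regress aerodynamic coefficients on flight variables such as Mach number and angle of attack) shows that mechanism: gradient descent drives the fit error down on a smooth nonlinear target without any model-order choice.

```python
import math
import random

def train_mlp(samples, hidden=8, lr=0.05, epochs=2000, seed=1):
    """One-hidden-layer tanh MLP fitted by full-batch gradient descent.
    Returns the mean-squared error before and after training."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(hidden)]  # input -> hidden
    b = [rng.uniform(-1, 1) for _ in range(hidden)]  # hidden biases
    v = [rng.uniform(-1, 1) for _ in range(hidden)]  # hidden -> output
    c = 0.0                                          # output bias

    def forward(x):
        h = [math.tanh(w[j] * x + b[j]) for j in range(hidden)]
        return h, sum(v[j] * h[j] for j in range(hidden)) + c

    def mse():
        return sum((forward(x)[1] - t) ** 2 for x, t in samples) / len(samples)

    before = mse()
    for _ in range(epochs):
        gw, gb = [0.0] * hidden, [0.0] * hidden
        gv, gc = [0.0] * hidden, 0.0
        for x, t in samples:
            h, y = forward(x)
            e = 2.0 * (y - t) / len(samples)         # d(MSE)/dy
            gc += e
            for j in range(hidden):
                gv[j] += e * h[j]
                dh = e * v[j] * (1.0 - h[j] ** 2)    # backprop through tanh
                gw[j] += dh * x
                gb[j] += dh
        for j in range(hidden):
            w[j] -= lr * gw[j]
            b[j] -= lr * gb[j]
            v[j] -= lr * gv[j]
        c -= lr * gc
    return before, mse()

# Stand-in nonlinearity for an aerodynamic coefficient curve
data = [(x / 10.0, math.sin(3.0 * x / 10.0)) for x in range(-10, 11)]
before, after = train_mlp(data)
```

The identification study would scale this idea up: multiple flight-variable inputs, noisy measurements as targets, and the trained network standing in for the aerodynamic database.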
Procedia PDF Downloads 279
5678 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions
Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla
Abstract:
With ongoing urbanization, cities face increasing environmental challenges that affect human well-being. To tackle these issues, data-driven approaches to urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning techniques enables researchers to analyze and predict complex environmental phenomena, such as urban heat island (UHI) occurrences, in urban areas. This paper demonstrates the implementation of a data-driven approach with interpretable machine learning algorithms and interpretability techniques to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies that mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models, with logistic regression emerging as the best-performing model based on the evaluation metrics. From this model, a mathematical equation is derived that separates areas with and without UHI effects, providing insights into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect
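As a toy rendition of the final modelling step (the feature names, coefficients and synthetic labels below are invented for illustration; the study's actual equation comes from the Tallinn geospatial data), a logistic model maps building features to a UHI/no-UHI decision, and its fitted weights are exactly the kind of interpretable equation the abstract describes:

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=300):
    """Plain logistic regression via stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            z = max(-30.0, min(30.0, z))        # clip to avoid exp overflow
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                           # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Class label from the sign of the linear score (decision boundary)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Invented features: (normalised building volume, height, footprint area);
# label 1 = UHI effect present; denser, taller fabric is labelled positive.
rng = random.Random(0)
X = [(rng.random(), rng.random(), rng.random()) for _ in range(200)]
y = [1 if 0.5 * v + 0.3 * h + 0.2 * a > 0.5 else 0 for v, h, a in X]
w, b = fit_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

The learned boundary `w . x + b = 0` plays the role of the derived formula: its coefficients can be read directly as the relative importance of volume, height and area in producing a UHI label, which is what makes the model interpretable.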
Procedia PDF Downloads 37
5677 Strategic Entrepreneurship: Model Proposal for Post-Troika Sustainable Cultural Organizations
Authors: Maria Inês Pinho
Abstract:
Recent literature on cultural management (also called strategic management for cultural organizations) systematically seeks models that allow such organizations to adapt to the constant change that occurs in contemporary societies. In the last decade, the world, and Europe in particular, has experienced a serious financial crisis that triggered defensive mechanisms, both in the direction of balancing public accounts and in the sense of an anonymous loss of the democratic and cultural values of each nation. In the first case, the Troika emerged, leading to strong cuts in funding for culture that deeply affected cultural organizations; in the second case, the ordinary citizen is seen fighting for the non-closure of cultural facilities. Despite this, the cultural manager argues that there is no single formula capable of addressing the need to adapt to change. Rather, it is up to this agent to know the existing scientific models and to adapt them in the best way to the reality of the institution he or she coordinates. These actions, as a rule, are concerned with the best performance vis-à-vis external audiences or with the financial sustainability of cultural organizations. They forget, therefore, that none of this machinery can function without the internal public, without Human Resources. The employees of the cultural organization must then have an entrepreneurial posture: they must be intrapreneurial. This paper intends to break with this form of action and lead the cultural manager to understand that his or her role should be to create value for society through good organizational performance. This is only possible with a posture of strategic entrepreneurship, in other words, with a link between cultural management, cultural entrepreneurship and cultural intrapreneurship.
To support this assumption, the case study methodology was applied to a symbol of the European Capital of Culture (Casa da Música), together with qualitative and quantitative techniques. The qualitative techniques included in-depth interviews with managers, founders and patrons, and focus groups with audiences with and without experience in managing cultural facilities. The quantitative techniques involved the application of a questionnaire to middle management and employees of Casa da Música. After triangulation of the data, it was found that the contemporary management of cultural organizations must implement, among its practices, the concept of strategic entrepreneurship and its variables. The topics that characterize the notion of cultural intrapreneurship (job satisfaction, quality of organizational performance, leadership, and employee engagement and autonomy) also emerged. The findings thus show that, to be sustainable, a cultural organization should meet the concerns of both the external and the internal forum; in other words, it should have an attitude of citizenship towards its communities, visible in social responsibility and participatory management, which is only possible with the implementation of the concept of strategic entrepreneurship and its variable of cultural intrapreneurship.
Keywords: cultural entrepreneurship, cultural intrapreneurship, cultural organizations, strategic management
Procedia PDF Downloads 182