Search results for: sediment transport models
6880 Heat Transfer Enhancement by Turbulent Impinging Jet with Jet's Velocity Field Excitations Using OpenFOAM
Authors: Naseem Uddin
Abstract:
Impinging jets are used in a variety of engineering and industrial applications. This paper presents numerical simulations of heat transfer by a turbulent impinging jet with velocity field excitations, using different Reynolds-Averaged Navier-Stokes (RANS) models. Detached Eddy Simulations are also conducted to investigate the differences in the prediction capabilities of these two simulation approaches. The excited jet is simulated in the open-source CFD code OpenFOAM with the goal of understanding the influence of the dynamics of the impinging jet on heat transfer. The jet's excitation frequencies are varied, keeping in view the preferred mode of the jet. The Reynolds number based on mean velocity and diameter is 23,000, and the jet outlet-to-target wall distance is two jet diameters. It is found that heat transfer at the target wall can be influenced by judicious selection of excitation amplitudes and frequencies.
Keywords: excitation, impinging jet, natural frequency, turbulence models
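As a back-of-the-envelope companion to the abstract: the preferred mode of a round jet is commonly associated with a Strouhal number St = f·D/U of roughly 0.3 (a textbook value, not a figure from this paper). A minimal sketch, assuming an example nozzle diameter and air properties:

```python
# Illustrative estimate of the excitation frequency tied to the jet's
# preferred mode. St ~ 0.3 is a textbook value for round jets; the nozzle
# diameter D is an assumed example value, not taken from the paper.
nu = 1.5e-5          # kinematic viscosity of air at ~20 C [m^2/s]
D = 0.02             # assumed nozzle diameter [m]
Re = 23_000          # Reynolds number quoted in the abstract
St_preferred = 0.3   # typical preferred-mode Strouhal number for round jets

U = Re * nu / D                       # mean jet exit velocity from Re = U*D/nu
f_excitation = St_preferred * U / D   # frequency of the preferred mode

print(f"Mean exit velocity: {U:.1f} m/s")
print(f"Preferred-mode excitation frequency: {f_excitation:.0f} Hz")
```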
Procedia PDF Downloads 271
6879 A Comparison of YOLO Family for Apple Detection and Counting in Orchards
Authors: Yuanqing Li, Changyi Lei, Zhaopeng Xue, Zhuo Zheng, Yanbo Long
Abstract:
In agricultural production and breeding, implementing automatic picking robots in orchard farming to reduce human labour and error is challenging. Their core function is automatic identification based on machine vision. This paper focuses on apple detection and counting in orchards and implements several deep learning methods. Extensive datasets are used, and a semi-automatic annotation method is proposed. The deep learning models applied are state-of-the-art members of the YOLO family. In view of the differences among the models' backbones, a detailed multi-dimensional comparison is made in terms of counting accuracy, mAP, and model memory, laying the foundation for realising automatic precision agriculture.
Keywords: agricultural object detection, deep learning, machine vision, YOLO family
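A minimal sketch of detection-based apple counting with a YOLO-family model, assuming the ultralytics Python package and a hypothetical fine-tuned weights file (not the authors' exact models):

```python
# Counting via detection: one bounding box per apple. The weights file
# "apple_detector.pt" is hypothetical; any single-class apple detector
# trained on annotated orchard images would plug in here.
from ultralytics import YOLO

model = YOLO("apple_detector.pt")            # hypothetical fine-tuned weights
results = model("orchard_image.jpg", conf=0.25)

for r in results:
    n_apples = len(r.boxes)                  # one box per detection
    print(f"Detected {n_apples} apples")
```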
Procedia PDF Downloads 195
6878 Measuring Housing Quality Using Geographic Information System (GIS)
Authors: Silvija Šiljeg, Ante Šiljeg, Ivan Marić
Abstract:
Housing quality is measured at both the objective and the subjective level using different indicators. In this research, 5 urban and housing indicators, formed from 58 variables across different housing domains, were used. The aims of the research were to measure housing quality using a GIS-based approach and to detect critical points of housing in the Croatian coastal town of Zadar. The purposes of GIS in the research are to generate housing quality index models by standardisation and aggregation of variables and to examine the accuracy of the housing quality index model. The accuracy analysis was carried out on the variable referring to the availability of educational facilities. By defining weighted coefficients and using different GIS methods, high, middle, and low housing quality zones were determined. The obtained results can be of use to town planners, spatial planners, and town authorities in the process of generating decisions, guidelines, and spatial interventions.
Keywords: housing quality, GIS, housing quality index, indicators, models of housing quality
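A minimal sketch of the standardisation-and-aggregation step behind such an index, with invented variables and weights (the study used 58 variables across several domains):

```python
# Variables are z-score standardised and combined with weighted
# coefficients whose sign encodes direction (lower distance/noise is
# better, more green area is better). Names and weights are invented.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "school_distance_m": [250, 1200, 600],    # lower is better
    "green_area_share":  [0.30, 0.05, 0.18],  # higher is better
    "noise_level_db":    [48, 65, 55],        # lower is better
})
weights = {"school_distance_m": -0.40, "green_area_share": 0.35,
           "noise_level_db": -0.25}

z = (df - df.mean()) / df.std()               # standardise each variable
index = sum(w * z[col] for col, w in weights.items())
print(index)                                  # higher = better housing quality
```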
Procedia PDF Downloads 296
6877 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections
Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette
Abstract:
A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the average entry delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of the entrance itself. Modern roundabout capacity models generally also lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single-entry capacity and ultimately derive the related performance indicators. Put simply, the main objective is to calculate the average delay of each roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction is made to the model combination technique with some practical instances. The next section summarizes the old TRRL rotary capacity model and the most recent HCM-7th modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage, a collection of existing rotary intersections operating with the priority-to-circle rule has already begun, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and a discussion of further research developments is opened.
Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation
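A sketch of the core capacity-and-delay computation, assuming an HCM-style exponential entry capacity model and the HCM control-delay equation; the coefficients below are illustrative single-lane values, not the calibrated HCM-7th parameters:

```python
# Entry capacity c = A*exp(-B*v_c) as a function of conflicting circulating
# flow, followed by the HCM-style average control delay used to assign a
# level of service. A and B are illustrative, not the HCM-7th values.
import math

def entry_capacity(v_conflict, A=1380.0, B=1.02e-3):
    """Entry capacity [pc/h] given conflicting circulating flow [pc/h]."""
    return A * math.exp(-B * v_conflict)

def control_delay(v_entry, capacity, T=0.25):
    """Average control delay [s/veh]; T is the analysis period [h]."""
    x = v_entry / capacity                     # volume-to-capacity ratio
    return (3600.0 / capacity
            + 900.0 * T * ((x - 1.0)
                           + math.sqrt((x - 1.0) ** 2
                                       + (3600.0 / capacity) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

c = entry_capacity(v_conflict=450)
print(f"capacity = {c:.0f} pc/h, delay = {control_delay(380, c):.1f} s/veh")
```

In the paper's combined scheme, a loop would scale the entry flow pattern until all entrances reach congestion simultaneously (the total capacity), then report each entrance's delay.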
Procedia PDF Downloads 84
6876 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience
Authors: Nkwenti Mbelli Njah
Abstract:
This research was motivated by a project requested by AXA on the topic of pensions payable under the workers' compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when the disability is less than 30% and the pension amount per year is less than six times the minimum national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated by applying the following bases: mortality table TD88/90 and a rate of interest of 5.25% (possibly with a management rate). Managing pensions which are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the discounted amount of payments reflecting the mortality effect for all pensioners (this task is monitored monthly in AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of the workers' compensation pensioners of both the Portuguese market and the AXA portfolio. Not only is past mortality modeled, but projections about future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split in two parts: a stochastic model for population mortality which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox Proportional, Brass Linear, and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute different life table functions as well as the not compulsorily recoverable reserves for each of the models required for the pensioners, their spouses, and children under 21. The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) that is currently being used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables.
Keywords: compulsorily recoverable, life table functions, relational models, workers' compensation pensioners
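A minimal sketch of one of the relational models named above, the Brass linear model, which regresses the logit survival of the portfolio on the logit survival of a standard table; the survival values are toy numbers:

```python
# Brass linear relational model: Y_portfolio = alpha + beta * Y_standard,
# where Y is the Brass logit of the survival function l(x). The survival
# columns below are invented toy values, not Portuguese or AXA data.
import numpy as np

def logit(l):
    # Brass logit of a survival probability l(x), with l in (0, 1)
    return 0.5 * np.log((1.0 - l) / l)

l_standard  = np.array([0.99, 0.97, 0.93, 0.85, 0.70, 0.45])
l_portfolio = np.array([0.992, 0.975, 0.94, 0.87, 0.74, 0.50])

# Fit alpha and beta by least squares on the logit scale
Y_s, Y_p = logit(l_standard), logit(l_portfolio)
beta, alpha = np.polyfit(Y_s, Y_p, 1)

# Map any standard-table value back to a predicted portfolio survival
l_hat = 1.0 / (1.0 + np.exp(2.0 * (alpha + beta * Y_s)))
print(alpha, beta, l_hat)
```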
Procedia PDF Downloads 163
6875 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. To date, however, X-rays have not been widely used to detect and diagnose COVID-19. Their underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image. It is trained on 8,577 images and validated on a validation split of 20%. The models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
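A compact sketch of the transfer-learning classifier described above, with a frozen pre-trained DenseNet201 backbone and a small dense head; the input size and head layout are assumptions, not the authors' exact architecture:

```python
# DenseNet201 pre-trained on ImageNet serves as a fixed feature extractor;
# a small dense head predicts the three diagnostic classes
# (COVID-19 / normal / pneumonia). Head layout is illustrative.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                       # freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 diagnostic classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```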
Procedia PDF Downloads 129
6874 Emerging Technologies in Distance Education
Authors: Eunice H. Li
Abstract:
This paper discusses and analyses a small portion of the literature reviewed for research work in Distance Education (DE) pedagogies that I am currently undertaking. It begins by presenting a brief overview of Taylor's (2001) five-generation models of Distance Education. The focus of the discussion is the 5th generation, the Intelligent Flexible Learning Model. For this generation, educational and other institutions make portal access and interactive multimedia (IMM) an integral part of their operations. The paper then takes a brief look at current trends in technologies, for example, smart-watch wearable technology such as the Apple Watch. The emergent technology trends carry many new features, which are compared to the features of earlier DE generations, as is the time span that has elapsed between the generations referred to in Taylor's model. This paper is a work in progress and therefore welcomes new insights, comparisons, and critique of the issues discussed.
Keywords: distance education, e-learning technologies, pedagogy, generational models
Procedia PDF Downloads 461
6873 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. This study uses environmental data from two U.S. Federal Government agencies: the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values so that the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the models' performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as metrics. The goal is to select an optimization strategy with the fewest errors, lowest cost, highest productivity, or maximum potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows an important improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
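A minimal sketch of a genetic-algorithm feature-selection loop of the kind described, wrapped around a Huber Regressor; population size, mutation rate, and the synthetic data are illustrative choices, not the study's settings:

```python
# Each individual is a boolean mask over the feature columns; fitness is
# cross-validated negative MSE of a Huber Regressor on the masked data.
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                    # stand-in environmental features
y = X[:, 0] * 2 - X[:, 3] + rng.normal(size=300)  # toy weekly case counts

def fitness(mask):
    if not mask.any():
        return -np.inf
    return cross_val_score(HuberRegressor(), X[:, mask], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for _ in range(30):                               # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-10:]       # keep the better half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05      # mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```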
Procedia PDF Downloads 64
6872 Scalable Learning of Tree-Based Models on Sparsely Representable Data
Authors: Fares Hedayati, Arnauld Joly, Panagiotis Papadimitriou
Abstract:
Many machine learning tasks, such as text annotation, usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated in the current version of scikit-learn (scikit-learn.org), the most popular open-source Python machine learning library.
Keywords: big data, sparsely representable data, tree-based models, scalable learning
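A sketch of the idea behind sparsity-aware split finding: for each feature, only the non-zero entries are materialised and scanned, while zeros are handled implicitly as a single block. This mirrors the approach in spirit; it is not the scikit-learn implementation:

```python
# Candidate split thresholds for one feature of a CSC matrix: the scan
# touches only the stored non-zeros; zero joins as one implicit value.
import numpy as np
from scipy.sparse import csc_matrix

def candidate_thresholds(X_csc, feature):
    col = X_csc.getcol(feature)
    nz_values = np.unique(col.data)          # only non-zeros are materialised
    if col.nnz < X_csc.shape[0]:             # column contains implicit zeros
        nz_values = np.union1d(nz_values, [0.0])
    # midpoints between consecutive distinct values are the usual candidates
    return (nz_values[:-1] + nz_values[1:]) / 2.0

X = csc_matrix(np.array([[0, 1.5], [0, 0], [2.0, 0], [0, 3.0]]))
print(candidate_thresholds(X, 0))   # scans 1 non-zero instead of 4 rows
print(candidate_thresholds(X, 1))
```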
Procedia PDF Downloads 261
6871 Numerical Simulation and Experimental Validation of the Tire-Road Separation in Quarter-car Model
Authors: Quy Dang Nguyen, Reza Nakhaie Jazar
Abstract:
The paper investigates the vibration dynamics of tire-road separation for a quarter-car model; this separation model is developed to be close to the real situation, considering that the tire is able to separate from the ground plane. A set of piecewise-linear mathematical models is developed, matching the in-contact and no-contact states, to be considered as mother models for further investigations. The bound dynamics are numerically simulated in time responses and phase portraits. The separation analysis may determine which values of the suspension parameters can delay and avoid the no-contact phenomenon, which results in improved ride comfort and eliminates potentially dangerous oscillations. Finally, model verification is carried out in the MSC ADAMS environment.
Keywords: quarter-car vibrations, tire-road separation, separation analysis, separation dynamics, ride comfort, ADAMS validation
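A sketch of a piecewise-linear quarter-car model with tire-road separation: the tire spring acts only while its deflection is compressive, otherwise the tire force is zero. Parameter values are generic illustrative numbers, not the paper's:

```python
# Two-state (in-contact / no-contact) quarter-car model under a harmonic
# road input, switched inside the right-hand side of the ODE system.
import numpy as np
from scipy.integrate import solve_ivp

ms, mu = 300.0, 40.0            # sprung / unsprung masses [kg]
ks, cs = 20_000.0, 1_200.0      # suspension stiffness [N/m], damping [Ns/m]
kt = 180_000.0                  # tire stiffness [N/m]

def road(t):                    # harmonic road input [m]
    return 0.02 * np.sin(2 * np.pi * 8.0 * t)

def rhs(t, z):
    xs, vs, xu, vu = z
    f_susp = ks * (xu - xs) + cs * (vu - vs)
    defl = road(t) - xu                      # tire deflection
    f_tire = kt * defl if defl > 0 else 0.0  # separation: no tensile tire force
    return [vs, f_susp / ms, vu, (-f_susp + f_tire) / mu]

sol = solve_ivp(rhs, (0, 2), [0, 0, 0, 0], max_step=1e-3)
contact = (road(sol.t) - sol.y[2]) > 0
print(f"in contact {contact.mean():.0%} of the simulated time")
```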
Procedia PDF Downloads 89
6870 Empirical and Indian Automotive Equity Portfolio Decision Support
Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu
Abstract:
A brief review of the empirical studies on stock market decision support methodology indicates that they are at the threshold of validating the accuracy of traditional, fuzzy, artificial neural network, and decision tree models. Many researchers have been attempting to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support for stock market decisions. The study identifies the significant variables, and their lags, which affect the price of the stocks, using OLS analysis and decision tree classifiers.
Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis
Procedia PDF Downloads 484
6869 Engineering Seismological Studies in and around Zagazig City, Sharkia, Egypt
Authors: M. El-Eraki, A. A. Mohamed, A. A. El-Kenawy, M. S. Toni, S. I. Mustafa
Abstract:
The aim of this paper is to study ground vibrations using the Nakamura technique to evaluate the relation between ground conditions and earthquake characteristics. Microtremor measurements were carried out at 55 sites in and around Zagazig city. The signals were processed using the horizontal-to-vertical spectral ratio (HVSR) technique to estimate the fundamental frequencies of the soil deposits and their corresponding H/V amplitudes. Seismic measurements were acquired at nine sites for recording the surface waves. The recorded waveforms were processed using the multi-channel analysis of surface waves (MASW) method to infer the shear wave velocity profile. The obtained fundamental frequencies range from 0.7 to 1.7 Hz, and the maximum H/V amplitude reached 6.4. These results, together with the average shear wave velocity in the surface layers, were used to estimate the thickness of the uppermost soft cover layers (depth to bedrock). The sediment thickness generally increases in the northeastern and southwestern parts of the area, which is in good agreement with the local geological structure. The results of this work identify the zones of higher potential damage in the event of an earthquake in the study area.
Keywords: ambient vibrations, fundamental frequency, surface waves, Zagazig
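A minimal HVSR sketch in the spirit of the Nakamura technique, with synthetic signals standing in for real microtremor records; the final line uses the standard quarter-wavelength relation h = Vs/(4·f0) with an assumed shear-wave velocity:

```python
# Amplitude spectra of the horizontal components are combined (geometric
# mean) and divided by the vertical spectrum; the peak of the ratio in the
# search band gives the fundamental frequency f0.
import numpy as np

fs = 100.0                                   # sampling rate [Hz]
t = np.arange(0, 300, 1 / fs)                # 5-minute synthetic record
rng = np.random.default_rng(1)
ns = rng.normal(size=t.size)                 # NS, EW, UD stand-in signals
ew = rng.normal(size=t.size)
ud = rng.normal(size=t.size) * 0.5

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec = lambda x: np.abs(np.fft.rfft(x * np.hanning(t.size)))
h_spec = np.sqrt(spec(ns) * spec(ew))        # geometric mean of horizontals
hvsr = h_spec / spec(ud)

band = (freqs > 0.5) & (freqs < 10.0)        # search band for f0
f0 = freqs[band][np.argmax(hvsr[band])]
vs = 200.0                                   # assumed average Vs [m/s]
print(f"f0 = {f0:.2f} Hz, depth to bedrock ~ {vs / (4 * f0):.0f} m")
```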
Procedia PDF Downloads 282
6868 Impact of Integrated Signals for Doing Human Activity Recognition Using Deep Learning Models
Authors: Milagros Jaén-Vargas, Javier García Martínez, Karla Miriam Reyes Leiva, María Fernanda Trujillo-Guerrero, Francisco Fernandes, Sérgio Barroso Gonçalves, Miguel Tavares Silva, Daniel Simões Lopes, José Javier Serrano Olmedo
Abstract:
Human Activity Recognition (HAR) is having a growing impact on the creation of new applications and is responsible for emerging new technologies. The use of wearable sensors is an important key to exploring the behavior of the human body when performing activities; these devices are less invasive, and the person is more comfortable. In this study, a database that includes three activities is used. The activities were acquired from inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. The main objective is to compare the performance of four Deep Learning (DL) models: Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and the hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model, when considering acceleration, velocity, and position, and to evaluate whether integrating the IMU acceleration to obtain velocity and position yields an increase in performance when they serve as input to the DL models, compared with the same type of data provided by the MOCAP system. Although the acceleration data are cleaned before integration, results show only a minimal increase in accuracy for the integrated signals.
Keywords: HAR, IMU, MOCAP, acceleration, velocity, position, feature maps
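A sketch of the signal-integration step described above, obtaining velocity and position from IMU acceleration by numerical integration; real pipelines de-bias and filter first, since raw integration drifts, and the sample data here are synthetic:

```python
# Integrate acceleration once for velocity, twice for position, then stack
# the three channels as an input window for the DL models.
import numpy as np
from scipy.integrate import cumulative_trapezoid

fs = 100.0                                   # IMU sampling rate [Hz]
t = np.arange(0, 5, 1 / fs)
acc = np.sin(2 * np.pi * 1.0 * t)            # synthetic 1 Hz acceleration

acc = acc - acc.mean()                       # crude bias removal
vel = cumulative_trapezoid(acc, t, initial=0.0)
pos = cumulative_trapezoid(vel, t, initial=0.0)

features = np.stack([acc, vel, pos], axis=-1)
print(features.shape)                        # (500, 3)
```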
Procedia PDF Downloads 96
6867 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to get enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but as yet there is neither a reference method nor an estimation of the impact of these different techniques on the colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly their size distribution and their elementary composition. Ultrafiltration methods (frontal, tangential, and centrifugal) have been considered since they are widely used for the extraction of colloids in natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Total Organic Carbon (TOC) analysis) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the colloidal phase composition more efficiently, while others are easier and faster to use. The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.
Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution
Procedia PDF Downloads 214
6866 Saltwater Intrusion Studies in the Cai River in the Khanh Hoa Province, Vietnam
Authors: B. Van Kessel, P. T. Kockelkorn, T. R. Speelman, T. C. Wierikx, C. Mai Van, T. A. Bogaard
Abstract:
Saltwater intrusion is a common problem in estuaries around the world, as it can hinder the freshwater supply of coastal zones. This problem is likely to grow due to climate change and sea-level rise. The influence of these factors on saltwater intrusion was investigated for the Cai River in the Khanh Hoa province in Vietnam. In addition, the Cai River has high seasonal fluctuations in discharge, leading to increased saltwater intrusion during the dry season. Sea-level rise, river discharge changes, river mouth widening, and a proposed saltwater intrusion prevention dam can all influence the saltwater intrusion but have not been quantified for the Cai River estuary. This research used both an analytical and a numerical model to investigate the effect of the aforementioned factors. The analytical model was based on a model proposed by Savenije and was calibrated using limited in situ data. The numerical model was a 3D hydrodynamic model made using the Delft3D4 software. Both the analytical and the numerical model agreed with in situ data, mostly for tidally averaged data. Both models indicated a roughly similar dependence on discharge, agreeing that this parameter had the most severe influence on the modeled saltwater intrusion. Especially for discharges below 10 m³/s, the saltwater was predicted to reach further than 10 km. In the models, both sea-level rise and river widening mainly resulted in salinity increments of up to 3 kg/m³ in the middle part of the river. The predicted sea-level rise in 2070 was simulated to lead to an increase of 0.5 km in saltwater intrusion length. Furthermore, the effect of the saltwater intrusion dam seemed significant in the model used, but only for the highest position of the gate.
Keywords: Cai River, hydraulic models, river discharge, saltwater intrusion, tidal barriers
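A strongly simplified, constant-geometry version of the tidally averaged salt balance that underlies analytical intrusion models such as Savenije's; the geometry and dispersion values are illustrative, not calibrated Cai River parameters:

```python
# Advection by river discharge Q balances dispersive transport, giving an
# exponential salinity profile and an intrusion length down to a threshold.
import numpy as np

Q = 10.0            # river discharge [m^3/s]
A = 800.0           # cross-sectional area [m^2]
D = 50.0            # longitudinal dispersion coefficient [m^2/s]
s0 = 30.0           # salinity at the mouth [kg/m^3]
s_limit = 1.0       # threshold defining the intrusion limit [kg/m^3]

x = np.linspace(0, 30_000, 301)             # distance from the mouth [m]
s = s0 * np.exp(-Q * x / (A * D))           # steady-state salinity profile

L = (A * D / Q) * np.log(s0 / s_limit)      # intrusion length to threshold
print(f"intrusion length ~ {L / 1000:.1f} km")
```

The inverse dependence of L on Q in this toy balance is why low dry-season discharges dominate the intrusion in both of the study's models.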
Procedia PDF Downloads 109
6865 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation, and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations, improving the performance of construction projects in terms of cost, time, and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions, and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling accommodates conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models depending on their scope, mathematical formulation, structure, objectives, solution approach, software used, and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we identify that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming, and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected, and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide, and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without the consequent reform of generic concepts to match the unique characteristics of the construction industry.
Keywords: construction supply chain management, modeling, operations research, optimization, simulation
Procedia PDF Downloads 502
6864 Monte Carlo Simulation of X-Ray Spectra in Diagnostic Radiology and Mammography Using MCNP4C
Authors: Sahar Heidary, Ramin Ghasemi Shayan
Abstract:
The Monte Carlo N-Particle radiation transport computer program (MCNP4C) was used for the simulation of x-ray spectra in diagnostic radiology and mammography. The electrons were transported until they slow down and stop in the target, and both bremsstrahlung and characteristic x-ray production were considered in this study. The x-ray spectra predicted by several computational models used in the diagnostic radiology and mammography energy range have been calculated and evaluated by comparison with measured spectra, and their effect on the calculation of absorbed dose and effective dose (ED) delivered to the adult ORNL hermaphroditic phantom quantified. This comprises empirical models (TASMIP and MASMIP), semi-empirical models (X-rayb&m, X-raytbc, XCOMP, IPEM, Tucker et al., and Blough et al.), and Monte Carlo modeling (EGS4, ITS3.0, and MCNP4C). Images obtained using synchrotron radiation (SR), with both a screen-film and a CR system, were compared with images of the same samples obtained with digital mammography equipment. In view of the good quality of the results obtained, the CR system was used in two mammographic examinations with SR. For each mammography unit, bilateral mediolateral oblique (MLO) and craniocaudal (CC) mammograms were obtained in a woman with fatty breasts and a woman with dense breasts. Referees assessed the true findings and definite absences that led to a decision to fail the part of the protocol that produced the clinical images.
Keywords: mammography, Monte Carlo, effective dose, radiology
Procedia PDF Downloads 129
6863 A Systematic Review of Business Strategies Which Can Make District Heating a Platform for Sustainable Development of Other Sectors
Authors: Louise Ödlund, Danica Djuric Ilic
Abstract:
Sustainable development includes many challenges related to energy use, such as (1) developing flexibility on the demand side of electricity systems due to an increased share of intermittent electricity sources (e.g., wind and solar power), (2) overcoming economic challenges related to an increased share of renewable energy in the transport sector, (3) increasing the efficiency of biomass use, and (4) increasing the utilization of industrial excess heat (approximately two-thirds of the energy currently used in the EU is lost in the form of excess and waste heat). The European Commission has recognized district heating (DH) technology as being of essential importance to reach sustainability. Flexibility in the fuel mix, possibilities for industrial waste heat utilization, combined heat and power (CHP) production, and energy recovery through waste incineration are only some of the benefits which characterize DH technology. The aim of this study is to provide an overview of the possible business strategies which would enable DH to play an important role in future sustainable energy systems. The methodology used in this study is a systematic literature review. The study includes a systematic approach where DH is seen as a part of an integrated system that also comprises the transport, industrial, and electricity sectors. DH technology can play a decisive role in overcoming the sustainability challenges related to our energy use. The introduction of biofuels in the transport sector can be facilitated by integrating biofuel and DH production in local DH systems. This would enable the development of local biofuel supply chains and reduce biofuel production costs. In this way, DH can also promote the development of biofuel production technologies that are not yet mature. Converting the energy used for running industrial processes from fossil fuels and electricity to DH (above all biomass- and waste-based DH) and delivering excess heat from industrial processes to the local DH systems would make industry less dependent on fossil fuels and fossil-fuel-based electricity, while increasing the energy efficiency of the industrial sector and reducing production costs. The electricity sector would also benefit from these measures. Reducing electricity use in the industrial sector while at the same time increasing CHP production in the local DH systems would replace fossil-based electricity production with electricity from biomass- or waste-fueled CHP plants and reduce the capacity requirements on the national electricity grid (i.e., it would reduce the pressure on the bottlenecks in the grid). Furthermore, by operating their centrally controlled heat pumps and CHP plants in response to the variation in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national electricity grid.
Keywords: energy system, district heating, sustainable business strategies, sustainable development
Procedia PDF Downloads 169
6862 Analytics Model in a Telehealth Center Based on Cloud Computing and Local Storage
Authors: L. Ramirez, E. Guillén, J. Sánchez
Abstract:
Some of the main goals of telecare, such as monitoring, treatment, and telediagnosis, are achieved through the integration of applications with specific appliances. In order to achieve a coherent model that integrates software, hardware, and healthcare systems, different telehealth models based on the Internet of Things (IoT), cloud computing, artificial intelligence, etc., have been implemented, and their advantages are still under analysis. In this paper, we propose an integrated model based on an IoT architecture and a cloud computing telehealth center. An analytics module is presented as a solution to support reliable diagnosis of some diseases. Specific features are then compared with those of recently deployed conventional models in telemedicine. The main advantage of this model is the ability to control the security and privacy of patient information and to optimize the processing and acquisition of clinical parameters according to technical characteristics.
Keywords: analytics, telemedicine, internet of things, cloud computing
Procedia PDF Downloads 324
6861 A Method to Enhance the Accuracy of Digital Forensic in the Absence of Sufficient Evidence in Saudi Arabia
Authors: Fahad Alanazi, Andrew Jones
Abstract:
Digital forensics seeks to achieve the successful investigation of digital crimes by obtaining acceptable evidence from digital devices that can be presented in a court of law. Thus, the digital forensics investigation is normally performed through a number of phases in order to achieve the required level of accuracy in the investigation processes. Since 1984, a number of models and frameworks have been developed to support the digital investigation processes. In this paper, we review a number of the investigation processes that have been produced throughout the years and introduce a proposed digital forensic model based on the scope of the Saudi Arabian investigation process. The proposed model integrates existing models of the investigation processes and introduces a new phase to deal with situations where there is initially insufficient evidence.
Keywords: digital forensics, process, metadata, Traceback, Saudi Arabia
Procedia PDF Downloads 358
6860 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks
Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf
Abstract:
Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since different techniques may be applied that significantly increase the learning speed while enabling controlled trade-offs with the resulting error. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, or which one yields the best results, has not yet been discussed. This paper aims to remedy this situation by providing empirical results for different applications.
Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks
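A minimal ODE-network sketch: a residual-style block is treated as the right-hand side f(x) of dx/dt = f(x) and integrated with fixed-step Euler, so gradients flow through the solver steps and any classical gradient-based optimiser can train it. A toy illustration, not the paper's benchmark setup:

```python
# Fixed-step Euler integration of dx/dt = f(x) inside a trainable block;
# backpropagation runs straight through the solver iterations.
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    def __init__(self, dim=16, steps=8):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                               nn.Linear(dim, dim))
        self.steps = steps

    def forward(self, x, t1=1.0):
        h = t1 / self.steps                 # Euler step size
        for _ in range(self.steps):         # x <- x + h * f(x)
            x = x + h * self.f(x)
        return x

model = nn.Sequential(nn.Linear(4, 16), ODEBlock(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()   # one gradient-based update
print(loss.item())
```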
Procedia PDF Downloads 167
6859 Estimate Robert Gordon University's Scope Three Emissions by Nearest Neighbor Analysis
Authors: Nayak Amar, Turner Naomi, Gobina Edward
Abstract:
The Scottish Higher Education Institutions must report their scope 1 and 2 emissions, whereas reporting scope 3 is optional. Scope 3 covers indirect emissions, which embody a significant component of the total carbon footprint, and it is therefore important to record, measure, and report them accurately. Robert Gordon University (RGU) reported only business travel, grid transmission and distribution, water supply and transport, and recycling scope 3 emissions. This study estimates RGU's total scope 3 emissions by comparing it with an HEI of similar scale. The scope 3 emission reporting of sixteen Scottish HEIs was studied. Glasgow Caledonian University was identified as the nearest neighbour by comparing its full-time equivalent students, full-time equivalent staff, research-teaching split, budget, and foundation year. Apart from the peer, data was also collected from the Higher Education Statistics Agency database. This study estimated RGU's scope 3 emissions from procurement, student and staff commuting, and international student trips. The results showed that RGU's reporting covered only 11% of its scope 3 emissions. The major contributors to scope 3 emissions were procurement (48%), student commuting (21%), international student trips (16%), and staff commuting (4%). The estimated scope 3 emissions were more than 14 times the reported emissions. This study has shown the relative importance of each scope 3 emission source, which gives HEIs a guideline on where to focus their attention to capture the maximum scope 3 emissions. Moreover, it has demonstrated that it is possible to estimate scope 3 emissions with limited data.
Keywords: HEI, university, emission calculations, scope 3 emissions, emissions reporting
Procedia PDF Downloads 98
6858 Beyond the Effect on Children: Investigation on the Longitudinal Effect of Parental Perfectionism on Child Maltreatment
Authors: Alice Schittek, Isabelle Roskam, Moira Mikolajczak
Abstract:
Background: Perfectionistic strivings (PS) and perfectionistic concerns (PC) are associated with an increase in parental burnout (PB), and PB causally increases violence towards the offspring. Objective: To the best of our knowledge, no study has ever investigated whether perfectionism (PS and PC) predicts violence towards the offspring and whether PB could explain this link. We hypothesized that an increase in PS and PC would lead to an increase in violence via an increase in PB. Method: 228 participants responded to an online survey, with three measurement occasions spaced two months apart. Results: Contrary to expectations, cross-lagged path models revealed that violence towards the offspring prospectively predicts an increase in PS and PC. Mediation models showed that PB is not a significant mediator. The results of all models did not change when controlling for social desirability. Conclusion: The present study shows that violence towards the offspring increases the risk of PS and PC in parents, which highlights the importance of understanding the effect of child maltreatment on the whole family system and not just on children. Results are discussed in light of the feelings of guilt experienced by parents. Considering the non-significant mediation effect, PB research should gradually shift towards more (quasi-)causal designs, allowing researchers to identify which significant correlations translate into causal effects. Implications: Clinicians should focus on preventing child maltreatment as well as treating parental perfectionism. Researchers should unravel the effects of child maltreatment on the family system.
Keywords: maltreatment, parental burnout, perfectionistic strivings, perfectionistic concerns, perfectionism, violence
Procedia PDF Downloads 70
6857 Perfectionism, Self-Compassion, and Emotion Dysregulation: An Exploratory Analysis of Mediation Models in an Eating Disorder Sample
Authors: Sarah Potter, Michele Laliberte
Abstract:
As eating disorders are associated with high levels of chronicity, impairment, and distress, it is paramount to evaluate factors that may improve treatment outcomes in this group. Individuals with eating disorders exhibit elevated levels of perfectionism and emotion dysregulation, as well as reduced self-compassion. These variables are related to eating disorder outcomes, including shape/weight concerns and psychosocial impairment, and may thus be tenable targets for treatment within eating disorder populations. However, the relative contributions of perfectionism, emotion dysregulation, and self-compassion to the severity of shape/weight concerns and psychosocial impairment remain largely unexplored. In the current study, mediation analyses were conducted to clarify how perfectionism, emotion dysregulation, and self-compassion are linked to shape/weight concerns and psychosocial impairment. The sample comprised 85 patients from an outpatient eating disorder clinic. The patients completed self-report measures of perfectionism, self-compassion, emotion dysregulation, eating disorder symptoms, and psychosocial impairment. Specifically, emotion dysregulation was assessed as a mediator in the relationships between (1) perfectionism and shape/weight concerns, (2) self-compassion and shape/weight concerns, (3) perfectionism and psychosocial impairment, and (4) self-compassion and psychosocial impairment. It was postulated that emotion dysregulation would significantly mediate the relationships in the former two models. An a priori hypothesis was not constructed for the latter models, as these analyses were preliminary and exploratory in nature. The PROCESS macro for SPSS was utilized to perform these analyses. Emotion dysregulation fully mediated the relationships between perfectionism and eating disorder outcomes. In the link between self-compassion and psychosocial impairment, emotion dysregulation was a partial mediator. Finally, emotion dysregulation did not significantly mediate the relationship between self-compassion and shape/weight concerns. The results suggest that emotion dysregulation and self-compassion may be suitable treatment targets to decrease the severity of psychosocial impairment and shape/weight concerns in individuals with eating disorders. Further research is required to determine the stability of these models over time, between diagnostic groups, and in nonclinical samples.
Keywords: eating disorders, emotion dysregulation, perfectionism, self-compassion
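A sketch of a PROCESS-style simple mediation with a percentile bootstrap for the indirect effect a·b, here on simulated data with perfectionism as X, emotion dysregulation as M, and shape/weight concerns as Y; it mirrors the analysis type, not the study's data:

```python
# Path a: X -> M. Path b: M -> Y controlling for X. The indirect effect
# a*b is bootstrapped and summarised with a 95% percentile interval.
import numpy as np

rng = np.random.default_rng(42)
n = 85
X = rng.normal(size=n)                       # perfectionism
M = 0.5 * X + rng.normal(size=n)             # emotion dysregulation
Y = 0.6 * M + 0.1 * X + rng.normal(size=n)   # shape/weight concerns

def slope(x, y, covar=None):
    # OLS coefficient of x in a regression of y on x (+ optional covariate)
    cols = [np.ones_like(x), x] + ([covar] if covar is not None else [])
    beta = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]
    return beta[1]

boot = []
for _ in range(5000):
    i = rng.integers(0, n, n)                # resample cases with replacement
    a = slope(X[i], M[i])
    b = slope(M[i], Y[i], covar=X[i])
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```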
Procedia PDF Downloads 142
6856 The Market Structure Simulation of Heterogenous Firms
Authors: Arunas Burinskas, Manuela Tvaronavičienė
Abstract:
Although the new trade theories, unlike theories of industrial organisation, see the structure of the market and competition between enterprises through their heterogeneity in various parameters, they do not pay particular attention to the analysis of the market structure and its development. In this article, although we rely mainly on models developed by scholars of new trade theory, we propose a different approach. In our simulation model, we model market demand with a normal distribution function, while on the supply side (as in the new trade theory models), productivity is modeled with a Pareto distribution function. The results of the simulation show that companies with higher productivity (lower marginal costs) do not pass on all the benefits of such economies to buyers. However, even with higher marginal costs, firms can choose to offer higher value-added goods to stay in the market. In general, the structure of the market forms quickly enough and depends on the skills available to firms.
Keywords: market, structure, simulation, heterogenous firms
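A toy version of the simulation described above: productivities drawn from a Pareto distribution, willingness to pay from a normal distribution, and prices that pass on only part of the cost advantage; all parameter values are illustrative:

```python
# Firms with higher productivity have lower marginal cost; a fixed markup
# means the cost advantage is only partly passed on to buyers.
import numpy as np

rng = np.random.default_rng(7)
n_firms, n_buyers = 200, 10_000

productivity = rng.pareto(a=3.0, size=n_firms) + 1.0    # Pareto draws
marginal_cost = 1.0 / productivity                      # higher phi -> lower cost
willingness = rng.normal(loc=1.2, scale=0.3, size=n_buyers)

markup = 0.25
price = marginal_cost * (1.0 + markup)                  # partial pass-through

cheapest = price.min()
served = (willingness >= cheapest).mean()               # buyers the market reaches
survivors = (price <= willingness.max()).sum()          # firms with any demand

print(f"cheapest price: {cheapest:.2f}, share of buyers served: {served:.0%}")
print(f"firms able to sell to at least one buyer: {survivors}/{n_firms}")
```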
Procedia PDF Downloads 144
6855 Thermodynamic Modelling of Liquid-Liquid Equilibria (LLE) in the Separation of p-Cresol from the Coal Tar by Solvent Extraction
Authors: D. S. Fardhyanti, Megawati, W. B. Sediawan
Abstract:
Coal tar is a liquid by-product of coal gasification and carbonation processes. This liquid oil mixture contains various kinds of useful compounds, such as aromatic and phenolic compounds. These compounds are widely used as raw materials for insecticides, dyes, medicines, perfumes, coloring matters, and many others. This research investigates the thermodynamic modelling of liquid-liquid equilibria (LLE) in the separation of p-cresol from coal tar by solvent extraction. The equilibria are modeled with ternary-component Wohl, Van Laar, and Three-Suffix Margules models. The values of the parameters involved are obtained by curve-fitting to the experimental data. Based on the comparison between calculated and experimental data, it turns out that among the three models studied, the Three-Suffix Margules model seems to be the best at predicting the LLE of p-cresol mixtures for those systems.
Keywords: coal tar, phenol, Wohl, Van Laar, Three-Suffix Margules
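For reference, the binary two-parameter (three-suffix) Margules expressions for activity coefficients, as a sketch of the model family fitted in the paper (the actual work fits ternary versions to coal-tar LLE data); the parameter values are illustrative, not the fitted ones:

```python
# ln(gamma1) = x2^2 * [A12 + 2*(A21 - A12)*x1]
# ln(gamma2) = x1^2 * [A21 + 2*(A12 - A21)*x2]
import numpy as np

def margules_gamma(x1, A12, A21):
    """Activity coefficients from the three-suffix Margules model."""
    x2 = 1.0 - x1
    ln_g1 = x2 ** 2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1 ** 2 * (A21 + 2.0 * (A12 - A21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

x1 = np.linspace(0.01, 0.99, 5)
g1, g2 = margules_gamma(x1, A12=1.8, A21=2.3)   # illustrative parameters
for xi, a, b in zip(x1, g1, g2):
    print(f"x1={xi:.2f}  gamma1={a:.3f}  gamma2={b:.3f}")
```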
Procedia PDF Downloads 256
6854 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and to reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
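A sketch of the model-agnostic detection step: inputs whose autoencoder reconstruction error exceeds a threshold calibrated on benign data are flagged as likely adversarial; the encoder/decoder shapes and the threshold rule are illustrative, not the deployed architecture:

```python
# A small convolutional autoencoder; per-image MSE above a benign-data
# threshold flags a candidate as adversarial.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def is_adversarial(model, x, threshold):
    with torch.no_grad():
        mse = ((model(x) - x) ** 2).mean(dim=(1, 2, 3))  # per-image MSE
    return mse > threshold                # True -> flag as adversarial

model = ConvAutoencoder().eval()          # would be trained on benign images
x = torch.rand(8, 3, 256, 256)            # batch of candidate images
print(is_adversarial(model, x, threshold=0.014))  # threshold near benign MSE
```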
Procedia PDF Downloads 111
6853 Presenting a Knowledge Mapping Model According to a Comparative Study on Applied Models and Approaches to Map Organizational Knowledge
Authors: Ahmad Aslizadeh, Farid Ghaderi
Abstract:
Mapping organizational knowledge is an innovative concept and a useful instrument for representing, capturing, and visualizing implicit and explicit knowledge. There is a diversity of methods, instruments, and techniques presented by different researchers for mapping organizational knowledge to reach determined goals. To apply these methods, it is necessary to know their requirements and the conditions in which they can be used. Integrating the identified methods of knowledge mapping and comparing them would help knowledge managers to select the appropriate method. This research was conducted to present a model and framework for mapping organizational knowledge. First, knowledge maps, their applications, and their necessity are introduced in order to extract a comparative framework and detect their structure. In the next step, the techniques of researchers such as Eppler, Kim, Egbu, Tandukar, and Ebner are presented and surveyed as knowledge mapping models. Finally, they are compared and a superior model is introduced.
Keywords: knowledge mapping, knowledge management, comparative study, business and management
Procedia PDF Downloads 401
6852 MIMIC: A Multi Input Micro-Influencers Classifier
Authors: Simone Leonardi, Luca Ardito
Abstract:
Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts on Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model with different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media
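A sketch of the ensemble step: deep-learning features extracted from posts and account statistics are concatenated into one feature vector for an XGBoost classifier; the feature blocks and dimensions are invented for illustration:

```python
# Text embeddings, image embeddings, and account statistics are stacked
# into one vector per account; XGBoost does the final classification.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1200
text_emb = rng.normal(size=(n, 64))          # stand-in text embeddings
img_emb = rng.normal(size=(n, 32))           # stand-in image embeddings
account = rng.normal(size=(n, 8))            # followers, posts, ratios, ...
X = np.hstack([text_emb, img_emb, account])
y = rng.integers(0, 2, size=n)               # micro-influencer yes/no

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```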
Procedia PDF Downloads 183
6851 Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model
Authors: M. Varin, A. M. Dubois, R. Gadbois-Langevin, B. Chalghaf
Abstract:
Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs gives the capacity to generate high-precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous acquisition cost. The objective of the study was to assess the quality of stereo-derived canopy height models (CHMs) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHMs. Acquisition of multispectral images from an unmanned aerial vehicle (UAV) was also carried out on a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique which offered the best performance when compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to the satellite, LiDAR, and UAV CHMs, the RMSEs were 2.8, 2.0, and 2.0 m, respectively. Advanced analysis was done for all of these cases, and it was noted that the RMSE is reduced when the canopy cover is higher, when shadow and slopes are lower, and when clouds are distant from the analyzed site.
Keywords: very high spatial resolution, satellite imagery, WorldView-3, canopy height models, CHM, LiDAR, unmanned aerial vehicle, UAV
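A sketch of the two computations behind the comparison above: a canopy height model as the difference between a surface model and a terrain model, and the RMSE between a stereo-derived CHM and a LiDAR CHM; the array inputs stand in for co-registered rasters:

```python
# CHM = DSM - DTM (clipped at zero), then RMSE over valid overlapping cells.
import numpy as np

def canopy_height_model(dsm, dtm):
    """CHM = digital surface model minus digital terrain model."""
    return np.clip(dsm - dtm, 0.0, None)     # negative heights set to zero

def rmse(a, b):
    mask = ~np.isnan(a) & ~np.isnan(b)       # compare valid cells only
    return float(np.sqrt(np.mean((a[mask] - b[mask]) ** 2)))

rng = np.random.default_rng(3)
dtm = rng.uniform(100, 120, size=(50, 50))   # toy terrain elevations [m]
chm_lidar = rng.uniform(0, 25, size=(50, 50))
dsm_stereo = dtm + chm_lidar + rng.normal(scale=1.3, size=(50, 50))

chm_stereo = canopy_height_model(dsm_stereo, dtm)
print(f"RMSE vs LiDAR: {rmse(chm_stereo, chm_lidar):.2f} m")
```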
Procedia PDF Downloads 124