Search results for: evolution algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5280

1350 Rationalized Haar Transforms Approach to Design of Observer for Control Systems with Unknown Inputs

Authors: Joon-Hoon Park

Abstract:

The fundamental concept of observability is important from both theoretical and practical standpoints in modern control systems. In modern control theory, a control system has criteria for determining whether a design solution exists for the given system parameters and design objectives. The idea of observability relates to the condition of observing or estimating the state variables from the output variables, which are generally measurable. To design a closed-loop control system, the practical problems of implementing feedback of the state variables must be considered: not all the state variables are available, so it is necessary to design and implement an observer that will estimate the state variables from the output variables. In practical cases, however, unknown inputs may also be present in the control system. This paper presents a design method and algorithm for an observer of a control system with unknown input parameters based on the rationalized Haar transform. The proposed method is more advantageous than other numerical methods.
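As a minimal sketch of the transform behind the method: the rationalized Haar matrix is the Haar matrix without the irrational √2 scaling factors, so its entries are only 0 and ±1, and it can be generated recursively. The Python construction below illustrates that standard definition; it is not the paper's observer algorithm itself.

```python
import numpy as np

def rationalized_haar(m):
    """m x m rationalized Haar matrix (m a power of 2); entries are 0 and +/-1."""
    H = np.array([[1]])
    while H.shape[0] < m:
        n = H.shape[0]
        top = np.kron(H, np.array([[1, 1]]))                         # averaging rows
        bottom = np.kron(np.eye(n, dtype=int), np.array([[1, -1]]))  # differencing rows
        H = np.vstack([top, bottom])
    return H

print(rationalized_haar(4))  # [[1 1 1 1], [1 1 -1 -1], [1 -1 0 0], [0 0 1 -1]]
```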

Keywords: orthogonal functions, rationalized Haar transforms, control system observer, algebraic method

Procedia PDF Downloads 355
1349 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

The environmental pollution by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. In order to determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing the identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas, and such regulation must be defined from the known relevance and trends of the pollution. This work discusses the assessment of contamination trends in a 700 km² oceanic area, accounting for contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of collected samples. The methodology developed consists of objectively identifying meaningful variations of microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to unequivocally conclude that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major polymer type. The comparison of contamination levels was performed for a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
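The year-to-year comparison can be illustrated with a small Monte Carlo sketch in Python; the mean concentrations and uncertainty magnitudes below are hypothetical placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean concentrations (particles/kg dry sediment) and relative uncertainties
mean_2018, mean_2019 = 120.0, 135.0
u_sampling, u_analysis = 0.25, 0.10  # assumed relative standard uncertainties

n = 100_000
c2018 = mean_2018 * rng.normal(1, u_sampling, n) * rng.normal(1, u_analysis, n)
c2019 = mean_2019 * rng.normal(1, u_sampling, n) * rng.normal(1, u_analysis, n)

lo, hi = np.percentile(c2019 - c2018, [0.5, 99.5])  # central 99% interval
print(f"99% interval for the 2019-2018 difference: [{lo:.1f}, {hi:.1f}]")
print("significant" if lo > 0 or hi < 0 else "no significant change at the 99% level")
```

If the 99% interval for the difference straddles zero, as in the study's conclusion, the contamination level cannot be said to have changed between the two years.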

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty

Procedia PDF Downloads 75
1348 Agricultural Organized Areas Approach for Resilience to Droughts, Nutrient Cycle and Rural and Wild Fires

Authors: Diogo Pereira, Maria Moura, Joana Campos, João Nunes

Abstract:

As the Ukraine war highlights the European Economic Area's vulnerability and external dependence on feed and food, agriculture gains significant importance. Transformative change is necessary to reach a sustainable and resilient agricultural sector. Agriculture is an important driver of the bioeconomy, of the equilibrium and survival of society, and of resilience to rural fires. The pressure of (1) water stress, (2) the nutrient cycle, and (3) socio-demographic evolution, with 70% of the population moving towards urban systems and the rural population aging, combined with climate change, exacerbates the problem and paradigm of rural and wildfires, especially in Portugal. The Portuguese territory is characterized by (1) 28% marginal land, (2) soil quality inappropriate for agricultural activity over 70% of the territory, (3) micro smallholdings of less than 1 ha per proprietor, with mainly familiar and traditional agriculture in the North and Centre regions, and (4) the areas most vulnerable to rural fires lying in these same regions. The most important difference between the South and the North and Centre of Portugal with respect to rural and wildfires is agricultural activity, which is at a higher level in the South. In Portugal, rural and wildfires represent an average annual economic loss of around 800 to 1000 million euros. The WinBio model is an agri-environmental metabolism design with the capacity to create a new agri-food metabolism through Agricultural Organized Areas, a private-public partnership. This partnership seeks to grow agricultural activity in regions with (1) abandoned territory, (2) micro smallholdings, (3) water and nutrient management necessities, and (4) low agri-food literacy. It also aims to support the planning and monitoring of resource-use efficiency and sustainability of territories, using agriculture as a barrier against rural and wildfires in order to protect the rural population.

Keywords: agricultural organized areas, residues, climate change, drought, nutrients, rural and wild fires

Procedia PDF Downloads 61
1347 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices, and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while preserving the solution's matrix structure, and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method achieves exact detection of outliers when there is little or no noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of outliers; the filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered "clean" data from the proposed method can give much better parameter estimates than those based on the raw data.
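The abstract does not spell out the authors' SDP solver; as a hedged illustration of the same low-rank-plus-sparse split, the classical principal component pursuit iteration (an inexact augmented Lagrangian scheme, a standard method rather than the paper's algorithm) separates outliers S from the low-rank part L of a data matrix M:

```python
import numpy as np

def shrink(X, tau):
    """Element-wise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, tol=1e-7, max_iter=500):
    """Split M = L + S with L low-rank and S sparse (the outliers)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    L, S, Y = (np.zeros_like(M) for _ in range(3))
    for _ in range(max_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt  # singular-value thresholding
        S = shrink(M - L + Y / mu, lam / mu)  # sparse outlier update
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R) / np.linalg.norm(M) < tol:
            break
    return L, S
```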

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 296
1346 Topology Optimization of the Interior Structures of Beams under Various Load and Support Conditions with Solid Isotropic Material with Penalization Method

Authors: Omer Oral, Y. Emre Yilmaz

Abstract:

Topology optimization is an approach that optimizes the material distribution within a given design space, for given loads and boundary conditions, so as to meet prescribed performance goals. It uses various restrictions such as boundary conditions, sets of loads, and constraints to maximize the performance of the system. It differs from size and shape optimization methods but retains some features of both. In this study, the interior structures of parts were optimized using the SIMP (Solid Isotropic Material with Penalization) method. The volume of the part was a preassigned parameter, and minimum deflection was the objective function. The basic idea behind the theory is reviewed, and different methods are discussed. The Rhinoceros 3D design tool was used with the Grasshopper and TopOpt plugins to create and optimize parts. A Grasshopper algorithm was designed and tested for different beams, sets of arbitrarily located forces, and support types such as pinned, fixed, etc. Finally, 2.5D shapes were obtained and verified by observing the changes in the density function.
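In SIMP, each element carries a density x ∈ [0, 1] and a penalized Young's modulus E(x) = Emin + xᵖ(E0 − Emin), so intermediate densities are driven towards 0 or 1. A minimal sketch of the optimality-criteria density update commonly paired with SIMP is given below; it illustrates the idea and is not the internals of the TopOpt plugin.

```python
import numpy as np

def penalized_modulus(x, E0=1.0, Emin=1e-9, p=3):
    """SIMP interpolation of element stiffness."""
    return Emin + x ** p * (E0 - Emin)

def oc_update(x, dc, dv, volfrac, move=0.2):
    """Optimality-criteria update: bisect the volume-constraint multiplier.
    x: densities; dc: compliance sensitivities (<= 0); dv: volume sensitivities."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        xnew = x * np.sqrt(np.maximum(-dc / (dv * lmid), 1e-10))
        xnew = np.clip(xnew, np.maximum(x - move, 0.0), np.minimum(x + move, 1.0))
        if xnew.mean() > volfrac:
            l1 = lmid  # too much material: raise the multiplier
        else:
            l2 = lmid
    return xnew
```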

Keywords: Grasshopper, lattice structure, microstructures, Rhinoceros, solid isotropic material with penalization method, TopOpt, topology optimization

Procedia PDF Downloads 117
1345 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of the present work is to build a model, based on tissue characterization, that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with the optimal number of neurons in its hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each ROI, six distinct sets of texture features are extracted, namely first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. We show that, with the injection of contrast liquid and the analysis of more phases, the most relevant features in each region change. Our results show that, for detecting HCC tumors, phase 3 is the best for most of the features fed to the classification algorithm. The detection rate between the two classes according to our method, using first-order histogram parameters, reaches accuracies of 85% in phase 1, 95% in phase 2, and 95% in phase 3.
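One common way to pick the optimal hidden-neuron count is cross-validation over candidate sizes; the scikit-learn sketch below is an assumption for illustration, since the abstract does not name the toolchain used.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def best_hidden_size(X, y, candidates=range(2, 31, 2)):
    """X: (n_rois, 270) texture-feature matrix; y: pathological vs. non-pathological labels."""
    scores = {}
    for h in candidates:
        clf = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
        scores[h] = cross_val_score(clf, X, y, cv=5).mean()
    return max(scores, key=scores.get), scores
```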

Keywords: multi-phasic liver images, texture analysis, neural network, hidden layer

Procedia PDF Downloads 250
1344 Image Processing Approach for Detection of Three-Dimensional Tree-Rings from X-Ray Computed Tomography

Authors: Jorge Martinez-Garcia, Ingrid Stelzner, Joerg Stelzner, Damian Gwerder, Philipp Schuetz

Abstract:

Tree-ring analysis is an important part of the quality assessment and dating of (archaeological) wood samples. It provides quantitative data about the whole anatomical ring structure, which can be used, for example, to measure the impact of a fluctuating environment on tree growth, for the dendrochronological analysis of archaeological wooden artefacts, and to estimate the mechanical properties of wood. Despite advances in computer vision and edge recognition algorithms, detection and counting of annual rings are still limited to 2D datasets and performed in most cases manually, which is a time-consuming, tedious task that depends strongly on the operator's experience. This work presents an image processing approach to detect the whole 3D tree-ring structure directly from X-ray computed tomography imaging data. The approach relies on a modified Canny edge detection algorithm, which captures fully connected tree-ring edges throughout the measured image stack, and is validated on X-ray computed tomography data taken from six wood species.
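The paper's modification links ring edges across the whole stack; as a hedged baseline, a plain slice-wise Canny pass over a CT volume (OpenCV assumed, slices as 8-bit grayscale) would look like this:

```python
import numpy as np
import cv2  # OpenCV

def ring_edges(stack, low=50, high=150):
    """stack: (n_slices, H, W) array of 8-bit grayscale CT slices."""
    edges = np.zeros_like(stack)
    for i, sl in enumerate(stack):
        blurred = cv2.GaussianBlur(sl, (5, 5), 0)  # suppress wood-grain noise
        edges[i] = cv2.Canny(blurred, low, high)
    return edges
```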

Keywords: ring recognition, edge detection, X-ray computed tomography, dendrochronology

Procedia PDF Downloads 204
1343 Attendance Management System Implementation Using Face Recognition

Authors: Zainab S. Abdullahi, Zakariyya H. Abdullahi, Sahnun Dahiru

Abstract:

Student attendance is a very important aspect of school management records. In recent years, security systems have become among the most demanded systems in schools. Every institute has its own method of taking attendance; many schools in Nigeria use the old-fashioned way of writing the student's name and registration number on paper and submitting it to the lecturer at the end of the lecture, which is time-consuming and insecure, because some students can sign for their friends without the lecturer's knowledge. In this paper, we propose a system that takes attendance using face recognition. Other automatic methods are available for this purpose, i.e., biometric attendance, but they also waste time, because the students have to queue to put their thumbs on a scanner. In the proposed system, attendance is recorded by a camera mounted at the front of the classroom that captures images of the students; the faces in the image are detected, compared with the database, and attendance is marked. Principal component analysis is used to recognize the detected faces with a high accuracy rate. The paper reviews related work in the field of attendance systems, then describes the system architecture, the software algorithm, and the results.
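A minimal eigenface (PCA) recognizer of the kind described can be sketched with NumPy; the nearest-neighbour matching rule and all names below are illustrative assumptions.

```python
import numpy as np

def fit_eigenfaces(faces, k=20):
    """faces: (n_samples, h*w) flattened grayscale face crops; returns mean and top-k components."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]  # rows of Vt are the eigenfaces

def project(face, mean, components):
    return components @ (face - mean)

def identify(face, gallery, mean, components):
    """gallery: dict mapping student id -> projected enrolment vector."""
    q = project(face, mean, components)
    return min(gallery, key=lambda sid: np.linalg.norm(gallery[sid] - q))
```

Marking attendance then amounts to running face detection on each classroom frame, identifying each detected crop, and flagging the matched student ids as present.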

Keywords: attendance system, face detection, face recognition, PCA

Procedia PDF Downloads 345
1342 A Review of Hypnosis Uses for Anxiety and Phobias Treatment

Authors: Fleura Shkëmbi, Sevim Mustafa, Naim Fanaj

Abstract:

Hypnosis is a form of mind-body psychotherapy. A trained and certified hypnotist or hypnotherapist guides the patient into a state of intense focus and relaxation during the session by utilizing verbal cues, repetition, and imagery. In recent years, hypnotherapy has gained popularity in the treatment of a variety of disorders, including anxiety and specific phobias. The term "phobia" is commonly used to denote fear of a certain trigger. When faced with potentially hazardous situations, the brain naturally experiences dread. While a little dread here and there may keep us safe, phobias can drastically reduce our quality of life. In summary, persons who suffer from anxiety are considered to perceive particular environmental situations as dangerous, while those who do not suffer from anxiety do not. Hypnosis is essential in the treatment of anxiety disorders, as it can help patients minimize their anxiety symptoms. This broad concept has aided in the development of models and therapies for anxiety disorders such as generalized anxiety disorder, panic attacks, hypochondria, and obsessional disorders. Hypnotic techniques are thought to rely on focused attention and mental imagery, which is why they are associated with improved working memory and visuospatial abilities. In this sense, the purpose of this study is to determine how effectively specific therapeutic methods perform in treating persons with anxiety and phobias. In addition to cognitive-behavioural therapy and other therapies, the approaches emphasized the use of therapeutic hypnosis. This study looks at the use of hypnosis and related psychotherapy procedures in the treatment of anxiety disorders. Following a discussion of the evolution of hypnosis as a therapeutic tool, neurobiological research is used to demonstrate the influence of hypnosis on the change of perception in the brain. The use of hypnosis in the treatment of phobias, stressful situations, and post-traumatic stress disorder is examined, as well as similarities between the hypnotic state and dissociative reactions to trauma. Through an extensive literature evaluation, this study introduces hypnotherapy procedures that result in more successful anxiety and phobia treatment.

Keywords: anxiety, hypnosis, hypnotherapy, phobia, technique, state

Procedia PDF Downloads 104
1341 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are yet to be created. Research objectives: the aim is to generate the two latest Eras, with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially SMEs, plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all the supply chain factors (i.e., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: the data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic, as sketched below. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback given from the first round, and so on. Preliminary findings: both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy, among Basic SC, Lean, Agile, and Leagile SC, that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: the problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years. This research simplifies this by putting most definitions into a template and most models into the matrix of Era Six. This research is original in that the division of SC into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and heuristic learning in the AMM of Era Seven provide a synergy of tools that have not been combined before in the area of SC. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.
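A common Fuzzy-Delphi convention, used here as an assumption since the abstract does not give the aggregation rule, represents each expert rating as a triangular fuzzy number (l, m, u) and aggregates a round as (min l, geometric mean of m, max u) before defuzzifying:

```python
import math

def aggregate(ratings):
    """ratings: list of expert ratings as triangular fuzzy numbers (l, m, u)."""
    ls, ms, us = zip(*ratings)
    return (min(ls), math.prod(ms) ** (1.0 / len(ms)), max(us))

def defuzzify(tfn):
    """Simple centroid; a statement reaches consensus if this exceeds a chosen threshold."""
    l, m, u = tfn
    return (l + m + u) / 3.0

round1 = [(0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.4, 0.6, 0.8)]
print(defuzzify(aggregate(round1)))  # feed the result back to experts for the next round
```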

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 379
1340 Evaluation and Fault Classification for Healthcare Robot during Sit-To-Stand Performance through Center of Pressure

Authors: Tianyi Wang, Hieyong Jeong, An Guo, Yuko Ohno

Abstract:

Healthcare robots for assisting sit-to-stand (STS) performance have attracted considerable research interest. To the authors' best knowledge, however, how to evaluate such healthcare robots remains an open question. A robot should be labeled as faulty if users find the STS demanding while being assisted by it. In this research, we aim to propose a method to evaluate a sit-to-stand assist robot through the center of pressure (CoP), and then to classify different STS performances. Experiments were executed five times with ten healthy subjects under four conditions: two self-performed STSs with chair heights of 62 cm and 43 cm, and two robot-assisted STSs with a chair height of 43 cm and robot end-effector speeds of 2 s and 5 s. CoP was measured using a Wii Balance Board (WBB). Bayesian classification was utilized to classify STS performance. The results showed that faults occurred when the chair height was decreased and the robot assist speed was slowed. The proposed method for fault classification showed a high probability of distinguishing fault classes from the others. It was concluded that faults in STS assist robots can be detected by inspecting the center of pressure and classified through the proposed classification algorithm.
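A hedged sketch of the classification step: summary features are extracted from each trial's CoP trajectory and fed to a Gaussian naive Bayes classifier. The specific features below are assumptions; the abstract only states that Bayesian classification was used.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def cop_features(cop_xy):
    """cop_xy: (T, 2) CoP trajectory from the balance board; sway path length and spread."""
    path = np.linalg.norm(np.diff(cop_xy, axis=0), axis=1).sum()
    return np.r_[path, cop_xy.std(axis=0)]

def train_classifier(trials, labels):
    """trials: list of CoP trajectories; labels: experimental condition or fault class."""
    X = np.array([cop_features(t) for t in trials])
    return GaussianNB().fit(X, labels)  # predict_proba gives the class probabilities
```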

Keywords: center of pressure, fault classification, healthcare robot, sit-to-stand movement

Procedia PDF Downloads 187
1339 Bayesian Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

This paper describes the Bayesian estimator, using Markov chain Monte Carlo and Lindley's approximation, and the maximum likelihood estimator of the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although it can be solved by numerical methods. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be obtained analytically. Hence the Markov chain Monte Carlo method and Lindley's approximation are used: the full conditional distributions for the parameters of the Weibull distribution are obtained via Gibbs sampling and the Metropolis-Hastings algorithm (MH), followed by estimation of the survival and hazard functions. The methods are compared to their maximum likelihood counterparts with respect to the mean square error (MSE) and absolute bias, to determine the better method for the scale and shape parameters and the survival and hazard functions.
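Under Type-I censoring at time τ, the Weibull log-likelihood sums the density over observed failures and the survival term over censored units. A minimal Metropolis-Hastings sampler for the shape k and scale λ, assuming flat priors for illustration (the paper's priors are not given in the abstract), can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(k, lam, t_obs, n_cens, tau):
    """Log-posterior (flat priors assumed) for Weibull(k, lam), Type-I censoring at tau."""
    if k <= 0 or lam <= 0:
        return -np.inf
    ll = np.sum(np.log(k) - k * np.log(lam) + (k - 1) * np.log(t_obs)
                - (t_obs / lam) ** k)
    return ll - n_cens * (tau / lam) ** k

def mh_sample(t_obs, n_cens, tau, n_iter=20_000, step=0.1):
    k, lam = 1.0, float(np.mean(t_obs))
    lp = log_post(k, lam, t_obs, n_cens, tau)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        k_new, lam_new = k + step * rng.normal(), lam * (1 + step * rng.normal())
        lp_new = log_post(k_new, lam_new, t_obs, n_cens, tau)
        if np.log(rng.uniform()) < lp_new - lp:  # accept/reject
            k, lam, lp = k_new, lam_new, lp_new
        chain[i] = k, lam
    return chain  # S(t) = exp(-(t/lam)**k) and h(t) = (k/lam)*(t/lam)**(k-1) per draw
```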

Keywords: weibull distribution, bayesian method, markov chain monte carlo, survival and hazard functions

Procedia PDF Downloads 464
1338 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Bio-medical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Monitoring the glucose level in our body regularly helps us identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by a Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms such as multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
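A hedged sketch of the prediction pipeline (OpenCV and scikit-learn assumed; the paper's exact feature set and tooling are not given): mean colour-channel values are extracted from each photo and regressed against the known concentrations.

```python
import cv2
import numpy as np
from sklearn.linear_model import LinearRegression

def color_features(img_bgr):
    """Mean channel values in BGR and HSV for one photo of the glucose solution."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return np.r_[img_bgr.reshape(-1, 3).mean(axis=0), hsv.reshape(-1, 3).mean(axis=0)]

def fit_model(images, concentrations):
    """images: calibration photos; concentrations: known glucose values (e.g., mg/dL)."""
    X = np.array([color_features(im) for im in images])
    return LinearRegression().fit(X, concentrations)

# model.predict(color_features(new_image).reshape(1, -1)) -> unknown concentration
```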

Keywords: artificial intelligence, glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 183
1337 Vibration-Based Data-Driven Model for Road Health Monitoring

Authors: Guru Prakash, Revanth Dugalam

Abstract:

A road’s condition often deteriorates due to harsh loading, such as overloading by trucks, and severe environmental conditions, such as heavy rain, snow load, and cyclic loading. In the absence of proper maintenance planning, this results in potholes, wide cracks, bumps, and increased road roughness. In this paper, a data-driven model is developed to detect these damages using vibration and image signals. The key idea of the proposed methodology is that a road anomaly manifests itself in these signals and can be detected by training a machine learning algorithm. The use of various machine learning techniques, such as the support vector machine and random forest methods, will be investigated. The proposed model will first be trained and tested with artificially simulated data, and the model architecture will be finalized by comparing the accuracies of the various models. Once a model is fixed, a field study will be performed and data will be collected. The field data will be used to validate the proposed model and to predict the road’s future health condition. The proposed model will help to automate the road condition monitoring, repair cost estimation, and maintenance planning processes.
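A hedged sketch of the vibration branch: windowed features from a vertical-acceleration signal feed an SVM (scikit-learn assumed; the feature choice below is an assumption for illustration).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(acc, fs, win_s=1.0):
    """Split an acceleration signal into windows; RMS and peak per window."""
    n = int(fs * win_s)
    wins = acc[: len(acc) // n * n].reshape(-1, n)
    return np.c_[np.sqrt((wins ** 2).mean(axis=1)), np.abs(wins).max(axis=1)]

# X from window_features, y: 0 = smooth road, 1 = pothole/bump (simulated labels)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# model.fit(X_train, y_train); model.predict(window_features(field_signal, fs))
```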

Keywords: SVM, data-driven, road health monitoring, pot-hole

Procedia PDF Downloads 70
1336 Tectonic Complexity: Out-of-Sequence Thrusting in the Higher Himalaya of Jhakri-Sarahan region, Himachal Pradesh, India

Authors: Rajkumar Ghosh

Abstract:

The study focuses on the tectonics of out-of-sequence thrusting (OOST) in the NW region of the Himalaya, particularly in Himachal Pradesh. The research aims to identify the features and nature of OOST in the field, together with the associated rock types and lithological boundaries, in the NW Himalaya, Himachal Pradesh, India. The research employs fieldwork and micro-structure observations, correlations, and analyses to identify and analyze the OOST features and associated rock types. The study reveals the presence of three OOSTs, namely the Jhakri Thrust (JT), Sarahan Thrust (ST), and Chaura Thrust (CT), which consist of several branches, some of which are still active. The thrust system exhibits varying internal geometry, including box folds, boudins, scar folds, crenulation cleavages, kink folds, and tension gashes. The CT, which is concealed beneath the Jutogh Thrust sheet, represents a steepened downward thrust, while the JT dips west and verges south-westward. The research provides crucial information on the tectonics of OOST in the NW region of the Himalaya, particularly in Himachal Pradesh, which is essential for understanding the regional geological evolution and associated hazards. The data were collected through fieldwork and micro-structure observations, correlations, and analyses of rock samples, and were analyzed using tectonic and geochronological techniques to identify the nature and characteristics of the OOSTs. The research addressed the question of identifying Higher Himalayan OOSTs in the field in the NW Himalaya, Himachal Pradesh, India, and the associated rock types and lithological boundaries. The study concludes that there is minimal documentation, and a lack of suitable exposure of rocks, from which to generalize the features of OOST in the field in the NW Higher Himalaya, Himachal Pradesh. The study recommends more extensive mapping and fieldwork to improve the understanding of OOST in the region.

Keywords: out-of-sequence thrust (OOST), main central thrust (MCT), jhakri thrust (JT), sarahan thrust (ST), chaura thrust (CT), higher himalaya (HH)

Procedia PDF Downloads 75
1335 Design of Geochemical Maps of Industrial City Using Gradient Boosting and Geographic Information System

Authors: Ruslan Safarov, Zhanat Shomanova, Yuri Nossenko, Zhandos Mussayev, Ayana Baltabek

Abstract:

Geochemical maps of the distribution of the polluting elements V, Cr, Mn, Co, Ni, Cu, Zn, Mo, Cd, and Pb over the territory of the city of Pavlodar (Kazakhstan), an industrial hub, were designed. Soil samples were taken from 100 locations, and elemental analysis was performed using XRF. The obtained data were used for training a computational model with a gradient boosting algorithm; the optimal parameters of the model, as well as the loss function, were selected. The computational model was used to predict the polluting element concentrations at 1000 evenly distributed points, and geochemical maps were created from the predicted data. Additionally, the total pollution index Zc was calculated for each of the 1000 points, and its spatial distribution was visualized using GIS (QGIS). It was found that most of the territory of Pavlodar (89.7%) belongs to the moderately hazardous category. The visualization of the obtained data allowed us to conclude that the main source of contamination is the industrial zones where the strategic metallurgical and refining plants are located.
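A hedged sketch of the interpolation step (the paper's keywords name CatBoost; the grid size and hyperparameters below are assumptions): fit on the 100 sampled locations, then predict on a regular grid of roughly 1000 points for mapping.

```python
import numpy as np
from catboost import CatBoostRegressor

def predict_grid(X, y, x_range, y_range, nx=40, ny=25):
    """X: (100, 2) sampling coordinates; y: one element's concentration from XRF."""
    model = CatBoostRegressor(loss_function="RMSE", verbose=False).fit(X, y)
    gx, gy = np.meshgrid(np.linspace(*x_range, nx), np.linspace(*y_range, ny))
    grid = np.c_[gx.ravel(), gy.ravel()]
    return grid, model.predict(grid)  # one model per element; export to QGIS for mapping
```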

Keywords: Pavlodar, geochemical map, gradient boosting, CatBoost, QGIS, spatial distribution, heavy metals

Procedia PDF Downloads 64
1334 The Cost of Non-Communicable Diseases in the European Union: A Projection towards the Future

Authors: Desiree Vandenberghe, Johan Albrecht

Abstract:

Non-communicable diseases (NCDs) are responsible for the vast majority of deaths in the European Union (EU) and represent a large share of total health care spending. A future increase in this health and financial burden is likely to be driven by population ageing, lifestyle changes, and technological advances in medicine. Without adequate prevention measures, this burden can severely threaten population health and economic development. To tackle this challenge, a correct assessment of the current burden of NCDs is required, as well as a projection of potential increases in this burden. The contribution of this paper is to offer perspective on the evolution of the NCD burden towards the future and to give an indication of the potential of prevention policy. A non-homogeneous semi-Markov model for the EU was constructed, which allowed a projection of the cost burden for the four main NCDs (cancer, cardiovascular disease, chronic respiratory disease, and diabetes mellitus) towards 2030 and 2050. This simulation is based on multiple baseline scenarios that vary in demand and supply factors such as health status, population structure, and technological advances. Finally, in order to assess the potential of preventive measures to curb the cost explosion of NCDs, a simulation is executed which includes increased efforts for preventive health care. According to the Markov model, by 2030 and 2050, total costs (direct and indirect) in the EU could increase by 30.1% and 44.1% respectively, compared to 2015 levels. An ambitious prevention policy framework for NCDs will be required if the EU wants to meet this challenge of rising costs. To conclude, significant cost increases due to non-communicable diseases are likely to occur due to demographic and lifestyle changes. Nevertheless, an ambitious prevention programme throughout the EU can help make this cost burden manageable for future generations.
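A toy three-state Markov cost projection conveys the mechanics; all numbers below are hypothetical placeholders, and the paper's model is a far richer non-homogeneous semi-Markov model.

```python
import numpy as np

# States: healthy, NCD, dead -- hypothetical yearly transition probabilities
P = np.array([[0.94, 0.05, 0.01],
              [0.00, 0.92, 0.08],
              [0.00, 0.00, 1.00]])
cost = np.array([500.0, 8000.0, 0.0])  # hypothetical yearly cost per state (EUR)

pop = np.array([0.85, 0.15, 0.0])      # initial population shares (hypothetical)
totals = []
for year in range(2015, 2051):
    totals.append(pop @ cost)          # expected cost per capita this year
    pop = pop @ P                      # advance the cohort one year
print(f"cost index 2030 vs 2015: {totals[15] / totals[0]:.2f}")
```

A prevention scenario would be modelled by lowering the healthy-to-NCD transition probability and re-running the projection.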

Keywords: non-communicable diseases, preventive health care, health policy, Markov model, scenario analysis

Procedia PDF Downloads 124
1333 The Co-Simulation Interface SystemC/Matlab Applied in JPEG and SDR Application

Authors: Walid Hassairi, Moncef Bousselmi, Mohamed Abid

Abstract:

Functional verification is a major part of today's system design task. Several approaches are available for verification on a high abstraction level, where designs are often modeled using MATLAB/Simulink. However, the different approaches are a barrier to a unified verification flow. In this paper, we propose a co-simulation interface between SystemC and MATLAB/Simulink to enable functional verification of multi-abstraction-level designs. The resulting verification flow is tested on the JPEG compression algorithm. The required synchronization of both simulation environments, as well as the data type conversion, is solved using the proposed co-simulation flow. The JPEG encoder is divided into two parts: the DCT, implemented in SystemC, represents the hardware part; quantization and entropy encoding, implemented in MATLAB, represent the software part. For communication and synchronization between these two parts, we use an S-Function and the MATLAB engine in Simulink. On this premise, this study introduces a new SystemC hardware implementation of the DCT. We compare our co-simulation results with a pure software (SW/SW) implementation and observe a reduction in simulation time of 88.15% for JPEG, with a design efficiency of 90% for the SDR application.
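For reference, the hardware part computes the standard JPEG block transform: a level-shifted 8×8 type-II DCT. A Python/SciPy rendering of that operation (an illustration of the algorithm, not the SystemC module itself) is:

```python
import numpy as np
from scipy.fft import dctn

def dct_8x8_blocks(img):
    """Level-shift and apply an orthonormal 2D DCT to each 8x8 block of a grayscale image."""
    h, w = (d - d % 8 for d in img.shape)  # crop to a multiple of 8
    out = np.empty((h, w))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            out[i:i+8, j:j+8] = dctn(img[i:i+8, j:j+8] - 128.0, norm="ortho")
    return out  # quantization and entropy coding follow on the software side
```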

Keywords: hardware/software, co-design, co-simulation, systemc, matlab, s-function, communication, synchronization

Procedia PDF Downloads 385
1332 Teaching Audiovisual Translation (AVT): Linguistic and Technical Aspects of Different Modes of AVT

Authors: Juan-Pedro Rica-Peromingo

Abstract:

Teachers constantly need to innovate and redefine the materials for their lectures, especially in areas such as Language for Specific Purposes (LSP) and Translation Studies (TS). It is therefore essential for lecturers to be technically skilled enough to handle the never-ending evolution of software and technology, which are necessary elements in certain courses at university level. This need becomes even more evident in Audiovisual Translation (AVT) modules and courses. AVT has undergone considerable growth in the area of the teaching and learning of languages for academic purposes: we have witnessed the development of a considerable number of master's and postgraduate courses where AVT becomes a tool for L2 learning, and the teaching and learning of different AVT modes are components of undergraduate and postgraduate courses. Universities in which AVT is offered as part of their teaching programme or training make use of professional or free software programs. This paper presents an approach to AVT within a specific university context, in which technology is used by means of professional and non-professional software. Students take an AVT subject as part of their English Linguistics Master's Degree at the Complutense University (UCM), in which they use professional (Spot) and non-professional (Subtitle Workshop, Aegisub, Windows Movie Maker) software packages. The students are encouraged to develop their tasks and projects simulating authentic professional experiences and contexts in the different AVT modes: subtitling for hearing audiences and for the deaf and hard of hearing, audio description, and dubbing. Selected scenes from TV series such as The X-Files, Gossip Girl, and The IT Crowd; extracts from the movies Finding Nemo, Good Will Hunting, School of Rock, Harry Potter, and Up; and short movies (Vincent) were used. Hence, the complexity of the audiovisual materials used in class, as well as of the activities for their projects, was graded. The assessment of the diverse tasks carried out by the students is expected to provide insights into the best way to improve their linguistic accuracy and oral and written production with the use of different AVT modes in a very specific ESP university context.

Keywords: ESP, audiovisual translation, technology, university teaching, teaching

Procedia PDF Downloads 505
1331 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges: delayed BOD5 results from the lab take 7 to 8 analysis days, hindering a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals. Reducing the BOD turnaround time from days to hours is our quest. This work presents a solution based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform, to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process so as to catch anomalies sooner. In our system for continuous monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm analyzes the data using ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for the wastewater sample (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical ODO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data are transmitted to and from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, the biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically, subject to its initial conditions: saturated DO and zero initial products of the kinetic oxidation process (CO₂ = H₂O = 0). The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD in wastewater effluent is being conducted by deploying the IoT application on a large wastewater purification system located in S. Paulo, Brazil.
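The four-ODE kinetic model is not given in full in the abstract; as a hedged illustration of the least-squares estimation step it mentions, the classical first-order BOD exertion curve can be fitted to the oxygen depletion measured by the optical sensor (the data points below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def bod_curve(t, L0, k):
    """Classical first-order BOD exertion: oxygen consumed by time t (hours)."""
    return L0 * (1.0 - np.exp(-k * t))

t_meas = np.array([2.0, 6.0, 12.0, 24.0, 36.0, 48.0])   # hours (hypothetical)
do_consumed = np.array([0.8, 2.1, 3.6, 5.2, 5.9, 6.3])  # mg/L (hypothetical)

(L0, k), _ = curve_fit(bod_curve, t_meas, do_consumed, p0=(7.0, 0.05))
print(f"ultimate BOD ~ {L0:.1f} mg/L, rate k ~ {k:.3f} 1/h")  # hours, not 5 days
```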

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 64
1330 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks

Authors: Sami Baraketi, Jean Marie Garcia, Olivier Brun

Abstract:

Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called “lightpaths”, are routed throughout the network. This requires the use of efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators, since it leads to misuse of the wavelength spectrum and hence to the rejection of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation relies on a multilayer approach where the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-processing procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
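The authors' greedy heuristic is not detailed in the abstract; a common baseline it can be compared against is shortest-path routing with first-fit wavelength assignment under the wavelength-continuity constraint (networkx assumed):

```python
import networkx as nx

def first_fit_rwa(G, demands, n_wavelengths):
    """Route each (src, dst) demand on its shortest path; assign the lowest free wavelength."""
    used = {frozenset(e): set() for e in G.edges}  # wavelengths occupied per link
    assignment = {}
    for src, dst in demands:
        path = nx.shortest_path(G, src, dst)
        links = [frozenset(l) for l in zip(path, path[1:])]
        for w in range(n_wavelengths):             # continuity: same w on every link
            if all(w not in used[l] for l in links):
                for l in links:
                    used[l].add(w)
                assignment[(src, dst)] = (path, w)
                break
        else:
            assignment[(src, dst)] = None          # blocked request
    return assignment
```

First-fit packs the lowest-indexed wavelengths, which is one simple way of limiting fragmentation of the spectrum.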

Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic

Procedia PDF Downloads 510
1329 Geopolymerization Methods for Clay Soils Treatment

Authors: Baba Hassane Ahmed Hisseini, Abdelkrim Bennabi, Rabah Hamzaoui, Lamis Makki, Gaetan Blanck

Abstract:

Most clay soils are known as problematic soils due to their water content, which varies greatly over time; they are subject to shrinkage and swelling, thus causing stability problems for civil engineering structures. They are often excavated and placed in storage areas, giving rise to the opening of new quarries. This practice has become obsolete today because, to protect the environment, we are led to think differently, opening the way to new research on improving the performance of this type of clay soil so that it can be reused in construction. The solidification and stabilization technique is used to improve the properties of poor-quality soils and transform them into materials with performance suitable for new use in civil engineering, rather than excavating them and storing them in a discharge area. In our case, the geopolymerization method is used for poor clay soils classified as high-plasticity soils of class A4 according to the French standard NF P11-300, for which classical treatment methods with cement or lime are not efficient. Our work concerns a clay-soil treatment study using raw materials as additives for solidification and stabilization. The geopolymers are synthesized from aluminosilicate materials like fly ash, metakaolin, or blast furnace slag and activated by an alkaline solution based on sodium hydroxide (NaOH), sodium silicate (Na2SiO3), or a mixture of both. In this study, we present the evolution of the mechanical properties of the A4-type clay soil with geopolymerization treatment. Various mix designs of aluminosilicate materials and alkaline solutions were tested at different percentages and different curing times of 1, 7, and 28 days. The compressive strength of the untreated clayey soil could be as much as tripled, and this improvement in compressive strength is associated with a geopolymerization mechanism. The highest compressive strength was found with metakaolin at 28 days.

Keywords: treatment and valorization of clay-soil, solidification and stabilization, alkali-activation of co-product, geopolymerization

Procedia PDF Downloads 147
1328 Analysis of Three-Dimensional Longitudinal Rolls Induced by Double Diffusive Poiseuille-Rayleigh-Benard Flows in Rectangular Channels

Authors: O. Rahli, N. Mimouni, R. Bennacer, K. Bouhadef

Abstract:

This numerical study investigates the appearance of travelling waves and the behavior of the Poiseuille-Rayleigh-Benard (PRB) flow induced in 3D thermosolutal mixed convection (TSMC) in horizontal rectangular channels. The governing equations are discretized using a control-volume method with the third-order QUICK scheme for approximating the advection terms. The SIMPLER algorithm is used to handle the coupling between the momentum and continuity equations. To avoid excessively high computation times, the full approximation storage (FAS) full multigrid (FMG) method is used to solve the problem. For a broad range of dimensionless controlling parameters, the contribution of this work is to analyze the flow regimes of the steady longitudinal thermoconvective rolls (noted R//) for both heat and mass transfer (TSMC). The transition from opposing volume forces to cooperating ones considerably affects the birth and development of the longitudinal rolls. The heat and mass transfer distributions are also examined.
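For reference, the QUICK scheme estimates a control-volume face value from two upstream nodes and one downstream node; on a uniform grid with positive face velocity it reduces to a one-line formula (a textbook sketch, not the authors' full discretization):

```python
def quick_face(phi_U, phi_C, phi_D):
    """Third-order QUICK face value, flow from node U through C towards D:
    phi_f = 6/8*phi_C + 3/8*phi_D - 1/8*phi_U."""
    return 0.75 * phi_C + 0.375 * phi_D - 0.125 * phi_U
```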

Keywords: heat and mass transfer, mixed convection, poiseuille-rayleigh-benard flow, rectangular duct

Procedia PDF Downloads 288
1327 A Brave New World of Privacy: Empirical Insights into the Metaverse’s Personalization Dynamics

Authors: Cheng Xu

Abstract:

As the metaverse emerges as a dynamic virtual simulacrum of reality, its implications for user privacy have become a focal point of interest. While previous discussions have ventured into metaverse privacy dynamics, a glaring empirical gap persists, especially concerning the effects of personalization in the context of news recommendation services. This study stands at the forefront of addressing this void, meticulously examining how users' privacy concerns shift within the metaverse's personalization context. In a pre-registered randomized controlled experiment, participants engaged in a personalization task on both a metaverse platform and a traditional online platform. Upon completion of this task, a comprehensive news recommendation service provider offered personalized news recommendations to the users. Our empirical findings reveal that the metaverse inherently amplifies privacy concerns compared to traditional settings. However, these concerns are notably mitigated when users have a say in shaping the algorithms that drive the recommendations. This pioneering research not only fills a significant knowledge gap but also offers crucial insights for metaverse developers and policymakers, emphasizing the nuanced role of user input in shaping algorithm-driven privacy perceptions.

Keywords: metaverse, privacy concerns, personalization, digital interaction, algorithmic recommendations

Procedia PDF Downloads 102
1326 A Particle Swarm Optimal Control Method for DC Motor by Considering Energy Consumption

Authors: Yingjie Zhang, Ming Li, Ying Zhang, Jing Zhang, Zuolei Hu

Abstract:

In the actual start-up process of DC motors, the DC drive system often faces a conflict between energy consumption and acceleration performance. To resolve this conflict, this paper proposes a comprehensive performance index in which an energy consumption index is added to the classical control performance index for the DC motor starting process. Taking the comprehensive performance index as the cost function, a particle swarm optimization algorithm is designed to optimize the comprehensive performance. Simulations of the optimization of the comprehensive performance of the DC motor were then conducted, with the weight coefficient of the energy consumption index properly designed. The simulation results show that, as the weight of energy consumption increases, the energy efficiency is significantly improved at the expense of a slight sacrifice in speed-of-response indicators under the comprehensive performance index method. Compared with a traditional proportional-integral-derivative controller, the energy efficiency was increased from 63.18% to 68.48% while the response time was simultaneously reduced from 0.2875 s to 0.1736 s, demonstrating the energy saving.
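A minimal particle swarm optimizer of the kind used for such gain tuning is sketched below; the cost-function stub and all parameter values are assumptions, since the abstract does not list them.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=10.0):
    """Standard global-best PSO minimizing `cost` over the box [lo, hi]^dim."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# cost(gains) would simulate the DC-motor start-up and return the comprehensive index,
# e.g. J = w1 * tracking_error_integral + w2 * energy_consumed.
```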

Keywords: comprehensive performance index, energy consumption, acceleration performance, particle swarm optimal control

Procedia PDF Downloads 138
1325 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study

Authors: Si Mon Kueh, Tom J. Kazmierski

Abstract:

About 1% of the world population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched that use artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron with variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment that suits the needs of a single epileptic patient in his or her daily activities, to predict the occurrence of impending tonic-clonic seizures.
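A software model of the bit-serial multiply-accumulate at the heart of such an NPE: one bit of the input is processed per clock cycle with a shift-and-add partial product (an illustration of the principle, not the authors' FPGA netlist).

```python
def bit_serial_mac(a, weight, acc=0, n_bits=8):
    """Accumulate a * weight into acc, consuming one bit of `a` per simulated cycle."""
    for i in range(n_bits):      # one clock cycle per input bit
        if (a >> i) & 1:
            acc += weight << i   # shifted partial product
    return acc

# neuron output = activation(sum of bit_serial_mac(x_i, w_i) over all inputs)
print(bit_serial_mac(13, 11))  # 143
```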

Keywords: Artificial Neural Networks (ANN), bit-serial neural processor, FPGA, Neural Processing Element (NPE)

Procedia PDF Downloads 302
1324 The Moderating Role of Perceived University Environment in the Formation of Entrepreneurial Intention among Creative Industries Students

Authors: Patrick Ebong Ebewo

Abstract:

The trend of high unemployment levels globally is a growing concern, and suggests that university students, especially those studying the creative industries, are likely to face unemployment upon completion of their studies. Therefore, universities' efforts in fostering entrepreneurial knowledge are equally important to the development of students' soft skills. The purpose of this paper is to assess the significance of the perceived university environment and perceived educational support in influencing university students' intentions to start their own businesses in the future, thus attempting to answer the question: 'How does the perceived university environment affect students' attitudes towards entrepreneurship as a career option, perceived entrepreneurial abilities, subjective norms, and entrepreneurial intentions?' The study is based on the Theory of Planned Behaviour model, adapted from previous studies and empirically tested on graduates of the Tshwane University of Technology. A sample of 150 Arts and Design graduates took part in the study, and the data collected were analysed using structural equation modelling (SEM). Our findings seem to suggest an indirect impact of the perceived university environment on entrepreneurial intention through perceived environmental support and perceived entrepreneurial abilities; thus, any increase in the perceived university environment might influence students to become entrepreneurs. Based on these results, it is recommended that: (a) Tshwane University of Technology and other universities of technology should establish an 'Entrepreneurship Internship Programme' as a tool for stimulating work-integrated learning; post-graduation intervention could be implemented through the development of a 'Graduate Entrepreneurship Programme', which should be embedded in the Bachelor of Technology (B-Tech, now Advanced Diploma) and postgraduate courses; and (b) policymakers should consider the development of a coherent national policy framework that addresses entrepreneurship for the arts/creative industries sector. This would create an enabling environment for the evolution of higher education institutions from merely teaching, learning, and research to becoming drivers of creative entrepreneurship.

Keywords: business venture, entrepreneurship education, entrepreneurial intent, university environment

Procedia PDF Downloads 321
1323 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the layer of tumor, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and the time-varying freezing fronts are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation, and the phase change problem is handled with the enthalpy method. The Levenberg-Marquardt method (LMM) combined with the Broyden method (BM) is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues, (b) of the initial guesses for the unknown thermal properties, (c) of the data capture frequency, and (d) of the noise in the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
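The inverse loop can be illustrated with SciPy's Levenberg-Marquardt least-squares driver; the direct model below is a placeholder analytic decay standing in for the Pennes/enthalpy solver, and all parameters and constants are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def direct_model(params, times):
    """Stand-in for the 1D Pennes + enthalpy solver: probe temperature vs. time."""
    q_met, w_blood = params
    return 37.0 - (0.1 * q_met + 5.0 * w_blood) * (1.0 - np.exp(-times / 60.0))

def residuals(params, times, measured):
    return direct_model(params, times) - measured

times = np.linspace(0.0, 600.0, 50)
measured = direct_model([2.0, 1.5], times) + np.random.default_rng(3).normal(0, 0.1, 50)

fit = least_squares(residuals, x0=[1.0, 1.0], args=(times, measured), method="lm")
print(fit.x)  # recovered metabolic-heat and perfusion parameters
```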

Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method

Procedia PDF Downloads 187
1322 An Experience on Urban Regeneration: A Case Study of Isfahan, Iran

Authors: Sedigheh Kalantari, Yaping Huang

Abstract:

The historic areas of cities have experienced different phases of transformation. At the beginning of the twentieth century, modernism and modern development changed the integrated pattern of the city, and historic urban quarters were regarded as subjects for comprehensive redevelopment. In this respect, the historic areas of Iranian cities have not been safe from these changes and have been affected by widespread transformations; in particular, since the Islamic Revolution era (1978), cities have traveled through an evolution in conservation and development policies and practices. Moreover, specific attention has been paid to the regeneration of historical urban centers in Iran since the 1990s, which reveals the great importance attached to the historical centers of cities. This paper examines an experience of urban regeneration in Iran through a case study. The study relies on multiple sources of evidence, which helps substantially improve the validity and reliability of the research. The empirical core of this research, therefore, rests in the process of urban revitalization of the old square in Isfahan. Isfahan is one of the oldest cities of Persia, and its historic area encompasses a large number of valuable buildings and monuments. One of the cultural and historical areas of Isfahan is Atiq Square (the Old Square). It has been the backbone node of the city, but over time it has been increasingly ignored and negatively transformed. The complex suffered from insufficiencies, especially with respect to social and spatial aspects; therefore, the reorganization of this complex as the main and most important urban center of Isfahan became inevitable. Thus, besides recalling the value of such historic-cultural heritage and reviewing its transformation, this paper focuses on an experience of an urban revitalization project in this heritage site. The outcome of this research shows that, situated in different socio-economic, political, and historical contexts, and faced with different urban regeneration issues, Iran has displayed significant differences in its way of pursuing urban regeneration.

Keywords: historic area, Iran, urban regeneration, revitalization

Procedia PDF Downloads 232
1321 Non-Invasive Imaging of Tissue Using Near Infrared Radiations

Authors: Ashwani Kumar Aggarwal

Abstract:

NIR light is non-ionizing and can pass easily through living tissues such as the breast without any harmful effects. Therefore, the use of NIR light for imaging biological tissue and quantifying its optical properties is a good choice over invasive methods. Optical tomography involves two steps. The first is the forward problem: finding the measurements of the light transmitted through the tissue from source to detector, given the spatial distribution of the absorption and scattering properties. The second is the reconstruction problem. In X-ray tomography, there are standard reconstruction methods, such as filtered back projection and the algebraic reconstruction methods, but these cannot be applied as such in optical tomography due to the highly scattering nature of biological tissue. A hybrid algorithm for reconstruction has been implemented in this work which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function; this blurred reconstructed image has been enhanced using a digital filter which is optimal in the mean square sense.
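A toy Monte Carlo photon walk conveys why straight-ray back projection fails in tissue: photons take exponentially distributed steps and scatter many times before exiting. This is a 2D sketch with isotropic scattering assumed; real codes use anisotropic phase functions and 3D geometry.

```python
import numpy as np

rng = np.random.default_rng(7)

def transmittance(mu_a=0.1, mu_s=10.0, slab=20.0, n_photons=5_000):
    """Fraction of photon weight crossing a slab of thickness `slab` (mm)."""
    mu_t = mu_a + mu_s
    hits = 0.0
    for _ in range(n_photons):
        x, y, theta, wgt = 0.0, 0.0, 0.0, 1.0
        while wgt > 1e-4:                       # roulette threshold
            step = rng.exponential(1.0 / mu_t)  # free path ~ Exp(mu_t)
            x += step * np.cos(theta)
            y += step * np.sin(theta)
            if x >= slab:                       # reached the detector side
                hits += wgt
                break
            wgt *= mu_s / mu_t                  # implicit absorption weighting
            theta = rng.uniform(0.0, 2.0 * np.pi)
    return hits / n_photons

print(f"diffuse transmittance ~ {transmittance():.4f}")
```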

Keywords: least-squares optimization, filtering, tomography, laser interaction, light scattering

Procedia PDF Downloads 302