Search results for: edge detection algorithm
1511 Saudi Arabia Border Security Informatics: Challenges of a Harsh Environment
Authors: Syed Ahsan, Saleh Alshomrani, Ishtiaq Rasool, Ali Hassan
Abstract:
In this oral presentation, we will provide an overview of the technical and semantic architecture of a desert border security and critical infrastructure protection system. Modern border security systems are designed to reduce dependence on, and intervention by, human operators. To achieve this, different types of sensors are used along with video surveillance technologies. Application of these technologies in the harsh desert environment of Saudi Arabia poses unique challenges. Environmental and geographical factors, including high temperatures, desert storms, temperature variations and remoteness, adversely affect the reliability of surveillance systems. To successfully implement a reliable, effective system in a harsh desert environment, the following must be achieved: i) selection of technology, including sensors, video cameras, and communication infrastructure, that suits desert environments; ii) reduced power consumption and efficient usage of equipment to increase its battery life; iii) a reliable and robust communication network with efficient usage of bandwidth. Also, to reduce the expert bottleneck, an ontology-based intelligent information system needs to be developed. Domain knowledge unique and peculiar to Saudi Arabia needs to be formalized to develop an expert system that can detect abnormal activities and any intrusion.
Keywords: border security, sensors, abnormal activity detection, ontologies
Procedia PDF Downloads 481
1510 Seismotectonics and Seismology of the North of Algeria
Authors: Djeddi Mabrouk
Abstract:
The slow convergence between the African and Eurasian plates appears to be the main cause of active deformation across the whole of North Africa, which in Algeria takes the form of a large deformation zone confined to a fairly broad band bounded to the south by the Saharan Atlas and to the north by the Tell Atlas. The Maghrebian and Atlasic chains along North Africa are the consequence of this convergence. In the junction zone, a NW-SE compressive regime is observed, with a fold-and-fault structure and overthrusts. From a geological point of view, the northern part of Algeria is younger than the Saharan platform; it is unstable and constantly in movement, characterized by overturned folds, overthrusts and reverse faults, and perpetually undergoes complex vertical and horizontal movements. On the structural level, the north of Algeria is part of the peri-Mediterranean Alpine orogen, essentially of Tertiary age. It spreads from east to west of Algeria over 1200 km, and this orogen extends from east to west in a broad band of about 100 km. The Alpine chain is shaped by three domains: the Tell Atlas in the north, the High Plateaus in the middle and the Saharan Atlas in the south. In the extreme south lies the Saharan platform, made of Precambrian bedrock covered by practically undeformed Paleozoic strata. The Algerian north and the Saharan platform are separated by a major accident running about 2000 km from Agadir (Morocco) to Gabes (Tunisia). Seismic activity is localized essentially in a coastal band in the north of Algeria comprising the Tell Atlas, the High Plateaus and the Saharan Atlas. Earthquakes are limited to the first 20 km of the Earth's crust; they are caused by movements along NE-SW-oriented reverse faults or by sliding of tectonic plates. The central region is characterized by strong earthquake activity, located mainly in the Mitidja basin (of Neogene age). Its southern periphery (the Blidean Atlas) constitutes one of the most important seismogenic sources for the city of Algiers and the east (Boumerdes region). The north-east region is also part of the Tellian domain, but it is characterized by a deformation different from that of the other parts of northern Algeria. The deformation is slow, with low to moderate seismic activity. Seismic activity there is related to slip along tectonic faults. The most pronounced event is that of 27 October 1985 (Constantine), of moment magnitude Mw = 5.9. The north-west region is quite active as well, with shallow seismic hypocenters that do not exceed 20 km. Seismicity is concentrated mainly in a narrow strip along the edges of the Quaternary and Neogene intramontane basins along the coast. The most violent earthquakes in this region are the Oran earthquake of 1790 and the Orléansville (El Asnam) earthquakes of 1954 and 1980.
Keywords: Alpine chain, seismicity of northern Algeria, earthquakes in Algeria, geophysics, Earth
Procedia PDF Downloads 408
1509 Water Body Detection and Estimation from Landsat Satellite Images Using Deep Learning
Authors: M. Devaki, K. B. Jayanthi
Abstract:
The identification of water bodies from satellite images has recently received a great deal of attention. Different methods have been developed to distinguish water bodies from various satellite images that vary in terms of time and space. Urban water body identification issues manifest in numerous applications that demand a great deal of certainty. There has been a sharp rise in the usage of satellite images to map natural resources, including urban water bodies and forests, during the past several years. This is because water and forest resources depend on each other so heavily that ongoing monitoring of both is essential to their sustainable management. The relevant elements from satellite pictures have been chosen using a variety of techniques, including machine learning. Then, a convolutional neural network (CNN) architecture is created that can classify a superpixel from a complex metropolitan scene into one of two classes: containing water or not. The deep learning technique, CNN, has advanced tremendously in a variety of visual-related tasks. CNN can improve classification performance by modeling the spectral-spatial regularities of the input data and extracting deep features hierarchically from raw pictures. The extent of the water body is then calculated using the satellite image's resolution. Experimental results demonstrate that the suggested method outperformed conventional approaches in terms of water extraction accuracy from remote-sensing images, with an average overall accuracy of 97%.
Keywords: water body, deep learning, satellite images, convolutional neural network
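A minimal sketch of the kind of superpixel-level CNN classifier described above, assuming each superpixel is represented by a small RGB patch with a binary water/non-water label; the patch size, layer sizes and random training data below are illustrative assumptions, not the configuration used in the study:

```python
# Minimal binary water / non-water patch classifier (illustrative sketch).
import numpy as np
from tensorflow.keras import layers, models

def build_patch_classifier(patch_size=32, bands=3):
    model = models.Sequential([
        layers.Input(shape=(patch_size, patch_size, bands)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(superpixel contains water)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Random arrays standing in for labelled superpixel patches.
X = np.random.rand(200, 32, 32, 3).astype("float32")
y = np.random.randint(0, 2, size=(200,))
model = build_patch_classifier()
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
# Share of patches classified as water; multiplied by the ground area covered by
# one superpixel, this gives a rough water-extent estimate.
water_fraction = float(model.predict(X, verbose=0).mean())
print(water_fraction)
```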
Procedia PDF Downloads 90
1508 Delving into Market-Driving Behavior: A Conceptual Roadmap to Delineating Its Key Antecedents and Outcomes
Authors: Konstantinos Kottikas, Vlasis Stathakopoulos, Ioannis G. Theodorakis, Efthymia Kottika
Abstract:
Theorists have argued that Market Orientation is comprised of two facets, namely the Market Driven and the Market Driving components. The present theoretical paper centers on the latter, which to date has been notably under-investigated. The term Market Driving (MD) pertains to influencing the structure of the market, or the behavior of market players, in a direction that enhances the competitive edge of the firm. Presently, the main objectives of the paper are the specification of key antecedents and outcomes of Market Driving behavior. Market Driving firms behave proactively, by leading their customers and changing the rules of the game rather than by responding passively to them. Leading scholars were the first to conceive the notion conceptually, followed by some qualitative studies and a limited number of quantitative publications. However, recently, academicians noted that research on the topic remains limited, expressing a strong necessity for further insights. Concerning the key antecedents, top management’s Transformational Leadership (i.e. the form of leadership which influences organizational members by aligning their values, goals and aspirations to facilitate value-consistent behaviors) is one of the key drivers of MD behavior. Moreover, scholars have linked the MD concept with Entrepreneurship. Finally, the role that Employee Creativity plays in the development of MD behavior has been theoretically exemplified by a stream of literature. With respect to the key outcomes, it has been demonstrated that MD Behavior positively triggers firm Performance, while theorists argue that it empowers the Competitive Advantage of the firm. Likewise, researchers explicate that MD Behavior produces Radical Innovation. In order to test the robustness of the proposed theoretical framework, a combination of qualitative and quantitative methods is proposed. In particular, in-depth interviews with distinguished executives and academicians, accompanied by a large-scale quantitative survey, will be employed in order to triangulate the empirical findings. Given that it triggers overall firm success, the MD concept is of high importance to managers. Managers can become aware that passively reacting to market conditions is no longer sufficient. On the contrary, behaving proactively, leading the market, and shaping its status quo are new innovative approaches that lead to a paramount competitive posture and Innovation outcomes. This study also exemplifies that managers can foster MD Behavior through Transformational Leadership, Entrepreneurship and the recruitment of Creative Employees. To date, the majority of the publications on Market Orientation are unilaterally directed towards the responsive (i.e. the Market Driven) component. The present paper further builds on scholars’ exhortations and investigates the Market Driving facet, ultimately aspiring to conceptually integrate the somewhat fragmented scientific findings into a holistic framework.
Keywords: entrepreneurial orientation, market driving behavior, market orientation
Procedia PDF Downloads 384
1507 Identity Management in Virtual Worlds Based on Biometrics Watermarking
Authors: S. Bader, N. Essoukri Ben Amara
Abstract:
With the technological development and rise of virtual worlds, these spaces are becoming more and more attractive for cybercriminals, hidden behind avatars and fictitious identities. Since access to these spaces is not restricted or controlled, some impostors take advantage of gaining unauthorized access and practicing cybercrime. This paper proposes an identity management approach for securing access to virtual worlds. The major purpose of the suggested solution is to install a strong security mechanism to protect virtual identities represented by avatars. Thus, only legitimate users, through their corresponding avatars, are allowed to access the platform resources. Access is controlled by integrating an authentication process based on biometrics. In the registration process, a user fingerprint is enrolled and then encrypted into a watermark utilizing a cancelable and non-invertible algorithm for its protection. After a user personalizes their representative character, the biometric mark is embedded into the avatar through a watermarking procedure. The authenticity of the avatar identity is verified when it requests authorization for access. We have evaluated the proposed approach on a dataset of avatars from various virtual worlds, and we have registered promising performance results in terms of authentication accuracy, acceptance and rejection rates.
Keywords: identity management, security, biometrics authentication and authorization, avatar, virtual world
Procedia PDF Downloads 265
1506 Investigating Seasonal Changes of Urban Land Cover with High Spatio-Temporal Resolution Satellite Data via Image Fusion
Authors: Hantian Wu, Bo Huang, Yuan Zeng
Abstract:
Divisions between wealthy and poor, private and public landscapes are propagated by the increasing economic inequality of cities. While these are the spatial reflections of larger social issues and problems, urban design can at least employ spatial techniques that promote more inclusive rather than exclusive, overlapping rather than segregated, interlinked rather than disconnected landscapes. Indeed, the type of edge or border between urban landscapes plays a critical role in the way the environment is perceived. China experiences rapid urbanization, which poses unpredictable environmental challenges. Urban green cover and water bodies are undergoing changes that are highly relevant to residents' wealth and happiness. However, very limited knowledge and data on their rapid changes are available. In this regard, enhancing the monitoring of the urban landscape with high-frequency methods, evaluating and estimating the impacts of urban landscape changes, and understanding the driving forces of urban landscape changes can be a significant contribution to urban planning and study. High-resolution remote sensing data has been widely applied to urban management in China. A map of urban land use for the entire China for 2018, with 10-meter resolution, has been published. However, that research focuses on large-scale, high-resolution remote sensing of land use and does not precisely address the seasonal change of urban covers. High-resolution remote sensing data have a long revisit cycle (e.g., Landsat 8 requires 16 days to return to the same location), which is unable to satisfy the requirement of monitoring urban-landscape changes. On the other hand, aerial remote sensing or unmanned aerial vehicle (UAV) sensing is limited by aviation regulations and cost, and has hardly been widely applied in mega-cities. Moreover, these data are limited by climate and weather conditions (e.g., cloud, fog), and such problems make capturing spatial and temporal dynamics a constant challenge for the remote sensing community. In particular, during the rainy season, no data are available even for Sentinel satellite data with a 5-day interval. Many natural events and/or human activities drive the changes of urban covers, which makes high-frequency monitoring, impact assessment and understanding of the mechanism of urban landscape changes all the more important for urban planning and study. This project aims to use high spatiotemporal fusion of remote sensing data to create short-cycle, high-resolution remote sensing data sets for exploring high-frequency urban cover changes. This research will enhance the long-term monitoring applicability of high spatiotemporal fusion of remote sensing data for the urban landscape, supporting the management of landscape borders and promoting the inclusiveness of the urban landscape for all communities.
Keywords: urban land cover changes, remote sensing, high spatiotemporal fusion, urban management
Procedia PDF Downloads 126
1505 An Image Processing Based Approach for Assessing Wheelchair Cushions
Authors: B. Farahani, R. Fadil, A. Aboonabi, B. Hoffmann, J. Loscheider, K. Tavakolian, S. Arzanpour
Abstract:
Wheelchair users spend long hours in a sitting position, and selecting the right cushion is highly critical in preventing pressure ulcers in that demographic. Pressure mapping systems (PMS) are typically used in clinical settings by therapists to identify the sitting profile and pressure points in the sitting area to select the cushion that fits the best for the users. A PMS is a flexible mat composed of arrays of distributed networks of flexible sensors. The output of the PMS systems is a color-coded image that shows the intensity of the pressure concentration. Therapists use the PMS images to compare different cushions fit for each user. This process is highly subjective and requires good visual memory for the best outcome. This paper aims to develop an image processing technique to analyze the images of PMS and provide an objective measure to assess the cushions based on their pressure distribution mappings. In this paper, we first reviewed the skeletal anatomy of the human sitting area and its relation to the PMS image. This knowledge is then used to identify the important features that must be considered in image processing. We then developed an algorithm based on those features to analyze the images and rank them according to their fit to the users' needs.Keywords: dynamic cushion, image processing, pressure mapping system, wheelchair
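As a rough illustration of how objective measures can be pulled out of a PMS recording, the sketch below computes a few simple features (peak pressure, contact area, a load-dispersion index) from a 2D pressure array; the specific features, threshold and units are assumptions for illustration, not the algorithm developed in the paper:

```python
import numpy as np

def cushion_metrics(pressure, contact_threshold=5.0):
    """Simple objective features from a 2D pressure-map array (arbitrary units)."""
    contact = pressure > contact_threshold            # cells that are actually loaded
    peak = float(pressure.max())                      # peak pressure
    contact_area = int(contact.sum())                 # number of loaded cells
    mean_contact = float(pressure[contact].mean()) if contact.any() else 0.0
    # Dispersion index: share of the total load carried by the highest 10% of loaded cells
    loaded = np.sort(pressure[contact].ravel())[::-1]
    top10 = loaded[: max(1, len(loaded) // 10)].sum()
    dispersion_index = float(top10 / max(loaded.sum(), 1e-9))
    return {"peak": peak, "contact_area": contact_area,
            "mean_contact": mean_contact, "dispersion_index": dispersion_index}

# Compare two cushions for the same user: lower peak and lower dispersion rank better.
rng = np.random.default_rng(0)
cushion_a = rng.gamma(2.0, 10.0, size=(32, 32))
cushion_b = rng.gamma(2.0, 8.0, size=(32, 32))
ranked = sorted([("A", cushion_metrics(cushion_a)), ("B", cushion_metrics(cushion_b))],
                key=lambda kv: (kv[1]["peak"], kv[1]["dispersion_index"]))
print(ranked[0][0], "ranks best under these criteria")
```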
Procedia PDF Downloads 170
1504 Study of Effects of 3D Semi-Spherical Basin-Shape-Ratio on the Frequency Content and Spectral Amplitudes of the Basin-Generated Surface Waves
Authors: Kamal, J. P. Narayan
Abstract:
In the present work, the effects of basin-shape-ratio on the frequency content and spectral amplitudes of the basin-generated surface waves and the associated spatial variation of ground motion amplification and differential ground motion in a 3D semi-spherical basin have been studied. A recently developed 3D fourth-order spatially accurate time-domain finite-difference (FD) algorithm based on the parsimonious staggered-grid approximation of the 3D viscoelastic wave equations was used to estimate seismic responses. The simulated results demonstrated an increase of both the frequency content and the spectral amplitudes of the basin-generated surface waves, and of the duration of ground motion in the basin, with the increase of the shape-ratio of the semi-spherical basin. An increase of the average spectral amplification (ASA), differential ground motion (DGM) and the average aggravation factor (AAF) towards the centre of the semi-spherical basin was obtained.
Keywords: 3D viscoelastic simulation, basin-generated surface waves, basin-shape-ratio effects, average spectral amplification, aggravation factors and differential ground motion
Procedia PDF Downloads 508
1503 An Investigation into Computer Vision Methods to Identify Material Other Than Grapes in Harvested Wine Grape Loads
Authors: Riaan Kleyn
Abstract:
Mass wine production companies across the globe are provided with grapes from winegrowers that predominantly utilize mechanical harvesting machines to harvest wine grapes. Mechanical harvesting accelerates the rate at which grapes are harvested, allowing grapes to be delivered faster to meet the demands of wine cellars. The disadvantage of the mechanical harvesting method is the inclusion of material-other-than-grapes (MOG) in the harvested wine grape loads arriving at the cellar, which degrades the quality of wine that can be produced. Currently, wine cellars do not have a method to determine the amount of MOG present within wine grape loads. This paper seeks to find an optimal computer vision method capable of detecting the amount of MOG within a wine grape load. A MOG detection method will encourage winegrowers to deliver MOG-free wine grape loads to avoid penalties, which will indirectly enhance the quality of the wine to be produced. Traditional image segmentation methods were compared to deep learning segmentation methods based on images of wine grape loads that were captured at a wine cellar. The Mask R-CNN model with a ResNet-50 convolutional neural network backbone emerged as the optimal method for this study to determine the amount of MOG in an image of a wine grape load. Furthermore, a statistical analysis was conducted to determine how the MOG on the surface of a grape load relates to the mass of MOG within the corresponding grape load.
Keywords: computer vision, wine grapes, machine learning, machine harvested grapes
Procedia PDF Downloads 96
1502 Risk Mitigation of Data Causality Analysis Requirements AI Act
Authors: Raphaël Weuts, Mykyta Petik, Anton Vedder
Abstract:
Artificial Intelligence has the potential to create, and already creates, enormous value in healthcare. Prescriptive systems might be able to make the use of healthcare capacity more efficient. Such systems might entail interpretations that exclude the effect of confounders, which brings risks with it. Those risks might be mitigated by regulation that prevents systems entailing such risks from coming to market. One modality of regulation is that of legislation, and the European AI Act is an example of such a regulatory instrument that might mitigate these risks. To assess the risk mitigation potential of the AI Act for those risks, this research focusses on a case study of a hypothetical application of medical device software that entails the aforementioned risks. The AI Act refers to the harmonised norms of already existing legislation, here being the European medical device regulation. The issue at hand is a causal link between a confounder and the value the algorithm optimises for by proxy. The research identifies where the AI Act already looks at confounders (i.a. feedback loops in systems that continue to learn after being placed on the market). The research identifies where the current proposal by parliament leaves legal uncertainty on the necessity to check for confounders that do not influence the input of the system, when the system does not continue to learn after being placed on the market. The authors propose an amendment to article 15 of the AI Act that would require high-risk systems to be developed in such a way as to mitigate risks from those aforementioned confounders.
Keywords: AI Act, healthcare, confounders, risks
Procedia PDF Downloads 260
1501 Healthcare Big Data Analytics Using Hadoop
Authors: Chellammal Surianarayanan
Abstract:
The healthcare industry is generating large amounts of data driven by various needs such as record keeping, physicians' prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratory, pharmacy, etc. Healthcare data is so big and complex that it cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data is accumulated, and the different varieties, such as the structured, semi-structured and unstructured nature of data. Despite the complexity of big data, if the trends and patterns that exist within the big data are uncovered and analyzed, higher quality healthcare at lower cost can be provided. Hadoop is an open source software framework for distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop include the Hadoop Distributed File System, which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools such as Hive (a SQL-like query language), Pig (a higher level query language for MapReduce), HBase (a columnar data store), etc. In this paper an analysis has been done of how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare
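To make the MapReduce model described above concrete, a minimal Hadoop Streaming job in Python that counts occurrences of diagnosis codes in patient records could look like the following; the comma-separated record layout and field position are illustrative assumptions, not a real EPR format:

```python
# mapper.py -- emits (diagnosis_code, 1) for each record; assumes the code is
# the third comma-separated field of each EPR line (an illustrative layout).
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split(",")
    if len(fields) > 2:
        print(f"{fields[2]}\t1")
```

```python
# reducer.py -- sums the counts per diagnosis code (Hadoop delivers input sorted by key).
import sys

current, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = key, 0
    total += int(value)
if current is not None:
    print(f"{current}\t{total}")
```

Such a job would be launched with the hadoop-streaming jar against records stored on HDFS; the same aggregation can be expressed as a single GROUP BY query once the records are exposed as a Hive table.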
Procedia PDF Downloads 413
1500 Exploring the Intricate Microbiology of Street Cuisine: Delving into Potential Dangers in Order to Enhance Safety and Quality
Authors: Raana Babadi Fathipour
Abstract:
Street foods hold a significant place in the tapestry of socioeconomic and cultural norms, beloved across the globe. Serving as a convenient and affordable option for city dwellers seeking nourishment, these culinary delights also serve as a vital source of income for vendors, particularly women. Additionally, street food acts as a mirror reflecting traditional local customs and practices, an element that draws tourists to experience the authenticity of a culture firsthand. Despite its many virtues, concerns have emerged regarding the microbiological safety of street food worldwide. Often prepared and sold in subpar conditions without proper oversight or regulation, street food has become synonymous with potential health risks. The presence of elevated levels of fecal indicator bacteria and various pathogens in these unregulated delicacies further perpetuates anxieties surrounding their consumption. This analysis delves into the intricate microbiology inherent to street food, shedding light on the pertinent safety concerns and prevalent pathogens. Additionally, it elaborates on the worldwide standing of this vital economic endeavor. Moreover, it advocates for the adoption of molecular detection techniques over conventional culture-based methods to gain a more comprehensive grasp of the true microbial risks posed by street cuisine. Acknowledgment marks the initial step towards resolving any given issue.
Keywords: foodborne pathogens, microbiological safety, street food, viruses
Procedia PDF Downloads 52
1499 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme
Authors: Salah Alrabeei, Mohammad Yousuf
Abstract:
The goal of option pricing theory is to help investors manage their money, enhance returns and control their financial future by theoretically valuing their options. Modeling option pricing by Black-Scholes models with jumps guarantees that market movement is taken into account. However, only numerical methods can solve this model. Furthermore, not all numerical methods are efficient for solving these models, because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied for solving partial integro-differential equations arising in pricing European options under Merton's and Kou's jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial-fraction form of Padé schemes is used to overcome the complexity of inverting polynomials of matrices. These two tools guarantee efficient and accurate numerical solutions. We construct a parallel and easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is.
Keywords: integro-differential equations, L-stable methods, pricing European options, jump-diffusion model
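The O(M²)-to-O(M log M) reduction mentioned above comes from the fact that the discretized integral (jump) term acts as a Toeplitz matrix, whose product with a vector can be computed by embedding it in a circulant matrix and using the FFT. A minimal sketch of that building block, for a generic Toeplitz operator rather than the specific Merton/Kou kernel:

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply a Toeplitz matrix (given by its first column and first row) by x
    in O(M log M) time by embedding it in a circulant matrix and using the FFT."""
    m = len(x)
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])   # circulant first column
    xp = np.concatenate([x, np.zeros(m)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))
    return y[:m].real

# Sanity check against the dense O(M^2) product.
m = 256
col = np.random.rand(m)
row = np.concatenate([[col[0]], np.random.rand(m - 1)])
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(m)] for i in range(m)])
x = np.random.rand(m)
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
```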
Procedia PDF Downloads 151
1498 A Framework for Early Differential Diagnosis of Tropical Confusable Diseases Using the Fuzzy Cognitive Map Engine
Authors: Faith-Michael E. Uzoka, Boluwaji A. Akinnuwesi, Taiwo Amoo, Flora Aladi, Stephen Fashoto, Moses Olaniyan, Joseph Osuji
Abstract:
The overarching aim of this study is to develop a soft-computing system for the differential diagnosis of tropical diseases. These conditions are of concern to health bodies, physicians, and the community at large because of their mortality rates and because of difficulties in early diagnosis, due to the fact that they present with symptoms that overlap and thus become ‘confusable’. We report on the first phase of our study, which focuses on the development of a fuzzy cognitive map model for early differential diagnosis of tropical diseases. We used malaria as a case disease to show the effectiveness of the FCM technology as an aid to the medical practitioner in the diagnosis of tropical diseases. Our model takes cognizance of manifested symptoms and other non-clinical factors that could contribute to symptom manifestation. Our model showed 85% accuracy in diagnosis, as against the physicians’ initial hypothesis, which stood at 55% accuracy. It is expected that the next stage of our study will provide a multi-disease, multi-symptom model that also improves efficiency by utilizing a decision support filter that works on an algorithm which mimics the physician’s diagnosis process.
Keywords: medical diagnosis, tropical diseases, fuzzy cognitive map, decision support filters, malaria differential diagnosis
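A fuzzy cognitive map of the kind described here can be simulated in a few lines: concepts (symptoms, non-clinical factors, candidate diagnoses) are nodes, causal influences form a signed weight matrix, and concept activations are iterated through a squashing function until they stabilize. The weights and concepts below are hypothetical placeholders, not the model built in the study:

```python
import numpy as np

def run_fcm(weights, state, iterations=20, lam=1.0):
    """Iterate a fuzzy cognitive map: state <- sigmoid(state + W^T state)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-lam * x))
    for _ in range(iterations):
        new_state = sigmoid(state + weights.T @ state)
        if np.allclose(new_state, state, atol=1e-5):
            break
        state = new_state
    return state

# Hypothetical 4-concept map: fever, chills, recent travel -> 'malaria' concept.
#             fever  chills  travel  malaria
W = np.array([[0.0,   0.3,    0.0,    0.7],
              [0.2,   0.0,    0.0,    0.5],
              [0.0,   0.0,    0.0,    0.4],
              [0.0,   0.0,    0.0,    0.0]])
initial = np.array([1.0, 0.8, 0.5, 0.0])    # observed activations of the input concepts
final = run_fcm(W, initial)
print("activation of the 'malaria' concept:", round(float(final[-1]), 3))
```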
Procedia PDF Downloads 320
1497 Hybrid Seismic Energy Dissipation Devices Made of Viscoelastic Pad and Steel Plate
Authors: Jinkoo Kim, Minsung Kim
Abstract:
This study develops a hybrid seismic energy dissipation device composed of a viscoelastic damper and a steel slit damper connected in parallel. A cyclic loading test is conducted on a test specimen to validate the seismic performance of the hybrid damper. Then a moment-frame model structure is designed without seismic load and retrofitted with the hybrid dampers. The model structure is transformed into an equivalent simplified system to find the optimum story-wise damper distribution pattern using a genetic algorithm. The effectiveness of the hybrid damper is investigated by fragility analysis and life cycle cost evaluation of the structure with and without the dampers. The analysis results show that the model structure has a reduced probability of reaching the damage states, especially the complete damage state, after seismic retrofit. The expected damage cost and, consequently, the life cycle cost of the retrofitted structure turn out to be significantly smaller compared with those of the original structure. Acknowledgement: This research was supported by the Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R & D program (N043100016).
Keywords: seismic retrofit, slit dampers, friction dampers, hybrid dampers
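A genetic algorithm of the kind used for the story-wise damper distribution can be sketched as follows. The fitness function here is a stand-in toy drift proxy (the actual study evaluates the equivalent simplified system), and the story count and damper budget are assumed purely for illustration:

```python
import numpy as np
rng = np.random.default_rng(0)

N_STORIES, TOTAL_DAMPERS, POP, GENS = 8, 16, 40, 60

def random_layout():
    """Random distribution of TOTAL_DAMPERS hybrid dampers over the stories."""
    counts = np.zeros(N_STORIES, dtype=int)
    for _ in range(TOTAL_DAMPERS):
        counts[rng.integers(N_STORIES)] += 1
    return counts

def fitness(layout):
    # Toy proxy: more dampers in a story reduce its drift; minimize the peak drift.
    demand = np.linspace(1.0, 0.3, N_STORIES)     # assumed drift-demand shape
    drift = demand / (1.0 + 0.4 * layout)
    return -drift.max()                           # higher fitness = lower peak drift

def crossover(a, b):
    child = (a + b) // 2
    # Repair so the total number of dampers stays fixed.
    while child.sum() < TOTAL_DAMPERS:
        child[rng.integers(N_STORIES)] += 1
    while child.sum() > TOTAL_DAMPERS:
        i = rng.integers(N_STORIES)
        if child[i] > 0:
            child[i] -= 1
    return child

def mutate(layout, p=0.2):
    # Occasionally move one damper from a random story to another.
    if rng.random() < p:
        src = rng.integers(N_STORIES)
        if layout[src] > 0:
            layout[src] -= 1
            layout[rng.integers(N_STORIES)] += 1
    return layout

population = [random_layout() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best story-wise damper layout:", best)
```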
Procedia PDF Downloads 283
1496 Gas Chromatography Coupled to Tandem Mass Spectrometry and Liquid Chromatography Coupled to Tandem Mass Spectrometry Qualitative Determination of Pesticides Found in Tea Infusions
Authors: Mihai-Alexandru Florea, Veronica Drumea, Roxana Nita, Cerasela Gird, Laura Olariu
Abstract:
The aim of this study was to investigate the residues of pesticides found in tea water infusions. A multi-residue method to determine 147 pesticides has been developed using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged, Safe) procedure and dispersive solid phase extraction (d-SPE) for the cleanup of the pesticides from complex matrices such as plants and tea. Sample preparation was carefully optimized for the efficient removal of coextracted matrix components by testing several solvent systems. Determination of the pesticides was performed using GC-MS/MS (100 of the pesticides) and LC-MS/MS (47 of the pesticides). The selected reaction monitoring (SRM) mode was chosen to achieve low detection limits and high compound selectivity and sensitivity. Overall performance was evaluated and validated according to DG-SANTE guidelines. To assess the (qualitative) pesticide residue transfer rate from dried tea into infusions, the tea samples were spiked with a mixture of pesticides at the maximum residue levels accepted for teas and herbal infusions. In order to investigate the release of the pesticides into tea preparations, the medicinal plants were prepared in four ways, by varying the water temperature and the infusion time. The pesticides from the infusions were extracted using two methods: QuEChERS versus solid-phase extraction (SPE). More than 90% of the pesticides studied were identified in the infusions.
Keywords: tea, solid-phase extraction (SPE), selected reaction monitoring (SRM), QuEChERS
Procedia PDF Downloads 213
1495 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter
Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas
Abstract:
This paper deals with advanced state estimation algorithms for the estimation of biomass concentration and specific growth rate in a typical fed-batch biotechnological process. The biotechnological process was represented by a nonlinear mass-balance based process model. An Extended Kalman Filter (EKF) and a Particle Filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in the EKF and PF estimation algorithms. This model doesn't require any special growth kinetic equations and could be applied for state estimation in various bioprocesses. The focus of this investigation was the comparison of the estimation quality of the EKF and PF estimators under different measurement noises. The simulation results show that the Particle Filter algorithm requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. Also, the tuning procedure for the Particle Filter is simpler than for the EKF. Consequently, the Particle Filter should be preferred in real applications, especially for monitoring of industrial bioprocesses, where simplified implementation procedures are always desirable.
Keywords: biomass concentration, extended Kalman filter, particle filter, state estimation, specific growth rate
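For readers unfamiliar with the particle filter, a minimal bootstrap version is sketched below; the exponential-growth process model and noise levels are toy assumptions standing in for the mass-balance model and the OUR/BC measurements used in the paper:

```python
import numpy as np
rng = np.random.default_rng(1)

def particle_filter(measurements, n_particles=500, process_std=0.05, meas_std=0.2):
    """Bootstrap particle filter for a toy model x_{k+1} = x_k * exp(mu) + noise,
    y_k = x_k + noise, standing in for biomass dynamics observed indirectly."""
    particles = rng.uniform(0.1, 2.0, n_particles)      # initial biomass guesses
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for y in measurements:
        # Propagate particles through the (assumed) process model
        particles = particles * np.exp(0.1) + rng.normal(0.0, process_std, n_particles)
        # Re-weight by the Gaussian measurement likelihood
        weights *= np.exp(-0.5 * ((y - particles) / meas_std) ** 2)
        weights += 1e-300
        weights /= weights.sum()
        estimates.append(float(np.sum(weights * particles)))
        # Multinomial resampling when the effective sample size collapses
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

true_x = 0.5 * np.exp(0.1 * np.arange(30))
measurements = true_x + rng.normal(0.0, 0.2, 30)
print("final estimate vs truth:", particle_filter(measurements)[-1], true_x[-1])
```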
Procedia PDF Downloads 430
1494 Detection and Classification of Mammogram Images Using Principal Component Analysis and Lazy Classifiers
Authors: Rajkumar Kolangarakandy
Abstract:
Feature extraction and selection is the primary part of any mammogram classification algorithm. The choice of features, attributes or measurements has an important influence on any classification system. Discrete Wavelet Transformation (DWT) coefficients are one of the prominent features for representing images in the frequency domain. The features obtained after the decomposition of the mammogram images using wavelet transformations have a high dimension. Even though the features are high in dimension, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from the high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transformation coefficients and lazy classifiers is proposed. The classification is accomplished on two different levels. In the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. In the second level, all the abnormal mammogram ROIs are classified into benign and malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers KStar, IBL and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the results are also compared with the performance obtained without dimensionality reduction.
Keywords: PCA, wavelet transformation, lazy classifiers, KStar, IBL, LWL
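A compact sketch of the pipeline described above (2D wavelet decomposition → PCA reduction → lazy, instance-based classification) is given below; a k-nearest-neighbour learner stands in for the Weka KStar/IBL/LWL classifiers, and random arrays stand in for the mammogram ROIs:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def dwt_features(roi, wavelet="db4", level=2):
    """Flatten the approximation coefficients of a 2D DWT as the feature vector."""
    coeffs = pywt.wavedec2(roi, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

# Random arrays standing in for 64x64 mammogram ROIs (normal vs abnormal labels).
rng = np.random.default_rng(2)
rois = rng.random((100, 64, 64))
labels = rng.integers(0, 2, 100)

X = np.array([dwt_features(r) for r in rois])
X_reduced = PCA(n_components=20).fit_transform(X)   # decorrelate and reduce dimension
knn = KNeighborsClassifier(n_neighbors=5)           # lazy (instance-based) learner
print("cross-validated accuracy:", cross_val_score(knn, X_reduced, labels, cv=5).mean())
```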
Procedia PDF Downloads 335
1493 Multi Attribute Failure Mode Analysis of the Catering Systems: A Case Study of Sefako Makgatho Health Sciences University in South Africa
Authors: Mokoena Oratilwe Penwell, Seeletse Solly Matshonisa
Abstract:
The demand for quality products is a vital factor determining the success of a producing company, and the reality of this demand influences customer satisfaction. At Sefako Makgatho Health Sciences University (SMU), concerns over the quality of the food being sold have been raised mostly by students and staff, who are the primary consumers of the food sold by the cafeteria. Suspicions of food poisoning and of diarrhea related to food from the cafeteria, amongst others, have been raised. However, minimal measures have been taken to resolve the issue of food quality. New service providers have been appointed, and still the same trends are being observed: the quality of food seems to deteriorate continuously. This paper uses multi-attribute failure mode analysis (MAFMA) for failure detection and minimization on the machines used for food production by the SMU catering company, before the food is sold to staff and students, so as to improve production plant reliability and performance. The Analytic Hierarchy Process (AHP) is used for the severity ranking of the weighting criteria and the development of the hierarchical structure for the cafeteria company. Amongst other potential issues detected, maintenance of the machines and equipment used for food preparation was of concern. Also, the staff lacked sufficient hospitality skills, and supervision and management in the cafeteria needed greater attention to mitigate some of the failures occurring in the food production plant.
Keywords: MAFMA, food quality, maintenance, supervision
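The AHP severity weighting used in MAFMA boils down to extracting the principal eigenvector of a pairwise-comparison matrix and checking its consistency ratio. A minimal sketch with a hypothetical three-criterion comparison (the criteria and judgments are placeholders, not those elicited in the case study):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector; also return the consistency ratio."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # tabulated random index
    return w, ci / ri

# Hypothetical judgments: maintenance vs hygiene skills vs supervision.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights, "CR =", round(cr, 3))   # CR < 0.1 indicates acceptably consistent judgments
```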
Procedia PDF Downloads 135
1492 Application of Deep Neural Networks to Assess Corporate Credit Rating
Authors: Parisa Golbayani, Dan Wang, Ionuţ Florescu
Abstract:
In this work we apply machine learning techniques to financial statement reports in order to assess a company's credit rating. Specifically, the work analyzes the performance of four neural network architectures (MLP, CNN, CNN2D, LSTM) in predicting corporate credit ratings as issued by Standard and Poor's. The paper focuses on companies from the energy, financial, and healthcare sectors in the US. The goal of this analysis is to improve the application of machine learning algorithms to credit assessment. To accomplish this, the study investigates three questions. First, we investigate whether the algorithms perform better when using a selected subset of important features or whether better performance is obtained by allowing the algorithms to select features themselves. Second, we address the temporal aspect inherent in financial data and study whether it is important for the results obtained by a machine learning algorithm. Third, we aim to answer whether one of the four particular neural network architectures considered consistently outperforms the others, and if so, under which conditions. This work frames the problem as several case studies to answer these questions and analyzes the results using ANOVA and multiple comparison testing procedures.
Keywords: convolutional neural network, long short term memory, multilayer perceptron, credit rating
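As a minimal illustration of the simplest of the four architectures (the MLP) applied to tabular financial-statement features, the sketch below trains a small classifier on placeholder data; the feature count, rating buckets and network size are assumptions, not the study's setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Placeholder features (e.g., leverage, coverage, margins) and coarse rating buckets.
X = rng.normal(size=(600, 12))
y = rng.choice(["AAA-AA", "A", "BBB", "BB-B", "CCC-D"], size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=500, random_state=0))
mlp.fit(X_tr, y_tr)
print("held-out accuracy:", mlp.score(X_te, y_te))
```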
Procedia PDF Downloads 235
1491 Study on Optimization Design of Pressure Hull for Underwater Vehicle
Authors: Qasim Idrees, Gao Liangtian, Liu Bo, Miao Yiran
Abstract:
In order to improve the efficiency and accuracy of pressure hull structure optimization for underwater vehicles, a method for optimizing the design of the pressure hull structure based on response surface methodology was studied. Five dimensions of the pressure shell were taken as design variables, and thin shell theory and the Chinese Classification Society (CCS) specification were applied in the preliminary design. In order to optimize the variables within the feasible region, different methods were studied and implemented, such as the Opt LHD method (to determine the design test sample points in the feasible domain space), a parametric ABAQUS solution for the response at each sample point, and a second-order polynomial response surface model of the structural limit load. Based on the ultimate load of the structure and the mass of the shell, a second-generation genetic algorithm was used to search the response surface, and the Pareto optimal solution set was obtained. The final optimized ultimate load was 41.68% higher than that of the initial design, and the shell mass was reduced by about 27.26%. The parametric method can ensure the accuracy of the test and improve the efficiency of optimization.
Keywords: parameterization, response surface, structure optimization, pressure hull
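The second-order polynomial response surface at the core of this approach is just a least-squares fit over the sampled design points; the sketch below uses two design variables and a synthetic response standing in for the five shell dimensions and the finite-element limit load:

```python
import numpy as np

def quadratic_design_matrix(X):
    """Full second-order model in two variables: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Design points (e.g., from an optimal Latin hypercube) and their simulated responses.
rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
y = 5 + 2 * X[:, 0] - 3 * X[:, 1] + X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 30)  # stand-in for FE limit loads

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
predict = lambda X_new: quadratic_design_matrix(X_new) @ beta    # cheap surrogate model
print(predict(np.array([[0.2, -0.4]])))
```

A multi-objective genetic algorithm would then evaluate this cheap surrogate (rather than the full finite-element model) at each candidate design when building the Pareto set.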
Procedia PDF Downloads 233
1490 Inversion of Electrical Resistivity Data: A Review
Authors: Shrey Sharma, Gunjan Kumar Verma
Abstract:
High-density electrical prospecting has been widely used in groundwater investigation, civil engineering and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing and inversion of electrical resistivity data, based on a compilation of academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion on the limitations of the electrical resistivity method has also been presented.
Keywords: inversion, limitations, optimization, resistivity
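The linearized least-squares update mentioned above can be written in a few lines once a forward model and its Jacobian (sensitivity matrix) are available; the sketch below uses a toy linear forward operator in place of a real 2-D finite-difference apparent-resistivity solver:

```python
import numpy as np

def damped_gauss_newton_step(d_obs, m, forward, jacobian, lam=0.1):
    """One regularized (Marquardt-damped) update: solve
       (J^T J + lam*I) dm = J^T (d_obs - F(m))  and return m + dm."""
    J = jacobian(m)
    r = d_obs - forward(m)
    dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
    return m + dm

# Toy example: a linear forward operator standing in for apparent-resistivity modeling.
rng = np.random.default_rng(5)
G = rng.normal(size=(40, 10))
m_true = rng.normal(size=10)
d_obs = G @ m_true + rng.normal(0, 0.01, 40)

m = np.zeros(10)
for _ in range(10):
    m = damped_gauss_newton_step(d_obs, m, lambda m: G @ m, lambda m: G)
print("rms data misfit:", np.sqrt(np.mean((G @ m - d_obs) ** 2)))
```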
Procedia PDF Downloads 365
1489 The Trigger-DAQ System in the Mu2e Experiment
Authors: Antonio Gioiosa, Simone Doanti, Eric Flumerfelt, Luca Morescalchi, Elena Pedreschi, Gianantonio Pezzullo, Ryan A. Rivera, Franco Spinella
Abstract:
The Mu2e experiment at Fermilab aims to measure the charged-lepton flavour violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. With the expected experimental sensitivity, Mu2e will improve the previous limit by four orders of magnitude. The Mu2e data acquisition (DAQ) system provides hardware and software to collect digitized data from the tracker, calorimeter, cosmic ray veto, and beam monitoring systems. Mu2e's trigger and data acquisition system (TDAQ) uses otsdaq as its solution. Developed at Fermilab, otsdaq uses the artdaq DAQ framework and the art analysis framework under the hood for event transfer, filtering, and processing. Otsdaq is an online DAQ software suite with a focus on flexibility and scalability, providing a multi-user, web-based interface accessible through the Chrome or Firefox web browser. The detector readout controllers (ROCs) of the tracker and calorimeter stream zero-suppressed data continuously to the data transfer controller (DTC). Data is then read over the PCIe bus to a software filter algorithm that selects events, which are finally combined with the data flux that comes from the cosmic ray veto system (CRV).
Keywords: trigger, DAQ, Mu2e, Fermilab
Procedia PDF Downloads 155
1488 Microbial Contaminants in Drinking Water Collected from Different Regions of Kuwait
Authors: Abu Salim Mustafa
Abstract:
Water plays a major role in maintaining life on earth, but it can also serve as a matrix for pathogenic organisms, posing substantial health threats to humans. Although outbreaks of diseases attributable to drinking water may not be common in industrialized countries, they still occur and can lead to serious acute, chronic, or sometimes fatal health consequences. In this study, drinking water samples from different regions of Kuwait were analyzed for bacterial and viral contamination. Drinking tap water samples were collected from 15 different locations across the six Kuwait governorates. All samples were analyzed by confocal microscopy for the presence of bacteria. The samples were cultured in vitro to detect cultivable organisms. DNA was isolated from the cultured organisms, and the identity of the bacteria was determined by sequencing the bacterial 16S rRNA genes, followed by BLAST analysis against the NCBI (USA) database. RNA was extracted from the water samples and analyzed by real-time PCR for the detection of viruses with potential health risks, i.e. Astrovirus, Enterovirus, Norovirus, Rotavirus, and Hepatitis A. Confocal microscopy showed the presence of bacteria in some water samples. The 16S rRNA gene sequencing of culture-grown organisms, followed by BLAST analysis, identified the presence of several non-pathogenic bacterial species. However, one sample contained Acinetobacter baumannii, which often causes opportunistic infections in immunocompromised people. None of the studied viruses could be detected in the drinking water samples analyzed. The results indicate that the drinking water samples analyzed from various locations in Kuwait are relatively safe for drinking and do not contain many harmful pathogens.
Keywords: drinking water, microbial contaminant, 16S rDNA, Kuwait
Procedia PDF Downloads 155
1487 Role of Surfactant Protein D (SP-D) as a Biomarker of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Infection
Authors: Lucia Salvioni, Pietro Giorgio Lovaglio, Valerio Leoni, Miriam Colombo, Luisa Fiandra
Abstract:
The involvement of plasmatic surfactant protein D (SP-D) in pulmonary diseases has long been investigated, and over the last two years more interest has been directed toward determining its role as a marker of COVID-19. In this direction, several studies aiming to correlate pulmonary surfactant proteins with the clinical manifestations of the virus indicated SP-D as a prognostic biomarker of COVID-19 pneumonia severity. The present work performed a retrospective study on a relatively large cohort of patients of Hospital Pio XI of Desio (Lombardia, Italy), with the aim of assessing differences in hematic SP-D concentrations between COVID-19 patients and healthy donors, and the role of SP-D as a prognostic marker of severity and/or of mortality risk. The results showed a significant difference in the mean log SP-D levels between COVID-19 patients and healthy donors, as well as between deceased and surviving patients. SP-D values were significantly higher both for hospitalized COVID-19 patients and for deceased patients, with threshold values of 150 and 250 ng/mL, respectively. SP-D levels at admission and the increase between admission and follow-up values were the strongest significant risk factors for mortality. Therefore, this study demonstrated the role of SP-D as a predictive marker of SARS-CoV-2 infection and its outcome. A significant correlation of SP-D with patient mortality indicated that it is also a prognostic factor in terms of mortality, and its early detection should be considered in designing adequate preventive treatments for COVID-19 patients.
Keywords: SARS-CoV-2 infection, COVID-19, surfactant protein-D (SP-D), mortality, biomarker
Procedia PDF Downloads 76
1486 Grain Size Characteristics and Sediments Distribution in the Eastern Part of Lekki Lagoon
Authors: Mayowa Philips Ibitola, Abe Oluwaseun Banji, Olorunfemi Akinade-Solomon
Abstract:
A total of 20 bottom sediment samples were collected from the Lekki Lagoon during the wet and dry seasons. The study was carried out to determine the textural characteristics, sediment distribution pattern and energy of transportation within the lagoon system. The sediment grain sizes and depth profiles were analyzed using the dry sieving method and a MATLAB algorithm for processing. The granulometric analysis reveals fine-grained sand for both the wet and dry seasons, with average mean values of 2.03 ϕ and -2.88 ϕ, respectively. Sediments were moderately sorted, with average inclusive standard deviations of 0.77 ϕ and -0.82 ϕ. Skewness varied from strongly coarse-skewed to near-symmetrical (0.34 ϕ and 0.09 ϕ). The average kurtosis values were 0.87 ϕ and -1.4 ϕ (platykurtic and leptokurtic). Overall, the bathymetry shows an average depth of 4.0 m. The deepest and shallowest areas have depths of 11.2 m and 0.5 m, respectively. A higher concentration of fine sand was observed in deep areas compared to shallow areas during both the wet and dry seasons. The statistical parameters show that the sediments overall are sorted and were deposited under low-energy conditions over a long distance. However, the sediment distribution and sediment transport pattern of the Lekki Lagoon are controlled by a low-energy current, and the down-slope configuration of the bathymetry enhances the sorting and the deposition rate in the lagoon.
Keywords: Lekki Lagoon, marine sediment, bathymetry, grain size distribution
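The graphic statistics quoted above (mean, inclusive standard deviation, skewness and kurtosis in ϕ units) follow from percentiles of the cumulative sieve curve via the Folk and Ward formulas; a minimal sketch, assuming the cumulative weight percentages have already been computed for each sieve size:

```python
import numpy as np

def folk_ward(phi_sizes, cum_weight_pct):
    """Graphic mean, inclusive standard deviation (sorting), skewness and kurtosis (phi units)."""
    p = lambda q: np.interp(q, cum_weight_pct, phi_sizes)   # phi value at a given cumulative %
    p5, p16, p25, p50, p75, p84, p95 = (p(q) for q in (5, 16, 25, 50, 75, 84, 95))
    mean = (p16 + p50 + p84) / 3
    sorting = (p84 - p16) / 4 + (p95 - p5) / 6.6
    skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
    kurt = (p95 - p5) / (2.44 * (p75 - p25))
    return mean, sorting, skew, kurt

# Example sieve data: phi sizes and cumulative weight % (must be increasing for np.interp).
phi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
cum = np.array([5.0, 20.0, 55.0, 85.0, 100.0])
print(folk_ward(phi, cum))
```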
Procedia PDF Downloads 231
1485 Role of Calcination Treatment on the Structural Properties and Photocatalytic Activity of Nanorice N-Doped TiO₂ Catalyst
Authors: Totsaporn Suwannaruang, Kitirote Wantala
Abstract:
The purposes of this research were to synthesize a nitrogen-doped titanium dioxide photocatalyst (N-doped TiO₂) by the hydrothermal method and to test the photocatalytic degradation of paraquat under UV and visible light illumination. The effect of the calcination treatment temperature on the physical and chemical properties and the photocatalytic efficiency was also investigated. The calcined N-doped TiO₂ photocatalysts were characterized in terms of specific surface area, textural properties, bandgap energy, surface morphology, crystallinity, phase structure, elemental composition and charge states by the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) equations, UV-visible diffuse reflectance spectroscopy (UV-Vis-DRS) using the Kubelka-Munk theory, wide-angle X-ray scattering (WAXS), focused ion beam scanning electron microscopy (FIB-SEM), X-ray photoelectron spectroscopy (XPS) and X-ray absorption spectroscopy (XAS), respectively. The results showed that the effect of calcination temperature was significant on surface morphology, crystallinity, specific surface area, pore size diameter, bandgap energy and nitrogen content level, but insignificant on the phase structure and the oxidation state of the titanium (Ti) atom. The N-doped TiO₂ samples exhibited only the anatase crystalline phase, because the nitrogen dopant in TiO₂ restrained the phase transformation from anatase to rutile. The samples presented a nanorice-like morphology. Particle expansion was found at calcination temperatures of 650 and 700°C, resulting in an increased pore size diameter. The bandgap energy determined by the Kubelka-Munk theory was in the range 3.07-3.18 eV, slightly lower than the anatase standard (3.20 eV), indicating that the nitrogen dopant could shift the optical absorption edge of TiO₂ from the UV to the visible light region. Nitrogen content was observed only at 100, 300 and 400°C; the nitrogen disappeared from 500°C onwards. The nitrogen (N) atoms can be incorporated into the TiO₂ structure at interstitial sites. The uncalcined (100°C) sample displayed the highest percentage of paraquat degradation under UV and visible light irradiation, because this sample exhibited both the highest specific surface area and the highest nitrogen content level. Moreover, the percentage of paraquat removal significantly decreased with increasing calcination treatment temperature. The nitrogen content level in TiO₂ accelerated the rate of reaction, in combination with the effect of the specific surface area, which generates electrons and holes when illuminated with light. Therefore, the specific surface area and the nitrogen content level play important roles in the photocatalytic activity of paraquat degradation under UV and visible light illumination.
Keywords: restraining phase transformation, interstitial site, chemical charge state, photocatalysis, paraquat degradation
Procedia PDF Downloads 157
1484 Helicopter Exhaust Gases Cooler in Terms of Computational Fluid Dynamics (CFD) Analysis
Authors: Mateusz Paszko, Ksenia Siadkowska
Abstract:
Due to their low-altitude and relatively low-speed flight, helicopters are easy targets for modern combat assets, e.g., infrared-guided missiles. Current techniques aim to increase the combat effectiveness of military helicopters. Protection of the helicopter in flight from early detection, tracking and, finally, destruction can be realized in many ways. One of them is cooling the hot exhaust gases emitted from the engines to the atmosphere in special heat exchangers. Nowadays, this process is realized in ejective coolers, where strong heat and momentum exchange between the hot exhaust gases and cold air ejected from the atmosphere takes place. The flow of air, of the exhaust gases and of their mixture, as well as the heat transfer between the cold air and the hot exhaust gases, are described by differential equations of mass transport (flow continuity), ejection of cold air by the expanding exhaust gases, conservation of momentum and energy, and the physical relationship equations. Calculation of those processes in an ejective cooler by means of classical mathematical analysis is extremely hard or even impossible. Because of this, it is necessary to apply a numerical approach with modern numerical computer programs. The paper discusses the general usability of Computational Fluid Dynamics (CFD) in the process of designing an ejective exhaust gas cooler cooperating with a helicopter turbine engine. In this work, the CFD calculations have been performed for an ejective-based cooler cooperating with the PA W3 helicopter's engines.
Keywords: aviation, CFD analysis, ejective-cooler, helicopter techniques
Procedia PDF Downloads 332
1483 Effect of Atmospheric Turbulence on Hybrid FSO/RF Link Availability under Qatar's Harsh Climate
Authors: Abir Touati, Syed Jawad Hussain, Farid Touati, Ammar Bouallegue
Abstract:
Although there has been growing interest in hybrid free-space optical/radio frequency (FSO/RF) communication systems, the current literature is limited to results obtained in moderate or cold environments. In this paper, using a soft switching approach, we investigate the effect of weather inhomogeneities on the strength of turbulence, and hence on the channel refractive index, under Qatar's harsh environment, and their influence on hybrid FSO/RF availability. In this approach, either the FSO link, the RF link, both simultaneously, or neither of them can be active. Based on the soft switching approach and a finite state Markov chain (FSMC) process, we model the channel fading for the two links and derive a mathematical expression for the outage probability of the hybrid system. Then, we evaluate the behavior of the hybrid FSO/RF system under hazy and harsh weather. Results show that FSO/RF soft switching renders the system outage probability lower than that of each link individually. A soft switching algorithm is being implemented on FPGAs using a Raptor code interfaced to the two terminals of a 1 Gbps/100 Mbps FSO/RF hybrid system, the first being implemented in the region. Experimental results are compared to the above simulation results.
Keywords: atmospheric turbulence, haze, hybrid FSO/RF, outage probability, refractive index
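To illustrate why soft switching lowers the outage probability, the sketch below runs a simple Monte Carlo comparison in which the FSO branch fades according to a lognormal turbulence model and the RF branch according to Rayleigh fading; both channel models, the SNR budgets and the thresholds are generic assumptions, not the measured Qatar channel or the paper's FSMC model:

```python
import numpy as np
rng = np.random.default_rng(6)

def outage_probabilities(n=200_000, snr_fso_db=20, snr_rf_db=15,
                         thr_db=10, sigma_ln=0.6):
    """Compare outage of FSO-only, RF-only and the hybrid (outage only if both fail)."""
    # FSO: lognormal irradiance fluctuations (weak-to-moderate turbulence model)
    fso_snr = 10 ** (snr_fso_db / 10) * rng.lognormal(-sigma_ln**2 / 2, sigma_ln, n)
    # RF: Rayleigh fading -> exponentially distributed instantaneous SNR
    rf_snr = 10 ** (snr_rf_db / 10) * rng.exponential(1.0, n)
    thr = 10 ** (thr_db / 10)
    p_fso = np.mean(fso_snr < thr)
    p_rf = np.mean(rf_snr < thr)
    p_hybrid = np.mean((fso_snr < thr) & (rf_snr < thr))   # soft switching keeps the better link
    return p_fso, p_rf, p_hybrid

print(outage_probabilities())
```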
Procedia PDF Downloads 419
1482 Modeling Anisotropic Damage Algorithms of Metallic Structures
Authors: Bahar Ayhan
Abstract:
The present paper is concerned with the numerical modeling of the inelastic behavior of anisotropically damaged ductile materials, based on a generalized macroscopic theory within the framework of continuum damage mechanics. The kinematic decomposition of the strain rates into elastic, plastic and damage parts is the basis for the structure of the continuum theory. The evolution of the damage strain rate tensor is detailed with consideration of anisotropic effects. Helmholtz free energy functions are constructed separately for the elastic and inelastic behaviors in order to be able to address the plastic and damage processes. Additionally, the constitutive structure, which is based on the standard dissipative material approach, is elaborated with the stress tensor, a yield criterion for plasticity and a fracture criterion for damage, besides the potential functions of each inelastic phenomenon. The finite element method is used to approximate the linearized variational problem. Stresses and strains are computed using a numerical integration algorithm based on an operator-split methodology, with separate plastic and damage multiplier variables. Numerical simulations are presented in order to demonstrate the efficiency of the formulation, by comparison with examples from the literature.
Keywords: anisotropic damage, finite element method, plasticity, coupling
Procedia PDF Downloads 206