Search results for: elliptic curve digital signature algorithm
4630 Pattern Synthesis of Nonuniform Linear Arrays Including Mutual Coupling Effects Based on Gaussian Process Regression and Genetic Algorithm
Authors: Ming Su, Ziqiang Mu
Abstract:
This paper proposes a synthesis method for nonuniform linear antenna arrays that combines Gaussian process regression (GPR) and a genetic algorithm (GA). In this method, the GPR model is used to calculate the array radiation pattern in the presence of mutual coupling effects, and the GA is then used to optimize the excitations and locations of the elements so as to generate the desired radiation pattern. Taking a 9-element nonuniform linear array as an example, with the desired radiation pattern corresponding to a Chebyshev distribution as the optimization objective, the excitations and locations of the elements are optimized. Finally, the optimization results are verified with the electromagnetic simulation software CST, which shows that the method is effective.
Keywords: nonuniform linear antenna arrays, GPR, GA, mutual coupling effects, active element pattern
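To illustrate the surrogate-assisted loop described above, the sketch below pairs a Gaussian process surrogate with a plain genetic algorithm. It is a minimal illustration, not the authors' implementation: the pattern-error function, population sizes, and GA operators are all assumed, and the GPR merely stands in for the mutual-coupling-aware pattern model trained on EM-simulated data.

```python
# Minimal sketch of GA optimization over a GPR surrogate (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
N = 9  # number of array elements

def pattern_error(x):
    # Placeholder for the full-wave pattern error (e.g., deviation from a
    # Chebyshev pattern including mutual coupling); assumed, not from the paper.
    return np.sum((x - 0.5) ** 2)

# Train the surrogate on sampled designs (stand-in for EM-simulated data).
X_train = rng.random((200, 2 * N))            # normalized excitations + locations
y_train = np.array([pattern_error(x) for x in X_train])
gpr = GaussianProcessRegressor().fit(X_train, y_train)

# Plain GA: truncation selection, uniform crossover, Gaussian mutation.
pop = rng.random((50, 2 * N))
for gen in range(100):
    fit = gpr.predict(pop)                    # cheap surrogate evaluation
    parents = pop[np.argsort(fit)[:25]]
    children = parents.copy()
    mask = rng.random(children.shape) < 0.5   # uniform crossover
    children[mask] = parents[::-1][mask]
    children += 0.02 * rng.standard_normal(children.shape)  # mutation
    pop = np.vstack([parents, np.clip(children, 0, 1)])

best = pop[np.argmin(gpr.predict(pop))]
print("surrogate-optimal design:", best.round(3))
```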
Procedia PDF Downloads 110
4629 Prediction of Fire Growth of the Office by Real-Scale Fire Experiment
Authors: Kweon Oh-Sang, Kim Heung-Youl
Abstract:
Estimating the engineering properties of fires is important in order to be prepared for the complex and varied fire risks of large-scale structures such as super-tall buildings, large stadiums, and multi-purpose structures. In this study, a mock-up of a compartment 2.4 (L) x 3.6 (W) x 2.4 (H) m in dimensions was fabricated at the 10 MW LSC (Large Scale Calorimeter), and combustible office supplies were placed in the compartment for a real-scale fire test. The maximum heat release rate was 4.1 MW, and the total energy release obtained through the application of the t2 fire growth rate was 6705.9 MJ.
Keywords: fire growth, fire experiment, t2 curve, large scale calorimeter
Procedia PDF Downloads 338
4628 Applying Hybrid Graph Drawing and Clustering Methods on Stock Investment Analysis
Authors: Mouataz Zreika, Maria Estela Varua
Abstract:
Stock investment decisions are often made based on current events of the global economy and the analysis of historical data. In addition, visual representation could help investors gain a deeper understanding of and better insight into stock market trends more efficiently. The trend analysis is based on long-term data collection. The study adopts a hybrid method that combines a clustering algorithm and a force-directed algorithm to overcome the scalability problem when visualizing large data. This method exemplifies the potential relationships between each stock, as well as determining the degree of strength and connectivity, which gives investors another view of the stock relationships for reference. Information derived from visualization will also help them make an informed decision. The results of the experiments show that the proposed method is able to produce aesthetically visualized data by providing clearer views of connectivity and edge weights.
Keywords: clustering, force-directed, graph drawing, stock investment analysis
Procedia PDF Downloads 302
4627 Detection of Micro-Unmanned Aerial Vehicles Using a Multiple-Input Multiple-Output Digital Array Radar
Authors: Tareq AlNuaim, Mubashir Alam, Abdulrazaq Aldowesh
Abstract:
The usage of micro-Unmanned Aerial Vehicles (UAVs) has witnessed an enormous increase recently. Detection of such drones has become a necessity to prevent any harmful activities. Typically, such targets have low velocity and low Radar Cross Section (RCS), making them indistinguishable from clutter and phase noise. Multiple-Input Multiple-Output (MIMO) radars have much potential: they increase the degrees of freedom on both the transmit and receive ends. Such architecture allows for flexibility in operation, through utilizing direct access to every element in the transmit/receive array. MIMO systems allow for several array processing techniques, permitting the system to stare at targets for longer times, which improves the Doppler resolution. In this paper, a 2×2 MIMO radar prototype is developed using Software Defined Radio (SDR) technology, and its performance is evaluated against a slow-moving, low-radar-cross-section micro-UAV used by hobbyists. Radar cross section simulations were carried out using the FEKO simulator, achieving an average of -14.42 dBsm at S-band. The developed prototype was experimentally evaluated, achieving more than 300 meters of detection range for a DJI Mavic Pro drone.
Keywords: digital beamforming, drone detection, micro-UAV, MIMO, phased array
Procedia PDF Downloads 139
4626 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers
Authors: B. Neethu, Diptesh Das
Abstract:
The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems, however, need to be tackled in order to design and develop controllers which will perform efficiently in such complex systems. A control algorithm which can accommodate uncertainty and imprecision, owing to its inherent robustness and ability to cope with parameter uncertainties, is the sliding mode algorithm. A sliding mode control algorithm is adopted in the present study due to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force. The clipped optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control which can effectively control the responses of the bridge under real earthquake ground motions. A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied by analytical simulations, subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, the performance of the controllers has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations can be accommodated. Out of these fourteen earthquakes, seven are near-field and seven are far-field. Also, these earthquakes are divided into different frequency contents, viz., low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with the responses of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding-mode-based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing a stable and robust performance for all the earthquakes.
Keywords: bridge, semi active control, sliding mode control, MR damper
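The clipped-optimal voltage law mentioned in the abstract is compact enough to state directly. Below is a minimal sketch of the classic clipped rule, assuming a maximum damper voltage V_MAX and treating the sliding-mode force computation as given; the numbers are illustrative, not from the study.

```python
# Clipped-optimal command voltage for an MR damper (illustrative sketch).
V_MAX = 10.0  # assumed maximum damper voltage, not from the study

def clipped_optimal_voltage(f_desired, f_measured):
    """Return V_MAX when the measured damper force must grow toward the
    desired (sliding-mode) force acting in the same direction, else 0."""
    if f_measured * (f_desired - f_measured) > 0:
        return V_MAX
    return 0.0

# Example: desired force 120 N, damper currently produces 80 N (same sign).
print(clipped_optimal_voltage(120.0, 80.0))  # -> 10.0 (increase damping)
print(clipped_optimal_voltage(50.0, 80.0))   # -> 0.0  (force already too large)
```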
Procedia PDF Downloads 124
4625 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks
Authors: Paul Shize Li, Frank Alber
Abstract:
A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made efforts to discover and elucidate the molecular mechanisms of individual genes over the past decades, about 40% of human genes still have unknown functions, not to mention the diseases they may be related to. For those biologists who are interested in a particular gene with unknown functions, a powerful computational method tailored for inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that the densely connected subgraphs in multiple biological networks are useful for the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify the frequent local clusters, which are defined as those densely connected subgraphs that frequently occur in multiple biological networks and contain the query gene that has few or no disease or function annotations. This is a local clustering algorithm that models multiple biological networks sharing the same gene set as a three-dimensional matrix, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological, and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, for a given uncharacterized gene that is of interest to a biologist, the proposed method can be applied to identify the frequent local clusters that contain this uncharacterized gene. Finally, those frequent local clusters are used for function and disease annotation of this uncharacterized gene. This local tensor clustering algorithm outperformed the competing tensor-based algorithm in both module discovery and running time. We also demonstrated the use of the proposed method on real data comprising hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. Therefore, this study provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation
Procedia PDF Downloads 168
4624 Elephant Herding Optimization for Service Selection in QoS-Aware Web Service Composition
Authors: Samia Sadouki Chibani, Abdelkamel Tari
Abstract:
Web service composition combines available services to provide new functionality. Given the number of available services with similar functionalities and different non-functional aspects (QoS), the problem of finding a QoS-optimal web service composition is considered an optimization problem belonging to the NP-hard class. Thus, an optimal solution cannot be found by exact algorithms within a reasonable time. In this paper, a bio-inspired meta-heuristic is presented to address QoS-aware web service composition; it is based on the Elephant Herding Optimization (EHO) algorithm, which is inspired by the herding behavior of elephant groups. EHO is characterized by a process of dividing and combining the population into sub-populations (clans); this process allows the exchange of information between local searches to move toward a global optimum. With other evolutionary algorithms, however, the problem of early stagnation in a local optimum cannot be avoided. Compared with PSO, the results of the experimental evaluation show that our proposition significantly outperforms the existing algorithm, with a better fitness value and fast convergence.
Keywords: bio-inspired algorithms, elephant herding optimization, QoS optimization, web service composition
Procedia PDF Downloads 327
4623 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions, and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation, and value is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras, and thresholding types were performed. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted in the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: agricultural engineering, image processing, computer vision, flower detection
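The reported hue-band segmentation (hue values of 0.12-0.18) can be sketched with OpenCV. Since OpenCV stores hue on a 0-179 scale, the band maps to roughly 22-32; the saturation/value floors, morphological kernel, and blob-size limits below are assumptions for illustration, not the paper's tuned values.

```python
# Sketch of HSV-band segmentation for yellow flowers (assumed thresholds).
import cv2
import numpy as np

img = cv2.imread("greenhouse_row.jpg")          # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hue 0.12-0.18 on a 0-1 scale ~ 22-32 on OpenCV's 0-179 hue scale.
lower = np.array([22, 80, 80])                  # S/V floors are assumptions
upper = np.array([32, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Morphological cleanup, then keep blobs in a plausible flower-size range.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
flowers = [i for i in range(1, n) if 50 < stats[i, cv2.CC_STAT_AREA] < 5000]
print(f"detected {len(flowers)} candidate flowers")
```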
Procedia PDF Downloads 329
4622 Operations Training Using Immersive Technologies: A Development Experience
Authors: A. Aman, S. M. Tang, F. H. Alharrassy
Abstract:
Omanisation was established to increase job opportunities for nationals in the Sultanate of Oman. With half of the population below 25 years of age, the Sultanate is striving to diversify the economy fast enough to meet the burgeoning number of jobseekers annually. On the other hand, training personnel to be competent oil and gas operators and technicians is a difficult task, given the complex reservoir structures in Oman and the highly advanced and sophisticated extraction processes used. Coupled with Omanisation, which encourages nationals into the oil and gas sector so as to create sustainable employment for the local population, the challenge of producing competent manpower became a daunting task. Immersive technologies provided the impetus to create a new digital media sector, which provided job opportunities as well as the learning content to enhance competency-based training for the oil and gas sector in the Sultanate. This led to a win-win-win collaboration among the government, represented by the Information Technology Authority (ITA); a specialised private-sector company, represented by ASM Technologies; jobseekers; and oil and gas organisations. This is also one of the first private-public partnership models in the Information and Communication Technology (ICT) sector in Oman. A pilot phase was conducted for 8 months to develop four virtual applications for training in equipment and process engineering: oil rig familiarisation, a Health Safety Environment (HSE) application, a turbine application, and the mechanical vapour compressor (MVC) water recycling plant, in order to enhance the competency level of the trainees. The immersive applications were installed in operational settings, which enabled new employees to practice and understand various processes and procedures regarding enhanced oil recovery. Existing employees used the applications to review the working principles in order to carry out troubleshooting scenarios. Concurrently, these applications were also developed by local Omani resources within the country. This created job opportunities for jobseekers as well as establishing a digital media sector. The purpose of this paper is to discuss how immersive technologies can enhance operational competencies, create jobs, and establish a digital media sector in the Sultanate of Oman.
Keywords: immersive, virtual reality, operations training, Omanisation
Procedia PDF Downloads 231
4621 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models
Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah
Abstract:
In the present work, we develop a technique for estimating the volatility of financial time series by using stochastic differential equations. Taking the daily closing prices from developed and emergent stock markets as the basis, we argue that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. While using the technique, we observe the long-memory behavior of the data sets and the one-step-ahead predicted log-volatility with ±2 standard errors, despite the observed noise varying from a Normal mixture distribution, because the financial data studied are not fully Gaussian. Also, the Ornstein-Uhlenbeck process followed in this work simulates the financial time series well, and the estimation algorithm scales to large data sets because it has good convergence properties.
Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model
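For reference, the Ornstein-Uhlenbeck process dX = θ(μ − X)dt + σdW has an exact discrete-time AR(1) transition, so its maximum likelihood estimators have a closed form. The sketch below shows those standard estimators on simulated data; it is not the authors' full stochastic-volatility pipeline.

```python
# Closed-form MLE for an Ornstein-Uhlenbeck process observed at step dt.
import numpy as np

def ou_mle(x, dt):
    """Estimate (theta, mu, sigma) for dX = theta*(mu - X)dt + sigma*dW,
    using the exact AR(1) transition X_{t+dt} = mu + (X_t - mu)*a + eps,
    where a = exp(-theta*dt)."""
    x0, x1 = x[:-1], x[1:]
    n = len(x0)
    # OLS of x1 on x0 gives the AR(1) coefficient a.
    a = (n * (x0 * x1).sum() - x0.sum() * x1.sum()) / \
        (n * (x0 * x0).sum() - x0.sum() ** 2)
    mu = (x1.sum() - a * x0.sum()) / (n * (1 - a))
    theta = -np.log(a) / dt
    resid = x1 - mu - a * (x0 - mu)
    var_eps = (resid ** 2).mean()
    # Var(eps) = sigma^2 * (1 - a^2) / (2*theta)
    sigma = np.sqrt(var_eps * 2 * theta / (1 - a ** 2))
    return theta, mu, sigma

# Quick self-check on Euler-simulated data.
rng = np.random.default_rng(1)
dt, th, mu, sg = 1 / 252, 3.0, 0.0, 0.2
x = np.zeros(10000)
for t in range(1, len(x)):
    x[t] = x[t-1] + th * (mu - x[t-1]) * dt + sg * np.sqrt(dt) * rng.standard_normal()
print(ou_mle(x, dt))  # should be near (3.0, 0.0, 0.2)
```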
Procedia PDF Downloads 242
4620 Comfort Sensor Using Fuzzy Logic and Arduino
Authors: Samuel John, S. Sharanya
Abstract:
Automation has become an important part of our life. It has been used to control home entertainment systems, changing the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort. Atmospheric comfort mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C, and relative humidity is around 50%. However, it varies widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller using Arduino was developed using MATLAB. Arduino is an open-source hardware platform consisting of a 28-pin ATmega chip (ATmega328), 14 digital input/output pins, and an inbuilt ADC. It runs on 5 V and 3.3 V power supported by an onboard voltage regulator. Some of the digital pins in Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment, which includes support for the C, C++, and Java programming languages. In the present work, a soft sensor was introduced into this system to indirectly measure temperature and humidity and to process these measurements to ensure comfort. The Sugeno method (in which output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used in the soft sensor in MATLAB and then interfaced to the Arduino, which is in turn interfaced to the temperature and humidity sensor DHT11. The DHT11 acts as the sensing element in this system: it uses a capacitive humidity sensor and a thermistor to measure the relative humidity and temperature of the surroundings and provides a digital signal on its data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated, and accordingly the temperature in the room was controlled. This system was placed in different rooms of the house to ensure that it modifies the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and the coolers in the room were controlled. The main highlight of the project is its cost efficiency.
Keywords: arduino, DHT11, soft sensor, sugeno
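A zero-order Sugeno inference of the kind described (constant or singleton rule outputs, suited to microcontrollers) can be sketched in a few lines. The membership functions and comfort constants below are illustrative assumptions, not the rule base used in the project.

```python
# Minimal zero-order Sugeno fuzzy inference for a comfort percentage.
def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def comfort_percent(temp_c, rh):
    # Rule firing strengths (min T-norm); shapes/centres are assumptions.
    comfortable = min(tri(temp_c, 18, 22.5, 27), tri(rh, 30, 50, 70))
    too_hot     = tri(temp_c, 25, 32, 45)
    too_cold    = tri(temp_c, 5, 14, 21)
    # Zero-order Sugeno: each rule outputs a constant comfort level,
    # combined by a weighted average of the firing strengths.
    w = [comfortable, too_hot, too_cold]
    z = [100.0, 20.0, 20.0]
    total = sum(w)
    return sum(wi * zi for wi, zi in zip(w, z)) / total if total else 0.0

print(comfort_percent(23.0, 50.0))  # e.g. a DHT11 reading of 23 degC, 50% RH
```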
Procedia PDF Downloads 312
4619 Optimal Reactive Power Dispatch under Various Contingency Conditions Using Whale Optimization Algorithm
Authors: Khaled Ben Oualid Medani, Samir Sayah
Abstract:
The Optimal Reactive Power Dispatch (ORPD) problem is usually solved and analysed under normal conditions. However, network collapses appear under contingency conditions. In this paper, ORPD under several contingencies is presented using the proposed method, the Whale Optimization Algorithm (WOA). To ensure viability of the power system under contingency conditions, several critical cases are simulated in order to prevent and prepare the power system to face such situations. The results are obtained on the IEEE 30-bus test system for the solution of the ORPD problem, in which control of bus voltages, tap positions of transformers, and reactive power sources is involved. Moreover, another method, namely Particle Swarm Optimization with Time Varying Acceleration Coefficient (PSO-TVAC), has been compared with the proposed technique. Simulation results indicate that the proposed WOA gives remarkable solutions in terms of effectiveness in the case of outages.
Keywords: optimal reactive power dispatch, power system analysis, real power loss minimization, contingency condition, metaheuristic technique, whale optimization algorithm
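The core WOA position updates (shrinking encirclement, spiral move, and random search) are compact; the sketch below applies them to a toy objective, since the ORPD objective and IEEE 30-bus constraints from the paper are not reproduced here.

```python
# Bare-bones Whale Optimization Algorithm on a toy objective (illustrative).
import numpy as np

def woa(loss, dim=5, whales=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (whales, dim))
    best = X[np.argmin([loss(x) for x in X])].copy()
    for t in range(iters):
        a = 2 - 2 * t / iters                      # linearly decreasing a
        for i in range(whales):
            r, l = rng.random(dim), rng.uniform(-1, 1)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):          # encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                              # explore toward a random whale
                    rand = X[rng.integers(whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                  # logarithmic spiral update
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            if loss(X[i]) < loss(best):
                best = X[i].copy()
    return best

print(woa(lambda x: np.sum(x ** 2)))               # should approach all zeros
```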
Procedia PDF Downloads 121
4618 Architectural Adaptation for Road Humps Detection in Adverse Light Scenario
Authors: Padmini S. Navalgund, Manasi Naik, Ujwala Patil
Abstract:
A road hump is a semi-cylindrical elevation made across the road at specific locations. A vehicle needs to manoeuvre the hump by reducing its speed to avoid damage and pass over the hump safely. Road humps, if identified in advance, help to maintain the security and stability of vehicles, especially in adverse visibility conditions, viz. night scenarios. We have proposed a deep learning architecture adaptation, implementing the Mish activation function and developing a new classification loss function called "Effective Focal Loss", for Indian road hump detection in adverse light scenarios. We captured images comprising marked and unmarked road humps from two different types of cameras across South India to build a heterogeneous dataset. A heterogeneous dataset enabled the algorithm to train and improve the accuracy of detection. The images were pre-processed and annotated for two classes, viz., marked hump and unmarked hump. The dataset from these images was used to train a single-stage object detection algorithm. We utilised an algorithm to synthetically generate reduced-visibility road hump scenarios. We observed that our proposed framework effectively detected the marked and unmarked humps in the images in clear and adverse light environments. This architectural adaptation sets up an option for early detection of Indian road humps in reduced visibility conditions, thereby enhancing autonomous driving technology to handle a wider range of real-world scenarios.
Keywords: Indian road hump, reduced visibility condition, low light condition, adverse light condition, marked hump, unmarked hump, YOLOv9
Procedia PDF Downloads 27
4617 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
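One denoising auto-encoder layer of the type being stacked can be sketched in PyTorch. This is a generic illustration under assumed layer sizes and corruption noise; plain dropout stands in for the paper's residual-driven dropout, and the TV decomposition step is omitted.

```python
# One denoising auto-encoder layer (generic PyTorch sketch, not the paper's net).
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in=4096, n_hidden=1024, p_drop=0.2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU(),
                                     nn.Dropout(p_drop))
        self.decoder = nn.Linear(n_hidden, n_in)  # reconstruction, same size as input

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)  # stochastic gradient descent
loss_fn = nn.MSELoss()                              # squared reconstruction error

clean = torch.randn(32, 4096)                       # stand-in for intrinsic images
noisy = clean + 0.1 * torch.randn_like(clean)       # corrupted input
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)             # reconstruct the clean signal
    loss.backward()                                 # back-propagation
    opt.step()
```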
Procedia PDF Downloads 190
4616 Image Compression on Region of Interest Based on SPIHT Algorithm
Authors: Sudeepti Dayal, Neelesh Gupta
Abstract:
Image compression is utilized to reduce the size of a file without degrading the quality of the image to an objectionable level. The reduction in file size permits more images to be stored in a given amount of space. It also minimizes the time necessary for images to be transferred. Storage of medical images is a much-researched area in the current scenario. To store a medical image, the image is divided into two parts: regions of interest and non-regions of interest. The best way to store an image is to compress it in such a way that no important information is lost. Compression can be done in two ways, namely lossy and lossless compression. Under these, several compression algorithms are applied. In this paper, two algorithms are used: the discrete cosine transform, applied to the non-regions of interest (lossy), and the discrete wavelet transform, applied to the regions of interest (lossless). The paper introduces the SPIHT (set partitioning in hierarchical trees) algorithm, which is applied to the wavelet transform to obtain a good compression ratio, so that an image can be stored efficiently.
Keywords: compression ratio, DWT, SPIHT, DCT
Procedia PDF Downloads 349
4615 Spectrum Assignment Algorithms in Optical Networks with Protection
Authors: Qusay Alghazali, Tibor Cinkler, Abdulhalim Fayad
Abstract:
In modern optical networks, flex-grid spectrum usage is most widespread, where higher bit rate streams get larger spectrum slices while lower bit rate traffic streams get smaller spectrum slices. In practice, under ITU-T Recommendation G.694.1, spectrum slices of 50, 75, and 100 GHz are being used, with the central frequency at 193.1 THz. However, when these spectrum slices are not sufficient, multiple spectrum slices can be used, either adjacent to one another or anywhere else in the optical spectrum. In this paper, we propose an analysis of the spectrum assignment problem. We compare different algorithms for spectrum assignment with and without protection. As a reference for comparisons, we concluded that Integer Linear Programming (ILP) provides the global optimum for all cases. The most scalable algorithm is the greedy one, which yields results in reasonable time even for larger network instances. The algorithms' benchmark was implemented using the LEMON C++ optimization library, and simulation runs were evaluated based on the minimum number of spectrum slices assigned to lightpaths and their execution time.
Keywords: spectrum assignment, integer linear programming, greedy algorithm, international telecommunication union, library for efficient modeling and optimization in networks
Procedia PDF Downloads 169
4614 Dynamic Stored Procedures in Databases
Authors: Muhammet Dursun Kaya, Hasan Asil
Abstract:
In recent years, different methods have been proposed to optimize query processing in databases. Although different methods have been proposed to optimize the query, the problem is that most of these methods destroy the query execution plan after executing the query. This research attempts to solve this problem by combining methods of communicating with the database (embedding queries in the programming code and using stored procedures) with making query processing adaptive in the database, and it proposes a new approach for the optimization of query processing by introducing the idea of dynamic stored procedures. This research creates dynamic stored procedures in the database according to the proposed algorithm. This method has been tested on application software, and the results show a significant improvement in reducing the query processing time as well as the workload of the DBMS. Other advantages of this algorithm include making the programming environment a single environment, eliminating the parametric limitations of stored procedures in the database, making the stored procedures in the database dynamic, etc.
Keywords: relational database, agent, query processing, adaptable, communication with the database
Procedia PDF Downloads 373
4613 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of the damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured from ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimised with a proper algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz., frequencies and modal assurance criteria (MAC). MAC is derived from eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced into the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling. The error due to experimental measurement is introduced into the synthetic ‘experimental’ data by adding random noise, which follows a Gaussian distribution. The efficiency and robustness of this method are explained through three examples, viz., a truss, a beam, and a frame problem. The results show that the TLBO algorithm is efficient in detecting the damage location as well as the severity of damage using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
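The teacher and learner phases of TLBO are short enough to sketch. Below they minimize a generic error function standing in for the frequency/MAC objective; the population size, bounds, and iteration count are assumed values.

```python
# Minimal TLBO (teacher + learner phases) on a generic objective.
import numpy as np

def tlbo(f, dim=4, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(pop):
            # Teacher phase: move toward the teacher, away from the class mean.
            tf = rng.integers(1, 3)                      # teaching factor 1 or 2
            cand = X[i] + rng.random(dim) * (teacher - tf * mean)
            if f(cand) < f(X[i]):
                X[i] = np.clip(cand, lo, hi)
            # Learner phase: learn from a randomly chosen peer.
            j = rng.integers(pop)
            step = (X[i] - X[j]) if f(X[i]) < f(X[j]) else (X[j] - X[i])
            cand = X[i] + rng.random(dim) * step
            if f(cand) < f(X[i]):
                X[i] = np.clip(cand, lo, hi)
    return X[np.argmin([f(x) for x in X])]

print(tlbo(lambda x: np.sum((x - 1.0) ** 2)))            # should approach all ones
```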
Procedia PDF Downloads 215
4612 Development of an Efficient Algorithm for Cessna Citation X Speed Optimization in Cruise
Authors: Georges Ghazi, Marc-Henry Devillers, Ruxandra M. Botez
Abstract:
Aircraft flight trajectory optimization has been identified as a promising solution for reducing both airline costs and the aviation net carbon footprint. Nowadays, this role has been mainly attributed to the flight management system. This system is an onboard multi-purpose computer responsible for providing the crew members with the optimized flight plan from one destination to the next. To accomplish this function, the flight management system uses a variety of look-up tables to compute the optimal speed and altitude for each flight regime instantly. Because the cruise is the longest segment of a typical flight, the proposed algorithm is focused on minimizing fuel consumption for this flight phase. In this paper, a complete methodology to estimate the aircraft performance and subsequently compute the optimal speed in cruise is presented. Results showed that the obtained performance database was accurate enough to predict the flight costs associated with the cruise phase.
Keywords: Cessna Citation X, cruise speed optimization, flight cost, cost index, golden section search
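The golden section search named in the keywords is a classic bracketing method for one-dimensional minimization. The sketch below applies it to a stand-in cost-versus-Mach curve; the cost model is an assumption, since the Citation X performance database is not reproduced here.

```python
# Golden section search for the cruise speed minimizing a cost function.
import math

def golden_section(cost, lo, hi, tol=1e-4):
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618, the golden ratio conjugate
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if cost(c) < cost(d):                 # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Illustrative fuel+time cost vs Mach number (assumed convex shape).
cost = lambda mach: (mach - 0.78) ** 2 * 500 + 100 / mach
print(round(golden_section(cost, 0.60, 0.92), 4))
```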
Procedia PDF Downloads 292
4611 Model for Introducing Products to New Customers through Decision Tree Using Algorithm C4.5 (J-48)
Authors: Komol Phaisarn, Anuphan Suttimarn, Vitchanan Keawtong, Kittisak Thongyoun, Chaiyos Jamsawang
Abstract:
This article analyzes insurance information containing data on customers' decisions when purchasing a life insurance pay package. The data were analyzed in order to present new customers with the Life Insurance Perfect Pay package that meets their needs as much as possible. The basic data on insurance pay packages were collected for data mining, thus reducing the scattering of information. The data were then classified in order to obtain a decision model, or decision tree, using the C4.5 algorithm (J-48). In the classification, WEKA tools were used to form the model, and testing datasets were used to test the decision tree for accurate decisions. The validation of this model in classification showed an accurate prediction rate of 68.43%, while 31.25% were errors. The same set of data was then tested with other models, i.e., Naive Bayes and ZeroR. The results showed that the J-48 method could predict more accurately. So, the researchers applied the decision tree in writing the program used to introduce the product to new customers, to support customers' decision making in purchasing the insurance package that meets the new customers' needs as much as possible.
Keywords: decision tree, data mining, customers, life insurance pay package
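Outside WEKA, a comparable entropy-based tree can be sketched with scikit-learn. C4.5 proper uses gain-ratio splits and rule post-pruning, so this is only an approximation, and the customer features below are invented for illustration.

```python
# Entropy-based decision tree approximating the C4.5/J-48 setup (sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical customer features: age, income, dependents (not the real data).
X = rng.random((500, 3)) * [60, 100000, 5]
y = (X[:, 0] > 30) & (X[:, 1] > 40000)        # synthetic "buys package" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4)  # entropy splits
tree.fit(X_tr, y_tr)
print(f"test accuracy: {tree.score(X_te, y_te):.2%}")
```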
Procedia PDF Downloads 428
4610 Ecosystems: An Analysis of Generation Z News Consumption, Its Impact on Evolving Concepts and Applications in Journalism
Authors: Bethany Wood
Abstract:
The world pandemic led to a change in the way social media was used by audiences, with young people spending more hours on these platforms due to lockdown. Reports by Ofcom have demonstrated that the internet is the second most popular platform for accessing news after television in the UK, with social media and the internet ranked as the most popular platforms to access news for those aged between 16 and 24. These statistics are unsurprising considering that, at the time of writing, 98 percent of Generation Z (Gen Z) owned a smartphone, with the subsequent ease and accessibility of social media. Technology is constantly developing, and with this its importance is becoming more prevalent with each generation: the Baby Boomers (1946-1964) consider it something useful, whereas millennials (1981-1997) believe it a necessity for day-to-day living. Gen Z, otherwise known as the digital natives, have grown up with this technology at their fingertips, and social media is a norm. It helps form their identity and their affiliations and opens gateways for them to engage with news in a new way. It is a common misconception that Gen Z do not consume news; they are simply doing so in a different way to their predecessors. Using a sample of 800 18-20 year olds, while utilising Generational theory, Actor Network Theory, and the Social Shaping of Technology, this research provides a critical analysis of how Gen Z's news consumption and engagement habits are developing along with technology to sculpt the future format of news and its distribution. From that perspective, allied with the empirical approach, it is possible to provide research-oriented advice for the industry and even help to redefine traditional concepts of journalism.
Keywords: journalism, generation z, digital, social media
Procedia PDF Downloads 86
4609 Automatic Detection of Proliferative Cells in Immunohistochemical Images of Meningioma Using Fuzzy C-Means Clustering and HSV Color Space
Authors: Vahid Anari, Mina Bakhshi
Abstract:
Visual search and identification of immunohistochemically stained meningioma tissue is performed manually in pathology laboratories to detect and diagnose the cancer type of meningioma. This task is very tedious and time-consuming. Moreover, because of cells' complex nature, it still remains a challenging task to segment cells from the background and analyze them automatically. In this paper, we develop and test a computerized scheme that can automatically identify cells in microscopic images of meningioma and classify them into positive (proliferative) and negative (normal) cells. A dataset including 150 images is used to test the scheme. The scheme uses the Fuzzy C-Means algorithm as a color clustering method based on the perceptually uniform hue, saturation, value (HSV) color space. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively compared through application to a wide variety of real images.
Keywords: positive cell, color segmentation, HSV color space, immunohistochemistry, meningioma, thresholding, fuzzy c-means
Procedia PDF Downloads 210
4608 Loading Forces following Addition of 5% Cu in Nickel-Titanium Alloy Used for Orthodontics
Authors: Aphinan Phukaoluan, Surachai Dechkunakorn, Niwat Anuwongnukroh, Anak Khantachawana, Pongpan Kaewtathip, Julathep Kajornchaiyakul, Wassana Wichai
Abstract:
Aims: This study aims to determine the amount of force delivered by a NiTiCu orthodontic wire with a ternary composition ratio of 46.0 Ni: 49.0 Ti: 5.0 Cu and to compare the results with a commercial NiTiCu 35 °C orthodontic archwire. Materials and Methods: Nickel (purity 99.9%), titanium (purity 99.9%), and copper (purity 99.9%) were used in this study in the atomic weight ratio 46.0 Ni: 49.0 Ti: 5.0 Cu. The elements were melted to form an alloy using an electrolytic arc furnace in an argon gas atmosphere and homogenized at 800 °C for 1 hr. The alloys were subsequently sliced into thin plates (1.5 mm) by an EDM wire-cutting machine to obtain the specimens, which were cold-rolled with 30% reduction, followed by heat treatment in a furnace at 400 °C for 1 hour. Then, the three newly fabricated NiTiCu specimens were cut into nearly identical wire sizes of 0.016 inch x 0.022 inch. A commercial preformed Ormco NiTiCu 35 °C archwire of size 0.016 inch x 0.022 inch was used for comparative purposes. A three-point bending test was performed using a Universal Testing Machine to investigate the force of the load-deflection curve at oral temperature (36 ± 1 °C), with deflection points at 0.25, 0.5, 0.75, 1.0, 1.25, and 1.5 mm. Descriptive statistics were used to evaluate each variable, and the independent t-test was used to analyze the differences between the groups. Results: Both NiTiCu wires presented typical superelastic properties, as observed from the load-deflection curve. The average force was 341.70 g for loading and 264.18 g for unloading for the 46.0 Ni: 49.0 Ti: 5.0 Cu wire. Similarly, the values were 299.88 g for loading and 201.96 g for unloading for the Ormco NiTiCu 35 °C. There were significant differences (p < 0.05) in mean loading and unloading forces between the two NiTiCu wires. The deflection forces in loading and unloading for the Ormco NiTiCu at each point were less than those of the 46.0 Ni: 49.0 Ti: 5.0 Cu wire, except at the deflection point of 0.25 mm. Regarding the force difference between each deflection point of the loading and unloading forces, the Ormco NiTiCu 35 °C exerted less force than the 46.0 Ni: 49.0 Ti: 5.0 Cu wire, except at the deflection difference of 1.5-1.25 mm of the unloading force. However, the forces were still within the acceptable limits for orthodontic use. Conclusion: The fabricated ternary alloy of 46.0 Ni: 49.0 Ti: 5.0 Cu (atomic weight) with 30% reduction and heat treatment at 400 °C for 1 hr, and the Ormco 35 °C NiTiCu, presented the characteristics of shape memory in their wire form. The unloading forces of both NiTiCu wires were in the range for orthodontic use. This should be a good foundation for further studies towards the development of new orthodontic NiTiCu archwires.
Keywords: loading force, ternary alloy, NiTiCu, shape memory, orthodontic wire
Procedia PDF Downloads 285
4607 Impact of the Results of Sub-Group Analysis on the Performance of Recommender Systems
Authors: Ho Yeon Park, Kyoung-Jae Kim
Abstract:
The purpose of this study is to investigate whether friendship in social media can be an important factor in recommender systems, through social-scientific analysis of friendship in popular social media such as Facebook and Twitter. For this purpose, this study analyzes data on friendship in real social media using component analysis and clique analysis, which are sub-group analysis methods in social network analysis. We propose an algorithm to reflect the results of the sub-group analysis in the recommender system. The key to this algorithm is to ensure that items preferred by a user's friends are more likely to be reflected in that user's recommendations. As a result of this study, outcomes of various sub-group analyses were derived, and it was confirmed that these results differed from the results of the existing recommender system. Therefore, it is considered that the results of the sub-group analysis affect the recommendation performance of the system. Future research will attempt to generalize the results of the research through further analysis of various social data.
Keywords: sub-group analysis, social media, social network analysis, recommender systems
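The clique-extraction step of the sub-group analysis can be sketched with NetworkX. The friendship edges and the rule for boosting items liked by clique-mates are illustrative assumptions, not the paper's exact weighting scheme.

```python
# Sub-group (clique) analysis on a toy friendship graph with NetworkX.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("ann", "bob"), ("bob", "cat"), ("ann", "cat"),
                  ("cat", "dan"), ("dan", "eve")])   # hypothetical friendships

# Maximal cliques = tightly knit friend groups.
cliques = [c for c in nx.find_cliques(G) if len(c) >= 3]
print("cliques:", cliques)                           # [['ann', 'bob', 'cat']]

# Boost item scores by how many clique-mates liked them (assumed rule).
likes = {"bob": {"item1"}, "cat": {"item1", "item2"}, "dan": {"item3"}}
user = "ann"
mates = {m for c in cliques if user in c for m in c if m != user}
scores = {}
for m in mates:
    for item in likes.get(m, ()):
        scores[item] = scores.get(item, 0) + 1
print("recommendations for", user, ":",
      sorted(scores, key=scores.get, reverse=True))  # item1 ranked first
```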
Procedia PDF Downloads 365
4606 Transformer Design Optimization Using Artificial Intelligence Techniques
Authors: Zakir Husain
Abstract:
The main objective of a power transformer design optimization problem is to minimize the total overall cost and/or mass of the winding and core material while satisfying all constraints imposed by the standards and transformer user requirements. The constraints include appropriate limits on the winding fill factor, temperature rise, efficiency, no-load current, and voltage regulation. The design optimization task is a constrained minimum cost and/or mass solution obtained by optimally setting the parameters, geometry, and required magnetic properties of the transformer. In this paper, the above design problems have been formulated by using the genetic algorithm (GA) and simulated annealing (SA) on the MATLAB platform. The importance of the presented approach stems from two main features. First, the proposed technique provides a reliable and efficient solution for the problem of design optimization with several variables. Second, the obtained solution is guaranteed to be a global optimum. This paper includes a demonstration of the application of the genetic programming (GP) technique to transformer design.
Keywords: optimization, power transformer, genetic algorithm (GA), simulated annealing technique (SA)
Procedia PDF Downloads 584
4605 The Use of Venous Glucose, Serum Lactate and Base Deficit as Biochemical Predictors of Mortality in Polytraumatized Patients: A Comparison with the Trauma and Injury Severity Score and Acute Physiology and Chronic Health Evaluation IV
Authors: Osama Moustafa Zayed
Abstract:
Aim of the work: To evaluate the effectiveness of venous glucose, serum lactate levels, and base deficit in polytraumatized patients as simple parameters to predict mortality in these patients, compared to the predictive value of the Trauma and Injury Severity Score (TRISS) and the Acute Physiology and Chronic Health Evaluation IV (APACHE IV). Introduction: Trauma is a serious global health problem, accounting for approximately one in 10 deaths worldwide. Trauma accounts for 5 million deaths per year. Prediction of mortality in trauma patients is an important part of trauma care. Several trauma scores have been devised to predict injury severity and risk of mortality. The Trauma and Injury Severity Score (TRISS) is the most commonly used. Regardless of its accuracy, it is based on an anatomical description of every injury and cannot be assigned to a patient until a full diagnostic procedure has been performed. So we hypothesized that alterations in admission glucose, lactate levels, and base deficit would be early and easily obtained predictors of mortality. Patients and Methods: A comparative cross-sectional study of 282 polytraumatized patients who attended the Emergency Department (ED) of the Suez Canal University Hospital. The period from 1/1/2012 to 1/4/2013 was included. Results: We found that the best cut-off value of the TRISS probability-of-survival score for prediction of mortality among polytraumatized patients is 90, with 77% sensitivity and 89% specificity, using the area under the ROC curve (0.89) at 95% CI. APACHE IV demonstrated 67% sensitivity and 95% specificity at 95% CI at a cut-off point of 99. The best cut-off value of Random Blood Sugar (RBS) for prediction of mortality was >140 mg/dl, with 89% sensitivity and 49% specificity. The best cut-off value of base deficit for prediction of mortality was less than -5.6, with 64% sensitivity and 93% specificity. The best cut-off point of lactate for prediction of mortality was >2.6 mmol/L, with 92% sensitivity and 42% specificity. Conclusion: According to our results for all evaluated predictors of mortality (laboratory parameters and scores), based on the estimated cut-off values using ROC curve analysis, the highest risk of mortality was found using a cut-off value of 90 for the TRISS score, while among the laboratory parameters the highest risk of mortality was with serum lactate >2.6. All three parameters are accurate in predicting mortality in polytraumatized patients and close to one another: the area under the curve was 0.82 for serum lactate, 0.79 for BD, and 0.77 for RBS.
Keywords: APACHE IV, emergency department, polytraumatized patients, serum lactate
Procedia PDF Downloads 295
4604 Signal Integrity Performance Analysis in Capacitively and Inductively Coupled Very Large Scale Integration Interconnect Models
Authors: Mudavath Raju, Bhaskar Gugulothu, B. Rajendra Naik
Abstract:
The rapid advances in Very Large Scale Integration (VLSI) technology have resulted in the reduction of minimum feature size to sub-quarter microns and of switching time to tens of picoseconds or even less. As a result, high-speed digital circuits degrade due to signal integrity issues such as coupling effects, clock feedthrough, crosstalk noise, and delay uncertainty noise. Crosstalk noise in VLSI interconnects is a major concern, and its reduction has become more important for high-speed digital circuits. It is most critical in Deep Sub-Micron (DSM) and Ultra Deep Sub-Micron (UDSM) technologies. Increasing the spacing between the aggressor and victim lines is one technique to reduce crosstalk. Guard trace or shield insertion between the aggressor and victim is also one of the prominent options for the minimization of crosstalk. In this paper, far-end crosstalk noise is estimated with a mutual-inductance-and-capacitance RLC interconnect model. The extent of crosstalk in capacitively and inductively coupled interconnects is also investigated, with the aim of minimizing it through the shield insertion technique.
Keywords: VLSI, interconnects, signal integrity, crosstalk, shield insertion, guard trace, deep sub micron
Procedia PDF Downloads 186
4603 Microbial Diversity Assessment in Household Point-of-Use Water Sources Using Spectroscopic Approach
Authors: Syahidah N. Zulkifli, Herlina A. Rahim, Nurul A. M. Subha
Abstract:
Sustaining water quality is critical in order to avoid any harmful health consequences for end-user consumers. The detection of microbial impurities at the household level is the foundation of water security. Water quality is currently monitored only at water utilities or infrastructure, such as water treatment facilities or reservoirs. This research provides a first-hand scientific understanding of the microbial composition present in Malaysia's household point-of-use (POU) water supplies, influenced by seasonal fluctuations, standstill periods, and flow dynamics, by using the NIR-Raman spectroscopic technique. According to the findings, 20% of water samples were contaminated by pathogenic bacteria, namely Legionella and Salmonella cells. A comparison of the spectra reveals significant signature peaks (420 cm⁻¹ to 1800 cm⁻¹), including species-specific bands. This demonstrates the importance of regularly monitoring POU water quality to provide a safe and clean water supply to homeowners. Conventional Raman spectroscopy is, to date, not suited for real-time monitoring. Therefore, this study introduced an alternative micro-spectrometer to give a rapid and sustainable way of monitoring POU water quality. Assessing microbiological threats in the water supply becomes more reliable and efficient by leveraging an IoT protocol.
Keywords: microbial contaminants, water quality, water monitoring, Raman spectroscopy
Procedia PDF Downloads 110
4602 An Effective Noise Resistant Frequency Modulation Continuous-Wave Radar Vital Sign Signal Detection Method
Authors: Lu Yang, Meiyang Song, Xiang Yu, Wenhao Zhou, Chuntao Feng
Abstract:
To address the problem that human vital sign signals extracted by frequency-modulated continuous-wave (FMCW) radar are susceptible to noise interference and suffer from low reconstruction accuracy, a new detection scheme for these signals is proposed. Firstly, an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) algorithm is applied to decompose the radar-extracted thoracic signals to obtain several intrinsic mode functions (IMFs) with different spatial scales; then the IMF components are optimized by a BP neural network improved by an immune genetic algorithm (IGA). The simulation results show that this scheme can effectively separate the noise, accurately extract the respiratory and heartbeat signals, and improve the reconstruction accuracy and signal-to-noise ratio of the vital sign signals.
Keywords: frequency modulated continuous wave radar, ICEEMDAN, BP neural network, vital signs signal
Procedia PDF Downloads 166
4601 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method
Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park
Abstract:
3D shape models of existing structures are required for many purposes, such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which is expensive and time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique for measuring a 3D shape model quickly and accurately. TLS is used in construction sites and cultural heritage management. However, there are many limitations to processing a TLS point cloud, because the raw point cloud is a massive volume of data, so the capability of carrying out useful analyses is also limited with unstructured 3D points. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. This paper presents a clustering approach based on a fuzzy method for this objective. The Fuzzy C-Means (FCM) algorithm is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to the point cloud acquired with a Leica ScanStation C10/C5 at the test bed. The test bed was a bridge which connects the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea. It is about 32 m long and 2 m wide and was used as a pedestrian walkway between the two buildings. The 3D point cloud of the test bed was constructed by a TLS measurement. These data were divided by the segmentation algorithm into individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are shown to be promising. The result can be used to manage the configuration of each member, thanks to the segmentation of the point cloud.
Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)
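The FCM updates at the heart of the segmentation are compact. The sketch below clusters synthetic 3D points as a stand-in for the bridge point cloud; the fuzzifier m, the cluster count, and the omission of the similarity-driven cluster merging step are simplifying assumptions.

```python
# Fuzzy C-Means updates on a toy 3D point cloud (illustrative sketch).
import numpy as np

def fcm(points, k=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(points), k))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ points) / W.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # membership ~ d^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

pts = np.vstack([np.random.default_rng(i).normal(i * 5, 1, (100, 3))
                 for i in range(3)])           # three synthetic "members"
centers, U = fcm(pts)
labels = U.argmax(axis=1)                      # hard labels for visualisation
print(centers.round(2))
```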
Procedia PDF Downloads 234