Search results for: facial features
140 A Microcontroller Implementation of Model Predictive Control
Authors: Amira Abbes Kheriji, Faouzi Bouani, Mekki Ksouri, Mohamed Ben Ahmed
Abstract:
Model Predictive Control (MPC) is increasingly being proposed for real-time applications and embedded systems. However, compared to the PID controller, the implementation of MPC in miniaturized devices such as Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically remained limited because of its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. Recent advances in microelectronics and software now allow such a technique to be implemented in embedded systems. In this work, we take advantage of these advances to deploy one of the most studied and applied control techniques in industrial engineering. Specifically, we propose an efficient framework for the implementation of Generalized Predictive Control (GPC) on the STM32 microcontroller. The STM32 Keil starter kit, based on a JTAG interface and the STM32 board, was used to implement the proposed GPC firmware. Besides the GPC, a PID anti-windup algorithm was also implemented using Keil development tools designed for ARM processor-based microcontroller devices, working with the C/Cµ language. A performance comparison study was carried out between both firmwares, showing good execution speed and low computational burden. These results encourage the development of simple predictive algorithms to be programmed in industrial standard hardware. The main features of the proposed framework are illustrated through two examples and compared with the anti-windup PID controller.
Keywords: Embedded systems, Model Predictive Control, microcontroller, Keil tool.
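As a rough illustration of the GPC law referenced above, the following Python sketch computes the unconstrained control increment from a step-response matrix; the horizon sizes, step-response samples and weighting lam are illustrative assumptions, and this is not the authors' STM32 C firmware.

```python
import numpy as np

def gpc_first_move(G, f, w, lam):
    """Unconstrained GPC: minimize ||w - (G @ dU + f)||^2 + lam * ||dU||^2
    and apply only the first increment (receding-horizon principle)."""
    Nu = G.shape[1]
    dU = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T @ (w - f))
    return dU[0]

# toy numbers: prediction horizon 5, control horizon 2
g = np.array([0.4, 0.7, 0.9, 1.0, 1.0])                    # unit step-response samples
G = np.column_stack([np.r_[np.zeros(j), g[:5 - j]] for j in range(2)])
f = np.full(5, 0.2)                                        # predicted free response
w = np.ones(5)                                             # reference trajectory
print(gpc_first_move(G, f, w, lam=0.8))
```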
139 Frictional Effects on the Dynamics of a Truncated Double-Cone Gravitational Motor
Authors: Barenten Suciu
Abstract:
In this work, effects of the friction and truncation on the dynamics of a double-cone gravitational motor, self-propelled on a straight V-shaped horizontal rail, are evaluated. Such a mechanism has a variable radius of contact and, on one hand, is similar to a pulley mechanism that changes potential energy into the kinetic energy of rotation, while on the other hand, it is similar to a pendulum mechanism that converts the potential energy of the suspended body into the kinetic energy of translation along a circular path. Movies of the self-propelled double-cones, made of S45C carbon steel and wood, along rails made of aluminum alloy, were shot for various opening angles of the rails. Kinematical features of the double-cones were estimated through slow-motion processing of the recorded movies. Then, a kinematical model was derived under the assumption that the distance traveled by the contact points on the rectilinear rails is identical to the distance traveled by the contact points on the truncated conical surface. Additionally, a dynamic model for this particular contact problem was proposed and validated against the experimental results. Based on such a model, the traction force and the traction torque acting on the double-cone are identified. It is proved that the rolling traction force is always smaller than the sliding friction force, i.e., the double-cone rolls without slipping. Results obtained in this work can be used to achieve the proper design of such a gravitational motor.
Keywords: Truncated double-cone, friction, rolling and sliding, dynamic model, gravitational motor.
138 High-Rises and Urban Design: The Reasons for Unsuccessful Placemaking with Residential High-Rises in England
Authors: E. Kalcheva, A. Taki, Y. Hadi
Abstract:
High-rises and placemaking is an understudied combination which is receiving more and more interest with the proliferation of this typology in many British cities. The reason for studying three major cities in England: London, Birmingham and Manchester, is to learn from the latest advances in urban design in a well-developed and prominent urban environment. The analysis of several high-rise sites reveals the weaknesses in the urban design of contemporary British cities and presents an opportunity to learn from the implemented examples. Therefore, the purpose of this research is to analyze design approaches towards creating a sustainable and varied urban environment when high-rises are involved. The research questions raised by the study are: what is the quality of high-rises and their surroundings; what facilities and features are deployed in the research area; what is the role of the high-rise buildings in the placemaking process; what urban design principles are applicable in this context. The methodology utilizes observation of the researched areas through structured questions, developed by the author to evaluate the outdoor qualities of the high-rise surroundings. In this context, the paper argues that the quality of the public realm around the high-rises is quite low, missing basic but vital elements such as plazas, public art, and seating, along with landscaping and pocket parks. There is a lack of coherence, the rhythm of the streets is often disrupted, and even though the high-rises are very aesthetically appealing, they fail to create a sense of place on their own. The implications of the study are that future planning can take into consideration the critique in this article and provide more opportunities for urban design interventions around high-rise buildings in British cities.
Keywords: High-rises, placemaking, urban design, townscape.
137 Personalized Applications for Advanced Healthcare through AI-ML and Blockchain
Authors: Anuja Vyas, Aikel Indurkhya, Hari Krishna Garg
Abstract:
Nearly 25 years have passed since the landmark publication of the Human Genome Project, yet scientists have only begun to scratch the surface of its potential benefits. To bridge this gap, a personalized genomic application has been envisioned as a transformative tool accessible to people worldwide. This innovative solution proposes an integrated framework combining blockchain technology, genome-specific applications, and data compression techniques, ensuring that operations are swift, secure, transparent, and space-efficient. The software harnesses advanced Artificial Intelligence and Machine Learning methodologies, such as neural networks, evaluation matrices, fuzzy logic, and expert systems, to analyze individual genomic data. It generates personalized reports by comparing a user's genome with a reference genome, highlighting significant differences. Blockchain technology, with its inherent security, encryption, and immutability features, is leveraged for robust data transport and storage. In addition, a 'Data Abbreviation' technique ensures that genetic data and reports occupy minimal space. This integrated approach promises to be a significant leap forward, potentially transforming human health and well-being on a global scale.
Keywords: Artificial intelligence in genomics, blockchain technology, data abbreviation, data compression, data security in genomics, data storage, expert systems, fuzzy logic, genome applications, genomic data analysis, human genome project, neural networks, personalized genomics.
136 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
This paper presents the prediction performance of a feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with the extended Kalman filter. Feedforward neural networks and ESN are powerful neural networks which can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, carrying the risk of instability accompanied by large errors. In this study, we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP and ESN networks. Major problems of ESN training, such as the initialization of the network and the improvement of prediction performance, are tackled. The influence of the activation function coefficient in the hidden layer and of other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential and regulation learning rate of the feedforward neural networks. This training approach has vital features in the training of the network when the signals have a chaotic or non-stationary sequential pattern. Minimization of the variance in each step of the computation, and hence smoothing of the tracking, was observed in the results, indicating satisfactory tracking characteristics under certain conditions. In addition, simulation results confirmed the satisfactory performance of both neural networks with modified parameterization in tracking nonlinear signals.
Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
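A minimal NumPy sketch of the leaky-integrator echo state update that the network size and leaking rate parameters above refer to; the reservoir size, spectral radius scaling and drive signal are assumptions, and the EKF-based readout training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def esn_update(x, u, W, W_in, alpha):
    """Leaky-integrator echo state update: x <- (1-alpha)*x + alpha*tanh(W x + W_in u)."""
    return (1.0 - alpha) * x + alpha * np.tanh(W @ x + W_in * u)

n = 100                                          # reservoir size (network size)
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, n)
alpha = 0.3                                      # leaking rate

x = np.zeros(n)
for u in np.sin(0.3 * np.arange(200)):           # drive the reservoir with a toy signal
    x = esn_update(x, u, W, W_in, alpha)
```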
135 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D Mannequin is a kind of assisting tool which can rapidly realize design concepts. When the concept of the 3D Mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D Mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D Mannequin system with Kinect. Ergonomic measurements of objective human features can be attained in real time through the depth camera of the Kinect, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothness, a complete human feature is reconstructed by the ICP algorithm together with image processing methods. The objective human feature can also be recognized in order to analyze and obtain real measurements. Furthermore, the ergonomic measurements can be applied to shape morphing for the division of the 3D Mannequin reconstructed by feature curves. Since a standardized and customer-oriented 3D Mannequin can be generated through subdivision, this research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D Mannequin system was constructed with a JAVA program in this study, and its practicability was verified through experiments.
Keywords: 3D Mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision.
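A compact NumPy/SciPy sketch of one ICP iteration of the kind used above to align and reconstruct scanned point clouds; the synthetic point cloud, rotation and iteration count are illustrative assumptions, not the authors' JAVA implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: match nearest neighbours, then find the rigid
    transform (R, t) that best aligns source to its matched target points."""
    _, idx = cKDTree(target).query(source)     # closest target point per source point
    matched = target[idx]
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t

# toy usage: align a rotated, shifted copy of a point cloud back to the original
rng = np.random.default_rng(1)
target = rng.random((500, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ Rz.T + 0.05
for _ in range(20):
    source = icp_step(source, target)
```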
134 Constructing Masculinity through Images: Content Analysis of Lifestyle Magazines in Croatia
Authors: Marija Lončar, Zorana Šuljug Vučica, Magdalena Nigoević
Abstract:
Diverse social, cultural and economic trends and changes in contemporary societies influence the ways masculinity is represented in a variety of media. Masculinity is constructed within media images as a dynamic process that changes slowly over time and is shaped by various social factors. In many societies, dominant masculinity is still associated with authority, heterosexuality, marriage, professional and financial success, ethnic dominance and physical strength. But contemporary media depict men in ways that suggest a change in the approach to media images. The number of media images of men which promote men’s identity through their body has increased. With the male body more scrutinized and commodified, it is necessary to highlight how the body is represented and which visual elements are crucial, since the body has an important role in the construction of masculinities. The study includes a content analysis of male body images in the advertisements of different men’s and women’s lifestyle magazines available in Croatia. The main aim was to explore how masculinities are currently being portrayed through the body regarding age, physical appearance, fashion, touch and gaze. The findings are also discussed in relation to female images, since women are central in many of the processes constructing masculinities, and in light of recent conceptualizations of masculinity. Although the construction of male images varies through body features, almost all of them convey the message that men’s identity can be managed through manipulation and by enhancing the appearance. Furthermore, they suggest that men should engage in “bodywork” through advertised products, activities and/or practices, in order to achieve their preferred social image.
Keywords: Body images, content analysis, lifestyle magazines, masculinity.
133 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets
Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira
Abstract:
We have developed a new computer program in Fortran 90 to obtain numerical solutions of a system of Relativistic Magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to their ejection. Initially, we carried out a study of one-dimensional Finite Volume numerical methods, namely the Lax-Friedrichs, Lax-Wendroff and Nessyahu-Tadmor methods and Godunov-type methods dependent on Riemann problems, applied to the Euler equations, in order to verify their main features and make comparisons among them. The central Finite Volume method of Nessyahu-Tadmor, a numerical scheme whose formulation is free of Riemann problem solvers and of dimensional splitting even in two or more spatial dimensions, was then implemented and applied to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions - without spurious oscillations or excessive dissipation - of the magnetized accretion disk process in rotation around a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record in terms of astrophysical simulations of this kind. In our simulations, we also managed to obtain jet substructures. A great advantage is that, with our code, the GRMHD equations can be simulated on a simple personal computer.
Keywords: Finite Volume Methods, Central Schemes, Fortran 90, Relativistic Astrophysics, Jet.
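As a hedged illustration of the central Nessyahu-Tadmor scheme described above, the sketch below applies its one-dimensional staggered form to the inviscid Burgers equation in Python; the grid, CFL number and flux are toy assumptions, and the authors' Fortran 90 GRMHD solver is far more involved.

```python
import numpy as np

def minmod(a, b):
    """MinMod slope limiter used by the Nessyahu-Tadmor reconstruction."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def nt_step(u, dt, dx, f=lambda q: 0.5 * q ** 2):
    """One staggered Nessyahu-Tadmor step for u_t + f(u)_x = 0 on a periodic grid.
    The returned values live on the staggered grid (shifted by dx/2)."""
    lam = dt / dx
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)        # limited slopes of u
    fu = f(u)
    dfu = minmod(fu - np.roll(fu, 1), np.roll(fu, -1) - fu)   # limited slopes of f(u)
    u_half = u - 0.5 * lam * dfu                              # midpoint predictor
    f_half = f(u_half)
    up1, dup1, fp1 = np.roll(u, -1), np.roll(du, -1), np.roll(f_half, -1)
    return 0.5 * (u + up1) + 0.125 * (du - dup1) - lam * (fp1 - f_half)

# usage: Burgers equation with a sine initial profile
N, L = 200, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
u = np.sin(x)
dx = L / N
for _ in range(100):
    dt = 0.4 * dx / max(1e-12, np.max(np.abs(u)))             # CFL condition
    u = nt_step(u, dt, dx)
```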
132 Distinctive Features of Legal Relations in the Area of Subsoil Use, Renewal and Protection in Ukraine
Authors: N. Maksimentseva
Abstract:
The issue of public administration in subsoil use, renewal and protection is of high importance for Ukraine, since it is strongly linked to the energy security of the state and should enable the people of Ukraine to efficiently exercise their proprietary rights to natural resources and the redistribution of national wealth. As stipulated in Article 11 of the Subsoil Code of Ukraine (the Code), the authorities that administer the industry are limited to central executive bodies and local governments. In particular, the Code stipulates that Ukraine’s Cabinet of Ministers carries out public administration in geological exploration, production and protection of subsoil. Other state bodies of public administration include the central public authority responsible for state environmental protection policies; the central public authority in charge of implementation of state geological exploration and efficient subsoil use policies; and the central authority in charge of state health and safety control policies. There are also public authorities in the Autonomous Republic of Crimea, local executive bodies and other state authorities, and local self-government authorities, in compliance with the laws of Ukraine. This article is devoted to the analysis of legal relations in the area of public administration of subsoil use, renewal and protection in Ukraine. The main approaches to studying the essence of legal relations in the named area, as well as its tasks, functions and methods, are analyzed. It is concluded that legal relationships in the field of public administration of subsoil use, renewal and protection are characterized by the specifics of their task (development of natural resources).
Keywords: Legal relations, public administration, Subsoil Code of Ukraine, subsoil use, renewal and protection.
131 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach
Authors: Ashly Joseph, Jithu Paulose
Abstract:
In an era where misleading information may quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and Blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and performing preprocessing on the data to ensure its quality and uniformity. NLP methods are utilized to extract complete linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input for an RL model. This model acquires the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The Blockchain component guarantees the unchangeability and safety of verified news records, while encouraging user engagement in detecting and fighting false information through a token-based incentive system. The suggested framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.
Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.
130 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record
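A small NumPy sketch of the feature-enrichment step described above, concatenating a hypothetical lexical semantic vector with a word2vec embedding for each medical term and sliding one convolution filter over the result; the vocabulary, dimensions and random vectors are placeholders, and the full LSV-SDG-CNN model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical lookup tables: word2vec embeddings (dim 100) and
# lexical semantic vectors (dim 20) for each medical term in a record
vocab = ["fever", "cough", "wheezing", "rash"]
w2v = {t: rng.standard_normal(100) for t in vocab}
lsv = {t: rng.standard_normal(20) for t in vocab}

record = ["fever", "cough", "wheezing"]

# concatenate the two representations token by token -> (seq_len, 120)
X = np.stack([np.concatenate([w2v[t], lsv[t]]) for t in record])

# one 1D convolution filter of width 2 over the enriched sequence
kernel = rng.standard_normal((2, X.shape[1]))
feature_map = np.array([np.sum(X[i:i + 2] * kernel) for i in range(len(record) - 1)])
print(feature_map.shape)   # (seq_len - kernel_width + 1,)
```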
129 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement of the area under the curve (AUC). The FSM technique itself does not improve the model significantly, but FSM, together with a machine learning technique called an ensemble, further improves the model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
128 Fake Account Detection in Twitter Based on Minimum Weighted Feature Set
Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world; for those users, their social life, and even their practical life, has become interrelated with these platforms. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity of social networking has led to different problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during live events. This situation can result in huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter, and the determined factors are then applied using different classification techniques. A comparison of the results of these techniques has been performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent researches in the same area; this comparison has proved the accuracy of the proposed study. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, the study can be applied to different social network sites such as Facebook, with minor changes according to the nature of the social network, which are discussed in this paper.
Keywords: Fake accounts detection, classification algorithms, Twitter accounts analysis, features based techniques.
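A minimal scikit-learn sketch of the comparison step described above: several classifiers are cross-validated on the same reduced feature set and the most accurate one is kept; the four account features and synthetic labels are assumptions, not the study's actual Twitter data or factor set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# hypothetical minimal account features: followers, friends, statuses, has_profile_image
X = rng.random((1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # 1 = fake, 0 = genuine (synthetic rule)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "naive_bayes": GaussianNB(),
}

# compare techniques on the same reduced feature set and keep the most accurate
scores = {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in classifiers.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```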
127 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of Eigen values of covariance matrices, the Circular Hough Transform and Bresenham's raster scan algorithm. In this approach, we use the fact that the large and small Eigen values of the covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the Circular Hough Transform (CHT). The sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which uses the geometrical symmetry property. This method does not require the evaluation of tangents or curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results about the accuracy of the proposed method, comparisons with the Hough transform and its variants and other tangent-based methods are reported.
Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
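A short NumPy sketch of the eigenvalue step described above: for points sampled on an ellipse boundary, the covariance eigenvalues relate to the squared semi-axes, so the major and minor axial lengths can be recovered; the synthetic ellipse is an assumption, and the CHT centre detection and raster scan stages are omitted (the mean is used as a crude centre estimate here).

```python
import numpy as np

# synthetic ellipse boundary points: semi-axes a=5, b=2, rotated and translated
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 2000)
a, b, phi = 5.0, 2.0, 0.4
pts = np.column_stack([a * np.cos(t), b * np.sin(t)])
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
pts = pts @ R.T + [10.0, -3.0]

# eigenvalues of the covariance matrix relate to the axial lengths:
# for boundary points, lambda_max ~ a^2/2 and lambda_min ~ b^2/2
cov = np.cov(pts.T)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
a_est, b_est = np.sqrt(2 * eigvals)
centre_est = pts.mean(axis=0)
print(a_est, b_est, centre_est)   # approx 5, 2, (10, -3)
```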
126 Measuring the Influence of Functional Proximity on Environmental Urban Performance via Integrated Modification Methodology: Four Study Cases in Milan
Authors: M. Tadi, M. Hadi Mohammad Zadeh, Ozge Ogut
Abstract:
Although how cities’ forms are structured is studied, more efforts are needed on systemic comprehension and evaluation of urban morphology through quantitative metrics that are able to describe the performance of a city in relation to its formal properties. More research is required in this direction in order to better describe urban form characteristics and their impact on the environmental performance of cities and to increase their sustainability stewardship. With the aim of developing a better understanding of the built environment’s systemic structure, the intention of this paper is to present a holistic methodology for studying the behavior of the built environment and to investigate methods for measuring the effect of urban structure on environmental performance. This goal will be pursued through an inquiry into the morphological components of urban systems and the complex relationships between them. In particular, this paper focuses on proximity, referring to the proximity of different land-uses, a concept with which the Integrated Modification Methodology (IMM) explains how land-use allocation might affect the choice of mobility in neighborhoods and, especially, encourage or discourage non-motorized mobility. This paper uses proximity to demonstrate that structural attributes can quantifiably relate to the performing behavior of the city. The target is to devise a mathematical pattern from the structural elements and correlate it directly with urban performance indicators concerned with environmental sustainability. The paper presents some results of this rigorous investigation of urban proximity and its correlation with performance indicators in four different areas in the city of Milan, each of them characterized by different morphological features.
Keywords: Built environment, ecology, sustainable indicators, sustainability, urban morphology.
125 A Zero-Cost Collar Option Applied to Materials Procurement Contracts to Reduce Price Fluctuation Risks in Construction
Authors: H. L. Yim, S. H. Lee, S. K. Yoo, J. J. Kim
Abstract:
This study proposes a materials procurement contract model to which the zero-cost collar option is applied for hedging price fluctuation risks in construction. The material contract model based on the collar option consists of the call option striking zone of the construction company (the buyer), following a materials price increase, and the put option striking zone of the material vendor (the supplier), following a materials price decrease. This study first determined the call option strike price Xc of the construction company by a simple approach: it uses the predicted profit at the project starting point and then determines the strike price of the put option Xp that has an identical option value, which completes the zero-cost material contract. The analysis results indicate that the cost saving of the construction company increased as Xc decreased. This was because the critical level of the steel materials price increase was set at a low level. However, as Xc decreased, the Xp of a put option that had an identical option value gradually increased. Cost saving increased as Xc decreased; however, as Xp gradually increased, the risk of loss for the construction company increased as the steel materials price decreased. Meanwhile, cost saving did not occur for the construction company because of volatility. This result originates in the zero-cost feature of the two-way contract of the collar option. In the case of the regular one-way option, the transaction cost has to be subtracted from the cost saving. The transaction cost originates from an option value that fluctuates with the volatility; that is, the cost saving of the one-way option is affected by the volatility. Meanwhile, even though the collar option with zero transaction cost cuts the connection between volatility and cost saving, there is a risk of exercising the put option.
Keywords: Construction materials, supply chain management, procurement, payment, collar option.
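A hedged numerical sketch of the zero-cost collar construction described above, using Black-Scholes pricing as a stand-in valuation model (the abstract does not state which pricing model was used): given the buyer's call strike Xc, the put strike Xp is solved so that the two premiums cancel. All price, volatility and rate figures are illustrative assumptions.

```python
from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def bs_put(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

# illustrative steel-price parameters (all hypothetical)
S, T, r, sigma = 100.0, 1.0, 0.03, 0.25   # spot price, maturity, rate, volatility
Xc = 110.0                                # buyer's call strike, chosen from predicted profit

call_value = bs_call(S, Xc, T, r, sigma)

# zero-cost collar: find the put strike Xp whose premium equals the call premium
Xp = brentq(lambda K: bs_put(S, K, T, r, sigma) - call_value, 1.0, Xc)
print(round(Xp, 2), round(call_value, 4))
```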
124 Effect of Exit Annular Area on the Flow Field Characteristics of an Unconfined Premixed Annular Swirl Burner
Authors: Vishnu Raj, Chockalingam Prathap
Abstract:
The objective of this study was to explore the impact of variation in the exit annular area on the local flow field features and the flame stability of an annular premixed swirl burner (unconfined) operated with a premixed n-butane air mixture at an equivalence ratio (Φ) = 1, 1 bar, and 300 K. A swirl burner with an axial swirl generator having a swirl number of 1.5 was used. Three different burner heads were chosen so that the exit area increased to 100%, 160%, and 220%, resulting in inner and outer diameters and cross-sectional areas of (1) 10 mm and 15 mm, 98 mm2; (2) 17.5 mm and 22.5 mm, 157 mm2; and (3) 25 mm and 30 mm, 216 mm2. The bulk velocity and Reynolds number, based on the hydraulic diameter and unburned gas properties, were kept constant at 12 m/s and 4000. (i) Planar Particle Image Velocimetry (PIV) with TiO2 seeding particles and (ii) CH* chemiluminescence were used to measure the velocity fields and reaction zones of the swirl flames at 5 Hz, respectively. Velocity fields and the jet spreading rates measured under isothermal and reactive conditions revealed that the presence of a flame significantly altered the flow field in the radial direction due to gas expansion. Important observations from the flame measurements were the height and maximum width of the recirculation bubbles normalized by the hydraulic diameter and the jet spreading angles, which for the three exit area cases were: (a) 4.52, 1.95, 34◦; (b) 6.78, 2.37, 26◦; and (c) 8.73, 2.32, 22◦. The lean blowout (LBO) was also measured, and the respective equivalence ratios were 0.80, 0.92, and 0.82. The LBO was relatively narrow for the 157 mm2 case. For this case, PIV measurements showed that the Turbulent Kinetic Energy and turbulent intensity were relatively high compared to the other two cases, resulting in higher stretch rates and a narrower LBO.
Keywords: Chemiluminescence, jet spreading rate, lean blow out, swirl flow.
123 Numerical Investigation of Nozzle Shape Effect on Shock Wave in Natural Gas Processing
Authors: Esam I. Jassim, Mohamed M. Awad
Abstract:
Natural gas flow contains undesirable solid particles, liquid condensation, and/or oil droplets and requires reliable removal equipment to perform filtration. Recent natural gas processing applications demand compactness and reliability of process equipment. Since conventional means are sophisticated in design, poor in efficiency, and continue to lack robustness, a supersonic nozzle has been introduced as an alternative means to meet such demands. A 3-D convergent-divergent nozzle is simulated using a commercial code for nozzle pressure ratios (NPR) varying from 1.2 to 2. Six different nozzle shapes are numerically examined to illustrate the position of the shock wave, as this spot could be considered a benchmark for particle separation. Rectangular, triangular, circular, elliptical, pentagonal, and hexagonal nozzles are simulated using the Fluent code, all having the same cross-sectional area. The simple one-dimensional inviscid theory does not describe the actual features of the fluid flow precisely, as it ignores the impact of nozzle configuration on the flow properties. CFD simulation results, however, show that nozzle geometry influences the flow structures, including the location of the shock wave. The CFD analysis predicts shock appearance when p01/pa > 1.2 for almost all geometries, located at the lower area ratio (Ae/At). Simulation results showed that, at relatively small NPR, the shock wave in the elliptical nozzle has the farthest distance from the throat among the others; as NPR increases, the hexagonal nozzle becomes the farthest. The numerical results are compared with available experimental data and show good agreement in terms of shock location and flow structure.
Keywords: CFD, particle separation, shock wave, supersonic nozzle.
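For reference, the quasi-one-dimensional inviscid relations that the abstract contrasts with the CFD results can be evaluated as below: the supersonic Mach number for a given area ratio and the corresponding total-to-static pressure ratio. The gas is treated as air with gamma = 1.4, an assumption, and none of the 3-D shape effects discussed above are captured.

```python
from scipy.optimize import brentq

gamma = 1.4   # treated as air; raw natural gas would need its own specific-heat ratio

def area_ratio(M):
    """Isentropic area-Mach relation A/A* for a given Mach number M."""
    return (1.0 / M) * ((2.0 / (gamma + 1)) * (1 + 0.5 * (gamma - 1) * M ** 2)) ** (
        (gamma + 1) / (2 * (gamma - 1)))

def supersonic_mach(A_over_Astar):
    """Supersonic root of the area-Mach relation."""
    return brentq(lambda M: area_ratio(M) - A_over_Astar, 1.0 + 1e-6, 10.0)

def p0_over_p(M):
    """Isentropic total-to-static pressure ratio at Mach M."""
    return (1 + 0.5 * (gamma - 1) * M ** 2) ** (gamma / (gamma - 1))

# quasi-1D estimate at a local area ratio Ae/At = 1.2 in the divergent section
M = supersonic_mach(1.2)
print(round(M, 3), round(p0_over_p(M), 3))
```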
122 Innovative Design Considerations for Adaptive Spacecraft
Authors: K. Parandhama Gowd
Abstract:
Space technologies have changed the way we live in present-day society and manage many aspects of our daily affairs through remote sensing, navigation and communications. Further, defense and military usage of spacecraft has increased tremendously along with civilian purposes. The number of satellites deployed in space in Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and the Geostationary Orbit (GEO) has gone up. The dependency on remote sensing and operational capabilities will almost invariably be exploited more and more in the future. Every country is acquiring spacecraft in one way or another for its daily needs, and spacecraft numbers are likely to increase significantly and create spacecraft traffic problems. The aim of this research paper is to propose innovative design concepts for adaptive spacecraft. The main idea here is to improve existing methods of spacecraft design and development, and to further refine design considerations for futuristic adaptive spacecraft with inbuilt features for automatic adaptability and self-protection. In other words, the innovative design considerations proposed here are intended for future spacecraft with self-organizing capabilities for orbital control and protection from anti-satellite weapons (ASAT). Here, an attempt is made to propose the design and development of futuristic spacecraft for 2030 and beyond, given the tremendous advancements expected in VVLSI, miniaturization, and nano antenna array technologies, including nano technologies.
Keywords: Satellites, low earth orbit, medium earth orbit, geostationary earth orbit, self-organizing control system, anti-satellite weapons, orbital control, radar warning receiver, missile warning receiver, laser warning receiver, attitude and orbit control systems, command and data handling.
121 LIDAR Obstacle Warning and Avoidance System for Unmanned Aircraft
Authors: Roberto Sabatini, Alessandro Gardi, Mark A. Richardson
Abstract:
The availability of powerful eye-safe laser sources and the recent advancements in electro-optical and mechanical beam-steering components have allowed laser-based Light Detection and Ranging (LIDAR) to become a promising technology for obstacle warning and avoidance in a variety of manned and unmanned aircraft applications. LIDAR's outstanding angular resolution and accuracy characteristics are coupled with its good detection performance in a wide range of incidence angles and weather conditions, providing an ideal obstacle avoidance solution, which is especially attractive in low-level flying platforms such as helicopters and small-to-medium size Unmanned Aircraft (UA). The Laser Obstacle Avoidance Marconi (LOAM) system is one such system, which was jointly developed and tested by SELEX-ES and the Italian Air Force Research and Flight Test Centre. The system was originally conceived for military rotorcraft platforms and, in this paper, we briefly review the previous work and discuss in more detail some of the key development activities required for the integration of LOAM on UA platforms. The main hardware and software design features of this LOAM variant are presented, including a brief description of the system interfaces and sensor characteristics, together with the system performance models and data processing algorithms for obstacle detection, classification and avoidance. In particular, the paper focuses on the algorithm proposed for optimal avoidance trajectory generation in UA applications.
Keywords: LIDAR, Low-Level Flight, Nap-of-the-Earth Flight, Near Infra-Red, Obstacle Avoidance, Obstacle Detection, Obstacle Warning System, Sense and Avoid, Trajectory Optimisation, Unmanned Aircraft.
120 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging
Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan
Abstract:
In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. The qualitative parameters, such as apparent mineralization and total area associated with the PC region, are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level 4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The parameters mean and median, computed at the output of the level 4 Haar wavelet channel, were found to be useful predictors to delineate the normal and abnormal groups.
Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.
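A small sketch, using the PyWavelets package, of the level-4 Haar decomposition and the sub-band statistics mentioned above; the random array stands in for a segmented primary-compressive region, and the mode parameter from the original six statistics is omitted.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# stand-in for a segmented primary-compressive region of a femur radiograph
roi = rng.random((256, 256))

# 4-level 2D Haar decomposition: coeffs[0] is the level-4 approximation,
# coeffs[1] holds (horizontal, vertical, diagonal) detail bands at level 4
coeffs = pywt.wavedec2(roi, "haar", level=4)
approx = coeffs[0]
horizontal = coeffs[1][0]

def summarize(band):
    """Statistical parameters used to characterize a wavelet sub-band."""
    band = band.ravel()
    return {
        "mean": band.mean(),
        "median": np.median(band),
        "std": band.std(),
        "mad_mean": np.mean(np.abs(band - band.mean())),
        "mad_median": np.median(np.abs(band - np.median(band))),
    }

print(summarize(approx))
print(summarize(horizontal))
```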
119 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario's Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting the primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.
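A minimal pandas/scikit-learn sketch of the screening-plus-benchmarking workflow described above: IQR-based outlier removal on energy use intensity, then a regression on building age, size and occupancy; the synthetic table is a stand-in, not the Ontario disclosure dataset.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# stand-in for the public disclosure table: one row per residence building
df = pd.DataFrame({
    "floor_area_m2": rng.uniform(2000, 30000, 200),
    "year_built": rng.integers(1950, 2015, 200),
    "weeks_occupied": rng.integers(30, 52, 200),
    "energy_kwh": rng.uniform(5e5, 8e6, 200),
})
df["eui_kwh_m2"] = df["energy_kwh"] / df["floor_area_m2"]

# data screening: drop outliers outside 1.5*IQR of energy use intensity
q1, q3 = df["eui_kwh_m2"].quantile([0.25, 0.75])
iqr = q3 - q1
clean = df[df["eui_kwh_m2"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

# benchmark model: estimate EUI from age, size and occupancy schedule
X = clean[["year_built", "floor_area_m2", "weeks_occupied"]]
y = clean["eui_kwh_m2"]
model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_)), model.intercept_)
```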
118 Control of Vibrations in Flexible Smart Structures using Fast Output Sampling Feedback Technique
Authors: T.C. Manjunath, B. Bandyopadhyay
Abstract:
This paper features the modeling and design of a Fast Output Sampling (FOS) Feedback control technique for the Active Vibration Control (AVC) of a smart flexible aluminium cantilever beam for a Single Input Single Output (SISO) case. Controllers are designed for the beam by bonding patches of piezoelectric layer as sensor / actuator to the master structure at different locations along the length of the beam by retaining the first 2 dominant vibratory modes. The entire structure is modeled in state space form using the concept of piezoelectric theory, Euler-Bernoulli beam theory, Finite Element Method (FEM) and the state space techniques by dividing the structure into 3, 4, 5 finite elements, thus giving rise to three types of systems, viz., system 1 (beam divided into 3 finite elements), system 2 (4 finite elements), system 3 (5 finite elements). The effect of placing the sensor / actuator at various locations along the length of the beam for all the 3 types of systems considered is observed and the conclusions are drawn for the best performance and for the smallest magnitude of the control input required to control the vibrations of the beam. Simulations are performed in MATLAB. The open loop responses, closed loop responses and the tip displacements with and without the controller are obtained and the performance of the proposed smart system is evaluated for vibration control.
Keywords: Smart structure, Finite element method, State space model, Euler-Bernoulli theory, SISO model, Fast output sampling, Vibration control, LMI
117 Crash Severity Modeling in Urban Highways Using Backward Regression Method
Authors: F. Rezaie Moghaddam, T. Rezaie Moghaddam, M. Pasbani Khiavi, M. Ali Ghorbani
Abstract:
Identifying and classifying intersections according to severity is very important for the implementation of safety-related countermeasures, and effective models are needed to compare and assess the severity. Highway safety organizations have considered intersection safety among their priorities. In spite of significant advances in highway safety, large numbers of crashes with high severities still occur on highways. Investigation of the factors influencing crashes enables engineers to carry out calculations in order to reduce crash severity. Previous studies lacked a model capable of simultaneously illustrating the influence of human factors, road, vehicle, weather conditions and traffic features, including traffic volume and flow speed, on crash severity. Thus, this paper is aimed at developing models to illustrate the simultaneous influence of these variables on crash severity in urban highways. The models represented in this study have been developed using binary logit models. SPSS software has been used to calibrate the models; the backward regression method in SPSS was used to identify the significant variables in the model. Considering the obtained results, it can be concluded that the main factors increasing crash severity in urban highways are driver age, movement with reverse gear, technical defects of the vehicle, vehicle collisions with motorcycles and bicycles, bridges, frontal impact collisions, frontal-lateral collisions and multi-vehicle crashes.
Keywords: Backward regression, crash severity, speed, urban highways.
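A compact statsmodels sketch of a binary logit fitted with backward elimination, the procedure named above (the study itself used SPSS); the synthetic crash records, predictor names and significance threshold are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# synthetic crash records: 1 = severe crash, 0 = non-severe
n = 500
df = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "reverse_gear": rng.integers(0, 2, n),
    "vehicle_defect": rng.integers(0, 2, n),
    "frontal_impact": rng.integers(0, 2, n),
    "flow_speed": rng.uniform(30, 110, n),
})
logit_p = -3 + 0.03 * df["driver_age"] + 1.2 * df["vehicle_defect"] + 0.02 * df["flow_speed"]
df["severe"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

def backward_logit(data, target, alpha=0.05):
    """Binary logit with backward elimination: drop the least significant
    predictor until every remaining p-value is below alpha."""
    predictors = [c for c in data.columns if c != target]
    while predictors:
        model = sm.Logit(data[target], sm.add_constant(data[predictors])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        if pvals.max() <= alpha:
            return model
        predictors.remove(pvals.idxmax())
    return None

result = backward_logit(df, "severe")
print(result.summary())
```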
116 Comparative Analysis of Control Techniques Based Sliding Mode for Transient Stability Assessment for Synchronous Multicellular Converter
Authors: Rihab Hamdi, Amel Hadri Hamida, Fatiha Khelili, Sakina Zerouali, Ouafae Bennis
Abstract:
This paper features a comparative performance study of sliding mode controllers (SMC) for the closed-loop voltage control of direct current to direct current (DC-DC) three-cell buck converters connected in parallel, operating in continuous conduction mode (CCM): SMC based on pulse-width modulation (PWM) versus SMC based on hysteresis modulation (HM), where an adaptive feedforward technique is adopted. On one hand, for the PWM-based SM, the approach is to incorporate a fixed-frequency PWM scheme which is effectively a variant of SM control. On the other hand, for the HM-based SM, an adaptive feedforward control that makes the hysteresis band variable in the hysteresis modulator of the SM controller is introduced, with the aim of restricting the switching frequency variation in the case of any change of the line input voltage or output load. The results obtained under load change, input change and reference change clearly demonstrate a similar dynamic response of both proposed techniques; their effectiveness lies in fast and smooth tracking of the desired output voltage. The PWM-based SM technique greatly improves the dynamic behavior and is slightly advantageous compared to the HM-based SM technique, while providing stability in all operating conditions. Simulation studies in the MATLAB/Simulink environment have been performed to verify the concept.
Keywords: Sliding mode control, pulse-width modulation, hysteresis modulation, DC-DC converter, parallel multi-cells converter, robustness.
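A hedged single-cell sketch of the HM-based sliding mode idea discussed above: a first-order sliding surface on the output-voltage error is fed to a hysteresis comparator that drives a buck converter model; all component values, the surface gain and the band width are illustrative assumptions, and the three-cell parallel structure and adaptive band are not reproduced.

```python
# single-cell buck converter parameters (illustrative values only)
Vin, L, C, R = 24.0, 1e-3, 100e-6, 10.0
v_ref = 12.0
dt, steps = 1e-6, 60000
lam, band = 5e4, 5e4      # surface gain and hysteresis band (tuning assumptions)

def sliding_surface(v, i):
    """s = lam*e + de/dt, with e = v_ref - v and C*dv/dt = i - v/R."""
    e = v_ref - v
    de = -(i - v / R) / C
    return lam * e + de

def hysteresis_switch(s, u_prev):
    """HM-based SM: the switch toggles only when s leaves the hysteresis band,
    which bounds the switching frequency at the cost of a voltage ripple."""
    if s > band:
        return 1.0
    if s < -band:
        return 0.0
    return u_prev

i, v, u = 0.0, 0.0, 0.0
for _ in range(steps):
    u = hysteresis_switch(sliding_surface(v, i), u)
    i += dt * (u * Vin - v) / L       # inductor current dynamics
    v += dt * (i - v / R) / C         # capacitor voltage dynamics
print(round(v, 2))                    # chatters in a band around v_ref
```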
115 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization
Authors: V. H. Mankar, T. S. Das, S. K. Sarkar
Abstract:
In this paper, we have proposed a novel blind watermarking architecture towards its hardware implementation in VLSI. In order to facilitate this hardware realization, the cellular automata (CA) concept is introduced. The CA has already been accepted as an attractive structure for VLSI implementation because of its modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are very few in number, in spite of their excellent resiliency against signal impairments. This is because of the computational cost and complexity associated with their different filter banks and lifting techniques. The concept of cellular automata theory has been incorporated in order to form a new transform domain technique, i.e., the Cellular Automata Transform (CAT). Since CA provide spreading sequences having very low cross-correlation properties, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual CA-based blocks of the algorithm provide better results than some other methods, irrespective of the hardware or software technique. The Cellular Automata Transform, the CA-based PN sequence generator, and the CA ECC are the requisite blocks that are developed not only to meet the reliable hardware requirements but also to provide the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the existing allocated bandwidth in the data transmission channel in a more efficient manner.
Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.
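A small Python sketch of a CA-based pseudorandom (PN) sequence generator of the kind mentioned above, using a one-dimensional binary automaton with Wolfram rule 30; the register width, seeding and centre-cell tap are assumptions, not the authors' hardware design.

```python
import numpy as np

def ca_pn_sequence(length, width=64, rule=30, seed=1):
    """Pseudorandom bit sequence from a one-dimensional binary cellular automaton.

    A register of `width` cells is evolved with the given Wolfram rule (rule 30
    is a common choice for pseudorandom generation); the centre cell is tapped
    at every step to build the spreading sequence.
    """
    rule_bits = [(rule >> k) & 1 for k in range(8)]
    cells = np.zeros(width, dtype=np.uint8)
    cells[seed % width] = 1
    out = np.empty(length, dtype=np.uint8)
    for n in range(length):
        out[n] = cells[width // 2]                  # tap the centre cell
        left, right = np.roll(cells, 1), np.roll(cells, -1)
        idx = (left << 2) | (cells << 1) | right    # 3-cell neighbourhood code
        cells = np.array([rule_bits[i] for i in idx], dtype=np.uint8)
    return out

pn = 2 * ca_pn_sequence(1024).astype(int) - 1       # map {0,1} -> {-1,+1} spreading chips
print(pn[:16], pn.mean())
```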
114 Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches
Authors: Shilpy Sharma
Abstract:
As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which include information on a specific domain, were proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords showing the topic of interest) [2]. Better support is needed for expressing one's information need and for returning high-quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-structured machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is an information access system that allows access to all the information on the web that is relevant to a particular domain. The proposed work shows that, with the help of this approach, relevant data can be extracted with the minimum number of queries fired by the user. It requires a small number of labeled data and a pool of unlabeled data on which the learning algorithm is applied to extract the required data.
Keywords: Search engines, machine learning, information retrieval, active logic.
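A minimal scikit-learn sketch of the active learning loop implied above: a classifier trained on a small labeled set repeatedly queries the most uncertain document from the unlabeled pool; the synthetic features, relevance rule and number of rounds are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# synthetic document features (e.g. TF-IDF-like scores); 1 = relevant to the domain
X = rng.random((2000, 20))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]                    # unlabeled pool

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                                    # 20 user-labeling rounds
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain document
    labeled.append(query)                              # the user (oracle) labels it
    pool.remove(query)

print(clf.score(X, y))
```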
113 The Influence of Architectural-Planning Structure of Cities on Their Sustainable Development
Authors: M. Kashiripoor
Abstract:
Existing indicators for sustainable urban development do not identify the features of cities’ planning structures and their architecture. For Iranian cities, the problem of assessing the conformity of their planning and development with the concept of sustainable development is especially relevant. Based on theoretical sources, the author concludes that, despite the existence of common indicators for the sustainable development of settlements, specialized evaluation criteria for city planning structure have not been developed. The author attempts to fill this gap and puts forward a system of indicators characterizing the level of development of the architectural-planning structure of the city. The proposed system of indicators is designed based on technical and economic urban standard indicators from different countries. Alternative design systems and the requirements of modern rating systems such as LEED-ND comprise criteria for the evaluation of urban structures in accordance with the principles of "green" building and New Urbanism. Urban development trends are close in spirit to sustainable development and developed under its influence. The study allowed concluding that a system of indicators to identify the conformity of the architectural-planning structure of the city with the requirements of sustainable development should be adapted to the conditions of each country, particularly Iran. The article attempts a typology of the proposed indicators, which are presented in tabular form and divided into two types: planning and spatial. The proposed system of indicators is derived from urban planning standards and rating systems such as LEED-ND, BREEAM Community and CASBEE-UD.
Keywords: Architectural-planning structure of cities, urban planning indicators, urban space indicators.
112 The Two Layers of Food Safety and GMOs in the Hungarian Agricultural Law
Authors: Gergely Horváth
Abstract:
The study presents the complexity of food safety by dividing it into two layers. Beyond the basic layer of requirements, there is a more demanding higher level linked with quality and purity aspects. It would be important to give special prominence to both layers, given that mass illnesses are caused by foods even though they are officially licensed. The study then discusses an exciting safety challenge stemming from the risks of genetically modified organisms (GMOs). Furthermore, it features legal case examples that illustrate how certain liability questions are solved, or not yet decided, in connection with the production of genetically modified crops. In addition, a special kind of land grabbing, more precisely land grabbing from non-GMO farming systems, can also be noticed, as well as a new phenomenon eroding food sovereignty. Coexistence, the state where organic, conventional, and GM farming systems stand alongside each other, is an unsuitable experiment that cannot be successful for biophysical reasons (such as cross-pollination). Agricultural and environmental lawyers both try to find the optimal solution. Agri-environmental measures are introduced as a special subfield of law that also maintains food safety. The important steps of agri-environmental legislation aim at the protection of natural values and the environmental media, and at strengthening food safety as well, practically the quality of agricultural products intended for human consumption. The major findings of the study focus on searching for an approach capable of solving the security and safety problems of food production. The most interesting concepts of the Hungarian national and EU food law legislation are analyzed in more detail with descriptive, analytic and comparative methods.
Keywords: Food law, food safety, food security, GMO, agri-environmental measures.
111 Changes of Poultry Meat Chemical Composition, in Relationship with Lighting Schedule
Authors: P. C. Boisteanu, M. G. Usturoi, Roxana Lazar, B. V. Avarvarei
Abstract:
The paper is part of a complex research program initiated from the hypothesis that there is a correlation between pineal indolic and peptide hormones and the somatic development rhythm, implying the involvement of the epithalamus-epiphysis complex. In birds, the pineal gland contains a circadian oscillator, playing a main role in the temporal organization of cerebral functions. The secretion of pineal indolic hormones is characterized by a high endogenous rhythmic alternation, modulated by the light/darkness (L/D) succession and by temperature as well. The research was carried out using 100 chicken broilers of the "Ross" commercial hybrid, randomly allocated into two experimental batches: the Lc batch, reared under a 12L/12D lighting schedule, and the Lexp batch, which was photically pinealectomised through continuous exposure to light (150 lux, 24 hours, 56 days). The chemical and physical features of the meat issued from breast fillet and thigh muscles were studied, determining the dry matter, protein, fat, collagen and salt content and the pH value. Besides the variations of meat chemical composition in relation to the lighting schedule, other parameters were studied: live weight dynamics, feed intake and degree of somatic development. The achieved results became significant from 7 days of age onwards, when variations of the studied parameters were registered, revealing that the physiological activity of the pineal gland, in relation to the lighting schedule, can be interpreted through the monitoring of the technological parameters of somatic development usually studied in chicken broiler rearing practice.
Keywords: Lighting schedule, physico-chemical characteristics of meat, pineal gland in birds.