Search results for: inverse category
1185 Monstrous Beauty: Disability and Illness in Contemporary Pop Culture
Authors: Grzegorz Kubinski
Abstract:
In the proposed paper, we would like to present the phenomenon of disease and disability as an element of discourse redefining the contemporary canons of beauty and the category of normativity. In the broadly understood media, above all in social media and the fashion industry, the use of disease as an aesthetic category has long been observed. There is an interesting case of promoting and maintaining a certain ideal pattern of physical beauty while, at the same time, various types of illnesses are very clearly exploited. The categories of disease and the disabled body are shown as elements of the expression of the individuality and originality of one's own identity, while at the same time the disabled person still experiences social exclusion. Illness or bodily abnormality as an aesthetic category also functions as an ethical-political category. The analysis of the interrelations of these discourses will be presented using the example of selected projects present in social media, such as Instagram or Facebook. We would like to present how old forms of 'curiosities' or 'abnormalities' turned into mainstream forms of a new aesthetic. For marginalized disabled people, this offers a new form of expression and a way to build their identity. But there is an interesting point: are these contemporary forms of using disability and illness really new? Or are they just another form of the Wunderkammer, or even the cabinet of curiosities? We propose to analyze the contemporary cultural and social context in order to clarify this issue. On the other hand, we would like to present some examples from personal interviews with disabled internet influencers and statements by disabled persons concerning the role of the different body in society (e.g., #bodypositive, #perfeclyflawed).
Keywords: disability, new media, defect, fashion
Procedia PDF Downloads 186
1184 Inverse Prediction of Thermal Parameters of an Annular Hyperbolic Fin Subjected to Thermal Stresses
Authors: Ashis Mallick, Rajeev Ranjan
Abstract:
The closed-form solution for thermal stresses in an annular fin with a hyperbolic profile is derived using the Adomian decomposition method (ADM). A conductive-convective fin with variable thermal conductivity is considered in the analysis. The nonlinear heat transfer equation is efficiently solved by ADM considering insulated convective boundary conditions at the tip of the fin. The constant of integration in the solution is estimated using the minimum decomposition error method. The temperature field solution is represented in polynomial form for convenient use in the thermo-elasticity equation. The non-dimensional thermal stress fields are obtained using the ADM solution of the temperature field coupled with the thermo-elasticity solution. The influence of the various thermal parameters on the temperature and stress fields is presented. In order to show the accuracy of the ADM solution, the present results are compared with results available in the literature. The stress fields in the fin with a hyperbolic profile are compared with those of a uniform thickness profile. Results show that the hyperbolic fin profile is a better choice for enhancing heat transfer. Moreover, lower thermal stresses develop in the hyperbolic profile compared to the rectangular profile. Next, a Nelder-Mead based simplex search method is employed for the inverse estimation of the unknown non-dimensional thermal parameters for a given stress field. Owing to the correlated nature of the unknowns, the best combinations of the model parameters satisfying the predefined stress field are estimated. The stress fields calculated using the inverse parameters agree very well with the stress fields obtained from the forward solution. The estimated parameters are suitable for efficient and cost-effective fin design.
Keywords: Adomian decomposition, inverse analysis, hyperbolic fin, variable thermal conductivity
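A minimal sketch of the inverse step described in this abstract: the Nelder-Mead simplex method minimizes the mismatch between a forward stress model and a prescribed stress field. The polynomial forward model and the parameter names below are placeholders, not the authors' ADM solution.

```python
# Sketch: Nelder-Mead inverse estimation of two non-dimensional thermal
# parameters from a target stress field. The forward model is a stand-in.
import numpy as np
from scipy.optimize import minimize

xi = np.linspace(0.0, 1.0, 50)               # non-dimensional radial coordinate

def forward_stress(params, xi):
    """Placeholder forward model mapping two thermal parameters to a stress field."""
    a, b = params
    return a * (1.0 - xi**2) + b * xi * (1.0 - xi)

true_params = np.array([0.8, 0.3])           # parameters used to create the "measured" field
target = forward_stress(true_params, xi)

def objective(params):
    # Sum of squared residuals between predicted and prescribed stress.
    return np.sum((forward_stress(params, xi) - target) ** 2)

result = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-10})
print("estimated parameters:", result.x)     # should approach [0.8, 0.3]
```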
Procedia PDF Downloads 326
1183 Lightweight Concrete Fracture Energy Derived by Inverse Analysis
Authors: Minho Kwon, Seonghyeok Lee, Wooyoung Jung
Abstract:
In recent years, with the increase in construction of skyscraper structures, the study of concrete materials to improve their weight and performance has been emerging as a key research area. Typically, concrete structures have the disadvantage of high weight due to their mass relative to the strength of the material. Therefore, in order to address such problems, lightweight aggregate concrete and high-strength concrete materials have been studied during the past decades. On the other hand, the study of lightweight aggregate concrete materials suffers from a relative lack of data compared to concrete structures using high-strength materials. Consequently, this study presents the performance characteristics of lightweight aggregate concrete materials in terms of material properties and strength. This study also conducted experimental tests on normal and lightweight aggregate materials in order to identify the tensile crack failure of the concrete structures. As a result, the Crack Mouth Opening Displacement (CMOD) was obtained from the experimental tests, and the fracture energy was derived from the force-CMOD relationship using inverse problem analysis.
Keywords: lightweight aggregate concrete, crack mouth opening displacement, inverse analysis, fracture energy
Procedia PDF Downloads 355
1182 Off-Grid Sparse Inverse Synthetic Aperture Imaging by Basis Shift Algorithm
Authors: Mengjun Yang, Zhulin Zong, Jie Gao
Abstract:
In this paper, a new and robust algorithm is proposed to achieve high resolution for inverse synthetic aperture radar (ISAR) imaging in the compressive sensing (CS) framework. Traditional CS-based methods have to assume that the unknown scatterers lie exactly on the pre-divided grid; otherwise, their reconstruction performance drops significantly. In the proposed algorithm, several basis shifts are utilized to achieve the same effect as grid refinement. The detailed implementation of the basis shift algorithm is presented in this paper. The simulations show that imaging precision can be improved by using the basis shift algorithm. The effectiveness and feasibility of the proposed method are confirmed by the simulation results.
Keywords: ISAR imaging, sparse reconstruction, off-grid, basis shift
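For context, a minimal on-grid sparse reconstruction baseline (orthogonal matching pursuit) of the kind the basis-shift idea refines is sketched below; the dictionary, sparsity level, and data are assumed for illustration and this is not the authors' algorithm.

```python
# Sketch: greedy orthogonal matching pursuit on a fixed (pre-divided) grid.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3                               # measurements, grid size, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)       # measurement/dictionary matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                     # noiseless measurements

def omp(A, y, k):
    """Pick the atom most correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, k)
print("recovered support:", np.nonzero(x_hat)[0], "true support:", np.nonzero(x_true)[0])
```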
Procedia PDF Downloads 263
1181 An Improved Sub-Nyquist Sampling Jamming Method for Deceiving Inverse Synthetic Aperture Radar
Authors: Yanli Qi, Ning Lv, Jing Li
Abstract:
The sub-Nyquist sampling jamming method (SNSJ) is a well-known deception jamming method for inverse synthetic aperture radar (ISAR). However, countering the SNSJ method is relatively easy, since the amplitude of the false-target images is weaker than that of the real-target image, the false-target images always lag behind the real-target image, and all targets are located in the same cross-range. In order to overcome the drawbacks mentioned above, a simple modulation based on SNSJ (M-SNSJ) is presented in this paper. The method first uses an amplitude modulation factor to make the amplitude of the false-target images consistent with the real-target image, then uses a down-range modulation factor and a cross-range modulation factor to make the false-target images move freely in down-range and cross-range, respectively; thus the capacity for deception is improved. Finally, simulation results for the six available combinations of the three modulation factors are given to illustrate our conclusion.
Keywords: inverse synthetic aperture radar (ISAR), deceptive jamming, Sub-Nyquist sampling jamming method (SNSJ), modulation based on Sub-Nyquist sampling jamming method (M-SNSJ)
Procedia PDF Downloads 214
1180 Reductive Control in the Management of Redundant Actuation
Authors: Mkhinini Maher, Knani Jilani
Abstract:
In this work, we present the performance of a mobile omnidirectional robot by evaluating its management of actuation redundancy, leading to the predictive control that was implemented. The distribution of the wrench over the robot's actuators, through the Moore-Penrose pseudo-inverse, corresponds to a geometric distribution of efforts. We show that the load on the vehicle's wheels is not equally distributed, depending on the wheel configuration and the robot's movement. Thus, the sliding threshold is not the same for the three wheels of the vehicle. We suggest exploiting the actuation redundancy to reduce the risk of wheel sliding and thereby improve the robot's displacement accuracy. This kind of approach has previously been studied for legged robots.
Keywords: mobile robot, actuation, redundancy, omnidirectional, inverse pseudo moore-penrose, reductive control
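A small sketch of the pseudo-inverse distribution of a desired body wrench among redundant wheel actuators. A planar platform with four actuators and a generic actuation matrix is assumed here purely for illustration (the authors' robot has three wheels); the values are placeholders.

```python
# Sketch: minimum-norm ("geometric") distribution of a planar wrench
# among redundant actuators via the Moore-Penrose pseudo-inverse.
import numpy as np

# Assumed actuation matrix: each column maps one actuator force to the
# body wrench (Fx, Fy, Mz). 3 degrees of freedom, 4 actuators -> redundancy.
J = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.2, -0.2, -0.2, 0.2]])

wrench = np.array([5.0, 0.0, 0.5])     # desired (Fx [N], Fy [N], Mz [N*m])

# Pseudo-inverse gives the minimum-norm actuator forces reproducing the wrench.
forces = np.linalg.pinv(J) @ wrench
print("actuator forces:", forces)
print("reconstructed wrench:", J @ forces)
```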
Procedia PDF Downloads 508
1179 A Methodology for Automatic Diversification of Document Categories
Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim
Abstract:
Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually provided with a specific category for the convenience of the users. In the past, categorization was performed manually. However, in the case of manual categorization, not only can the accuracy of the categorization not be guaranteed, but the categorization also requires a large amount of time and huge costs. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be assigned to one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training on a multi-categorized document set. These methods therefore cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
Keywords: big data analysis, document classification, multi-category, text mining, topic analysis
Procedia PDF Downloads 271
1178 Application of Unconventional Materials for ‘Statement Jewellery’
Authors: Shaleni Bajpai, V. Niveditha
Abstract:
A fashion accessory is a product used in a secondary way to support the wearer's outfit. The term came into use in the 19th century, and such items are specifically chosen to complement the wearer's look. The aim of the project was to introduce unconventional materials for statement jewellery. The materials used for the statement jewellery were waste CDs and scrap fabric. These materials were amalgamated with traditional raw materials such as beads, sequins, charms and chains to form unique jewellery sets. The sets were divided into two categories based on the type of raw material used, i.e. Category 1: Clef-Cd Jewellery and Category 2: Crumb-Fabric Jewellery. Each jewellery set consisted of a necklace, a pair of earrings, a ring and a bracelet.
Keywords: statement jewellery, unconventional, crumb fabric, Cd's
Procedia PDF Downloads 256
1177 A Closed Loop Audit of Pre-operative Transfusion Samples in Orthopaedic Patients at a Major Trauma Centre
Authors: Tony Feng, Rea Thomson, Kathryn Greenslade, Ross Medine, Jennifer Easterbrook, Calum Arthur, Matilda Powell-bowns
Abstract:
There are clear guidelines on taking group and screen samples (G&S) for elective arthroplasty and major trauma. However, there is limited guidance on blood grouping for other trauma patients. The purpose of this study was to review the level of blood grouping at a major trauma centre and validate a protocol that limits the expensive processing of G&S samples. After reviewing the national guidance on transfusion samples in orthopaedic patients, data were prospectively collected for all orthopaedic admissions to the Royal Infirmary of Edinburgh between January and February 2023. The cause of admission, the number of G&S samples processed on arrival and the need for red cells were collected using the hospital blood bank. A new protocol was devised based on a multidisciplinary meeting, which limited the requirement for G&S samples to presentations in “category X”, including neck-of-femur fractures (NOFs), pelvic fractures and major trauma. A re-audit was completed between April and May after departmental education and institution of this protocol. 759 patients were admitted under orthopaedics in the major trauma centre across the two separate months. 47% of patients were admitted with presentations falling in category X (354/759), and patients in this category accounted for 88% (92/104) of those requiring post-operative red cell transfusions. Of these, 51% were attributed to NOFs (47/92). In the initial audit, 50% of trauma patients outwith category X had samples sent (116/230), estimated to cost £3800. Of these 230 patients, 3% required post-operative transfusions (7/230). In the re-audit, 23% of patients outwith category X had samples sent (40/173), estimated to cost £1400, of which 3% (5/173) required transfusions. None of the transfusions in these patients in either audit were related to their operation, and the protocol achieved an estimated cost saving of £2400 over one month. This study highlights the importance of sending samples for patients with certain categories of orthopaedic trauma (category X) due to the high demand for post-operative transfusions. However, the absence of transfusion requirements in other presentations suggests over-testing. While implementation of the new protocol has markedly reduced over-testing, additional interventions are required to reduce it further.
Keywords: blood transfusion, quality improvement, orthopaedics, trauma
Procedia PDF Downloads 75
1176 SEM Image Classification Using CNN Architectures
Authors: Güzin Tirkeş, Özge Tekin, Kerem Kurtuluş, Y. Yekta Yurtseven, Murat Baran
Abstract:
A scanning electron microscope (SEM) is a type of electron microscope mainly used in the nanoscience and nanotechnology areas. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these uses, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE - 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm at 80% and 20%, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception ResNet-V2 model was used with a fine-tuning approach. Using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results, since it contains other categories of the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, raising the accuracy up to 96.5%.
Keywords: convolutional neural networks, deep learning, image classification, scanning electron microscope
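A hedged sketch of the fine-tuning setup described above, using the Keras InceptionResNetV2 backbone. The directory paths, image size, number of epochs, and learning rates are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch: two-stage fine-tuning of InceptionResNetV2 for SEM image classes
# (8 categories after dropping the coated-surface class). Paths are hypothetical.
import tensorflow as tf

IMG_SIZE, NUM_CLASSES = (299, 299), 8

train_ds = tf.keras.utils.image_dataset_from_directory(
    "sem_images/train", image_size=IMG_SIZE, batch_size=32)   # hypothetical path
val_ds = tf.keras.utils.image_dataset_from_directory(
    "sem_images/val", image_size=IMG_SIZE, batch_size=32)     # hypothetical path

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False                        # stage 1: train only the new head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)

# Stage 2: unfreeze the backbone and continue at a low learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```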
Procedia PDF Downloads 124
1175 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative for feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets from the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed using the remaining 80% of the dataset, where SW performed the best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by the bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
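A minimal Smith-Waterman local alignment score over token sequences, of the kind used above to compare word sequences between documents. The match, mismatch, and gap scores and the example sentences are assumed values, not the paper's configuration.

```python
# Sketch: Smith-Waterman local alignment score between two token lists.
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between token lists a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are clipped at zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

doc1 = "patient presents with morbid obesity and hypertension".split()
doc2 = "history of morbid obesity with controlled hypertension".split()
print(smith_waterman_score(doc1, doc2))   # higher score = more shared word sequences
```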
Procedia PDF Downloads 296
1174 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization
Authors: Aitor Bilbao, Dragos Axinte, John Billingham
Abstract:
The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will also influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent, distinct technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, and the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually defined as discrete systems, and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough and the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation
Procedia PDF Downloads 275
1173 Evaluation of Merger Premium and Firm Performance in Europe
Authors: Matthias Nnadi
Abstract:
This paper investigates the relationship between premiums and returns in the short and long term in European merger and acquisition (M&A) deals. The study employs the Calendar Time Portfolio (CTP) model and finds strong evidence that, in the long run, premiums have a positive impact on performance; we also establish evidence of a significant difference between the abnormal returns of the high-premium-paying portfolio and the low-premium-paying ones. Even in cases where all sub-portfolios show negative abnormal returns, the high premium category still outperforms the low premium category. Our findings have implications for companies engaging in acquisitions.
Keywords: mergers, premium, performance, returns, acquisitions
Procedia PDF Downloads 276
1172 Nonlinear Adaptive PID Control for a Semi-Batch Reactor Based on an RBF Network
Authors: Magdi. M. Nabi, Ding-Li Yu
Abstract:
Control of a semi-batch polymerization reactor using an adaptive radial basis function (RBF) neural network method is investigated in this paper. A neural network inverse model is used to estimate the valve position of the reactor; this method can identify the controlled system with the RBF neural network identifier. The weights of the adaptive PID controller are adjusted in a timely manner based on the identification of the plant and the self-learning capability of the RBF neural network. A PID controller is used in the feedback loop to regulate the actual temperature by compensating the output of the neural network inverse model. Simulation results show that the proposed control scheme has strong adaptability and robustness and that satisfactory control performance of the nonlinear system is achieved.
Keywords: Chylla-Haase polymerization reactor, RBF neural networks, feed-forward, feedback control
Procedia PDF Downloads 701
1171 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems
Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang
Abstract:
Aiming at the problem that the spectral efficiency of hybrid precoding (HP) is too low in current millimeter wave (mmWave) massive multiple input multiple output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on the introduction of digital encoding matrix iteration. First, the objective function is expanded to obtain the relation equation, and the pseudo-inverse iterative function of the analog encoder is derived using the pseudo-inverse method; this solves the problem of the greatly increased amount of computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Second, the analog coding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, the equivalent channel is subjected to Singular Value Decomposition (SVD) to obtain a digital coding matrix, and the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog encoding matrix. The simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms, and the stability is also improved.
Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel
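A numerical sketch of the digital step described above: form the equivalent channel from an analog precoder, then take its SVD to obtain the digital (baseband) precoder. The dimensions, the dense stand-in channel, and the randomly drawn phase-only analog precoder are assumptions for illustration.

```python
# Sketch: equivalent-channel SVD step of a hybrid precoder.
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, Nrf, Ns = 64, 4, 8, 4        # Tx antennas, Rx antennas, RF chains, streams

# Channel matrix (dense stand-in for the sparse mmWave channel).
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Phase-only (constant-modulus) analog precoder, Nt x Nrf.
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf))) / np.sqrt(Nt)

# Equivalent channel seen by the digital precoder.
H_eq = H @ F_rf

# SVD of the equivalent channel; the leading right singular vectors give
# the digital precoder for Ns streams.
_, _, Vh = np.linalg.svd(H_eq)
F_bb = Vh.conj().T[:, :Ns]

# Normalize so the hybrid precoder meets the total power constraint.
F_bb *= np.sqrt(Ns) / np.linalg.norm(F_rf @ F_bb, "fro")
print("hybrid precoder shape:", (F_rf @ F_bb).shape)
```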
Procedia PDF Downloads 92
1170 Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy
Authors: Tatiana A. Dolenko, Sergey A. Burikov, Alexander O. Efitorov, Sergey A. Dolenko
Abstract:
In this study, a comparative analysis is performed of approaches associated with the use of neural network algorithms for the effective solution of a complex inverse problem: identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions from Raman scattering spectra. It is shown that the application of artificial neural networks provides an average accuracy of determination of the concentration of each salt no worse than 0.025 M. The results of a comparative analysis of input data compression methods are presented. It is demonstrated that the use of uniform aggregation of input features decreases the error of determination of the individual component concentrations by 16-18% on average.
Keywords: inverse problems, multi-component solutions, neural networks, Raman spectroscopy
Procedia PDF Downloads 527
1169 Amelioration of Stability and Rheological Properties of a Crude Oil-Based Drilling Mud
Authors: Hammadi Larbi, Bergane Cheikh
Abstract:
Drilling for oil is done through many mechanisms. The goal is first to dig deep and then, after arriving at the oil source, to simply suck it up. For this, it is important to know the role of oil-based drilling muds, which have many benefits for the drilling tool and for drilling generally, and, essentially, to know the rheological behavior of the emulsion system, in particular water-in-oil inverse emulsions (water/crude oil). This work contributes to the improvement of the stability and rheological properties of a crude oil-based drilling mud by organophilic clay. Experimental data from steady-state flow measurements of the crude oil-based drilling mud are classically analyzed with the Herschel-Bulkley model. The effects of organophilic clay of type VG69 are studied. Microscopic observation showed that the addition of quantities of organophilic clay VG69 less than or equal to 3 g leads to stable inverse water/oil emulsions; on the other hand, for quantities greater than 3 g, the emulsions are destabilized.
Keywords: drilling, organophilic clay, crude oil, stability
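A hedged sketch of fitting the Herschel-Bulkley model, tau = tau0 + K*gamma_dot**n, to steady-state flow data; the flow-curve data below are synthetic, not the measured mud.

```python
# Sketch: least-squares fit of the Herschel-Bulkley model to a flow curve.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    # tau0: yield stress [Pa], K: consistency index, n: flow behavior index
    return tau0 + K * gamma_dot**n

# Synthetic flow curve (shear rate in 1/s, shear stress in Pa) with noise.
gamma_dot = np.linspace(1, 500, 30)
rng = np.random.default_rng(2)
tau = herschel_bulkley(gamma_dot, 8.0, 0.9, 0.55) + rng.normal(0, 0.3, gamma_dot.size)

params, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[1.0, 1.0, 1.0],
                      bounds=([0, 0, 0], [np.inf, np.inf, 1.5]))
tau0, K, n = params
print(f"yield stress tau0 = {tau0:.2f} Pa, consistency K = {K:.2f}, index n = {n:.2f}")
```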
Procedia PDF Downloads 123
1168 Modeling of System Availability and Bayesian Analysis of Bivariate Distribution
Authors: Muhammad Farooq, Ahtasham Gul
Abstract:
To meet the desired standard, it is important to monitor and analyze different engineering processes to obtain the desired output. Bivariate distributions have received a lot of attention in recent years for describing the randomness of natural as well as artificial mechanisms. In this article, a bivariate model is constructed using two independent models developed by the nesting approach to study the effect of each component on reliability for better understanding. Further, the Bayesian analysis of system availability is studied by considering prior parametric variations in the failure time and repair time distributions. Basic statistical characteristics of the marginal distribution, such as the mean, median and quantile function, are discussed. We use an inverse Gamma prior and study its frequentist properties by conducting a Markov Chain Monte Carlo (MCMC) sampling scheme.
Keywords: reliability, system availability, Weibull, inverse Lomax, Monte Carlo Markov Chain, Bayesian
Procedia PDF Downloads 70
1167 Picture of the World by the Second Law of Thermodynamic
Authors: Igor V. Kuzminov
Abstract:
In terms of its content, the proposed article is a collection of articles with comments and additions. All of the articles, in one way or another, are connected with the Second Law of Thermodynamics. The content of the articles is given in concise form. The articles were published in different journals at different times. The main topics presented are gravity, the biography of the Earth, the physics of global warming-cooling cycles, and the multiverse. The articles are based on the laws of classical physics. Along the way, it should be noted that the Second Law of Thermodynamics can be formulated as the Law of Matter Cooling. As matter cools down, the processes of condensation, separation, and changes in the aggregate states of matter occur. In accordance with these changes, a picture of the world is formed. The main driving force of these processes is the inverse temperature dependence of the forces of gravity: as matter cools, the forces of gravity increase. Acting together, these phenomena form a picture of the world.
Keywords: gravitational forces, cooling of matter, inverse temperature dependence of gravitational forces, planetary model of the atom
Procedia PDF Downloads 242
1166 Neural Correlates of Arabic Digits Naming
Authors: Fernando Ojedo, Alejandro Alvarez, Pedro Macizo
Abstract:
In the present study, we explored the electrophysiological correlates of Arabic digit naming to examine the semantic processing of numbers. Participants named Arabic digits grouped by category or intermixed with exemplars of other semantic categories while the N400 event-related potential was examined. Around 350-450 ms after the presentation of the Arabic digits, brain waves were more positive in anterior regions and more negative in posterior regions when stimuli were grouped by category relative to the mixed condition. Contrary to what was found in other studies, the electrophysiological results suggested that the production of numerals involves semantic mediation.
Keywords: Arabic digit naming, event-related potentials, semantic processing, number production
Procedia PDF Downloads 581
1165 Semantic Data Schema Recognition
Authors: Aïcha Ben Salem, Faouzi Boufares, Sebastiao Correia
Abstract:
The subject covered in this paper aims at assisting users in their data quality approach. The goal is to better extract, mix, interpret and reuse data. It deals with the semantic schema recognition of a data source. This enables the extraction of data semantics from all the available information, including the data and the metadata. Firstly, it consists of categorizing the data by assigning it to a category and possibly a sub-category, and secondly, of establishing relations between columns and possibly discovering the semantics of the manipulated data source. The links detected between columns offer a better understanding of the source and of the alternatives for correcting the data. This approach allows the automatic detection of a large number of syntactic and semantic anomalies.
Keywords: schema recognition, semantic data profiling, meta-categorisation, semantic dependencies inter columns
Procedia PDF Downloads 416
1164 Simulation of a Three-Link, Six-Muscle Musculoskeletal Arm Activated by Hill Muscle Model
Authors: Nafiseh Ebrahimi, Amir Jafari
Abstract:
The study of humanoid characters is of great interest to researchers in the fields of robotics and biomechanics. One might want to know the forces and torques required to move a limb from an initial position to a desired destination position. Inverse dynamics is a helpful method to compute the forces and torques for an articulated body limb. It enables us to know the joint torques required to rotate a link between two positions. Our goal in this study was to control a human-like articulated manipulator for the specific task of path tracking. For this purpose, the human arm was modeled as a three-link planar manipulator activated by the Hill muscle model. Applying a proportional controller, the values of the forces and torques applied to the joints were calculated by inverse dynamics, and then the joint and muscle force trajectories were computed and presented. More precisely, the kinematics of the muscle-joint space was first formulated, by which we defined the relationship between the muscle lengths and the geometry of the links and joints. Second, the kinematics of the links was introduced to calculate the position of the end-effector in terms of geometry. Then, we considered the modeling of the Hill muscle dynamics and, after calculating the joint torques obtained from the inverse dynamics, applied them to the dynamics of the three-link manipulator to compute the joint states and to find and control the location of the manipulator's end-effector. The results show that the human arm model was successfully controlled to follow the designated elliptical path precisely.
Keywords: arm manipulator, Hill muscle model, six-muscle model, three-link model
Procedia PDF Downloads 141
1163 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss
Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin
Abstract:
Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and the obstruction of links often lead to a loss of link capacity in daily operation. This seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of links' capacity loss on the reliability and transport service quality of the URT network, passengers are divided into three categories in case of links' capacity loss. Passengers in category 1 are less affected by the loss of link capacity. Their travel is reliable since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected by the loss of link capacity. Their travel is not reliable since their travel quality is seriously reduced. However, passengers in category 2 can still travel on the URT. Passengers in category 3 cannot travel on the URT because the passenger flow on their travel paths exceeds the capacities. Their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the URT network is related to passengers' travel time, passengers' transfer times and whether seats are available to passengers. The generalized travel cost is a comprehensive reflection of travel time, transfer times and travel comfort. Therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links' capacity loss on the transport service quality of the URT network is measured with passengers' relative average generalized travel cost with and without links' capacity loss. The proportion of passengers affected by a link and the betweenness of links are used to determine the important links in the URT network. The stochastic user equilibrium distribution model based on the improved logit model is used to determine passengers' categories and calculate passengers' generalized travel cost in case of links' capacity loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the URT network are calculated from the solution result. Taking the Wuhan Metro as a case, the reliability and transport service quality of the Wuhan metro network are measured with the indicators and method proposed in this paper. The results show that using the proportion of passengers affected by a link can effectively identify important links, which have a great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and the passenger flow on important links is high; with the increase in the number of failed links and the proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases; when the number of failed links and the proportion of capacity loss increase to a certain level, the decline in transport service quality is weakened.
Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links
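An illustrative sketch of the generalized-cost and logit ideas above: each path's generalized cost combines in-vehicle time, transfer penalties, and a crowding (no-seat) penalty, and a simple multinomial logit splits passengers among paths. All coefficients and path data are assumed values, not the paper's calibrated improved-logit model.

```python
# Sketch: generalized travel cost per path and a logit split of passengers.
import numpy as np

# Candidate paths for one OD pair: (in-vehicle minutes, transfers, seat probability)
paths = np.array([[32.0, 1, 0.2],
                  [38.0, 0, 0.7],
                  [45.0, 2, 0.9]])

VOT, TRANSFER_PEN, STANDING_PEN, THETA = 1.0, 8.0, 10.0, 0.15   # assumed weights

gen_cost = (VOT * paths[:, 0]                       # travel time component
            + TRANSFER_PEN * paths[:, 1]            # transfer penalty
            + STANDING_PEN * (1.0 - paths[:, 2]))   # comfort penalty when no seat

# Multinomial logit: lower generalized cost -> higher choice probability.
utility = -THETA * gen_cost
prob = np.exp(utility - utility.max())
prob /= prob.sum()

print("generalized costs:", gen_cost.round(1))
print("path choice shares:", prob.round(3))
print("average generalized cost:", float(prob @ gen_cost))
```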
Procedia PDF Downloads 126
1162 Study of Storms on the Javits Center Green Roof
Authors: Alexander Cho, Harsho Sanyal, Joseph Cataldo
Abstract:
A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was carried out to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and the lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. Antecedent conditions for each rainstorm were also recorded and plotted against the volumetric water difference within the lysimeter. The first relation was the inverse parabolic relationship between the lysimeter weight and the net radiation and ET. The peaks and valleys of the lysimeter weight corresponded to valleys and peaks in the net radiation and ET, respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse U-shaped plots of the two variables coincided, indicating an inverse relationship between the two variables. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis. Ten out of 16 of the plots of lysimeter weight vs. outside temperature had R² values > 0.9. Antecedent conditions were also recorded for rainstorms, categorized by the amount of precipitation accumulating during the storm. Plotted against the change in the volumetric water weight difference within the lysimeter, a logarithmic regression was found with large R² values. The datasets were compared using the Mann-Whitney U-test at a significance level of 5% to see whether the datasets were statistically different; for all comparisons, the U test statistic did not support the hypothesis that the datasets were different.
Keywords: green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter
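A short sketch of the Mann-Whitney U comparison used above, at the stated 5% significance level; the two arrays are placeholder data, not the Javits Center measurements.

```python
# Sketch: two-sided Mann-Whitney U test between two datasets at alpha = 0.05.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
dataset_a = rng.normal(loc=50.0, scale=4.0, size=24)   # e.g. daily lysimeter weights
dataset_b = rng.normal(loc=52.0, scale=4.0, size=24)   # e.g. weights from another period

u_stat, p_value = mannwhitneyu(dataset_a, dataset_b, alternative="two-sided")
alpha = 0.05
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
if p_value < alpha:
    print("reject the null hypothesis: the datasets differ significantly")
else:
    print("fail to reject the null hypothesis: no significant difference detected")
```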
Procedia PDF Downloads 113
1161 Technological Development and Implementation of a Robotic Arm Motioned by Programmable Logic Controller
Authors: J. G. Batista, L. J. de Bessa Neto, M. A. F. B. Lima, J. R. Leite, J. I. de Andrade Nunes
Abstract:
The robot manipulator is a piece of equipment that stands out for two reasons: firstly, because of its characteristics of movement and reprogramming, resembling the human arm; and secondly, because it brings together several areas of knowledge in science and engineering. The present work shows the development of a prototype robotic manipulator driven by a Programmable Logic Controller (PLC) with two degrees of freedom, which allows the movement and displacement of small mechanical parts, tools, and objects in general through an electronic system. The aim is to study the direct and inverse kinematics of the robotic manipulator in order to describe the translation and rotation between two adjacent links of the robot through the Denavit-Hartenberg parameters. Currently, due to the many resources that microcomputer systems offer, robotics is going through a period of continuous growth that will allow, in a short time, the development of intelligent robots with the capacity to perform operations that require flexibility, speed and precision.
Keywords: Denavit-Hartenberg, direct and inverse kinematics, microcontrollers, robotic manipulator
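A small sketch of the standard Denavit-Hartenberg homogeneous transform and the forward (direct) kinematics of a planar two-degree-of-freedom arm; the link lengths and joint angles are assumed values, not the prototype's dimensions.

```python
# Sketch: DH transform between adjacent links and forward kinematics of a 2-DOF arm.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from the classic DH parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

L1, L2 = 0.30, 0.25                         # assumed link lengths [m]
theta1, theta2 = np.deg2rad([30.0, 45.0])   # joint angles

# Chain the link transforms to get the end-effector pose in the base frame.
T = dh_transform(theta1, 0.0, L1, 0.0) @ dh_transform(theta2, 0.0, L2, 0.0)
print("end-effector position (x, y):", T[0, 3], T[1, 3])
```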
Procedia PDF Downloads 344
1160 Surface Sediment Quality Assessment in a Coastal Lagoon (NW Adriatic Sea) Based on SEM-AVS Analysis
Authors: Roberta Guerra, Juan Pablo Pozo Hernandez
Abstract:
Surface sediments from the coastal lagoon of Pialassa Piomboni in the NW Adriatic Sea were collected and analysed, and the potential ecological risks in the area were assessed based on the acid-volatile sulphide (AVS) model. The AVS levels are between 0.03 and 8.8 µmol g-1, with an average of 3.1 µmol g-1. The simultaneously extracted metals (∑SEM), i.e. the molar sum of Cd, Cu, Ni, Pb, and Zn, range from 0.3 to 6.6 µmol g-1, with an average of 1.7 µmol g-1. Most of the high ∑SEM concentrations are located in the southern area of the lagoon. [SEM]Zn had the comparatively highest mean concentration (1.4 µmol g-1) and a maximum value of 6.1 µmol g-1. Concentrations of [SEM]Cd, [SEM]Cu, [SEM]Ni, and [SEM]Pb were consistently lower, with maximum values of 0.007 µmol g-1, 1.4 µmol g-1, 0.3 µmol g-1 and 0.2 µmol g-1, respectively. Compared to the other metals, [SEM]Zn was the dominant component in all samples and accounted for approximately 31-93% of the ∑SEM, whereas the contribution of Cd – the most toxic metal studied – to ∑SEM was no more than 1%. According to the USEPA evaluation method, the sediment samples can be divided into the following three categories: category 1, adverse biological effects on aquatic life may be expected when ([SEM]–[AVS])/fOC > 3000; category 2, adverse effects on aquatic life are uncertain when ([SEM]–[AVS])/fOC = 130 to 3,000; and category 3, no indication of adverse effects when ([SEM]–[AVS])/fOC < 130. Most of the surface sediments of the Pialassa Piomboni lagoon (>90%) showed no indication of adverse biological effects according to the criterion proposed by the USEPA, while adverse effects were uncertain at a few stations (~2%).
Keywords: sediment quality, heavy metals, coastal lagoon, bioavailability, SEM, AVS
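A worked sketch of the ([SEM]-[AVS])/fOC criterion and the three USEPA categories quoted above; the station values below are placeholders, not the lagoon data.

```python
# Sketch: classify sediment samples with the (SEM - AVS)/fOC criterion.
def usepa_category(sem_sum, avs, f_oc):
    """sem_sum and avs in umol/g dry weight, f_oc as a mass fraction of organic carbon."""
    ratio = (sem_sum - avs) / f_oc
    if ratio > 3000:
        return ratio, "category 1: adverse biological effects may be expected"
    elif ratio >= 130:
        return ratio, "category 2: adverse effects uncertain"
    else:
        return ratio, "category 3: no indication of adverse effects"

stations = {"st01": (1.7, 3.1, 0.02),    # (sum SEM, AVS, fOC) - placeholder values
            "st02": (6.6, 0.03, 0.01),
            "st03": (0.3, 8.8, 0.03)}

for name, (sem_sum, avs, f_oc) in stations.items():
    ratio, label = usepa_category(sem_sum, avs, f_oc)
    print(f"{name}: (SEM-AVS)/fOC = {ratio:.0f} -> {label}")
```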
Procedia PDF Downloads 403
1159 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach
Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat
Abstract:
Large quantities of mine tailings are produced every year as part of the extraction process of phosphates, gold, copper, and other materials. Mine tailings are high in water content and have very slow dewatering behavior. The efficient design of tailings dams and the economical disposal of these slurries require knowledge of the tailings' consolidation behavior. Large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as the theory considers the conservation of mass and momentum and treats the hydraulic conductivity as a function of void ratio. Classical laboratory techniques, such as the settling column test, the seepage consolidation test, etc., are expensive and time-consuming for the estimation of the hydraulic conductivity variation with void ratio. Inverse estimation of the constitutive relationships from measured settlement-versus-time curves is therefore explored. In this work, inverse analysis based on metaheuristic techniques is explored for predicting the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the mine tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further ensured through comparison with available predicted hydraulic conductivity parameters.
Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings
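A minimal particle swarm optimization sketch for the inverse step above: the swarm searches for parameters that minimize the misfit between a forward model and an observed dissipation curve. The exponential forward model is a stand-in for the finite-difference consolidation solver, and the PSO coefficients are typical assumed values.

```python
# Sketch: PSO-based inverse estimation against a synthetic dissipation curve.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.1, 50.0, 60)                 # time [days]

def forward(params, t):
    """Stand-in forward model: normalized excess pore pressure decay."""
    u0, rate = params
    return u0 * np.exp(-rate * t)

observed = forward((1.0, 0.12), t)             # synthetic "measured" curve

def misfit(params):
    return np.sum((forward(params, t) - observed) ** 2)

n_particles, n_iter = 30, 100
lb, ub = np.array([0.1, 0.01]), np.array([2.0, 1.0])   # parameter bounds
pos = rng.uniform(lb, ub, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("recovered parameters:", gbest)          # should approach (1.0, 0.12)
```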
Procedia PDF Downloads 130
1158 Objects Tracking in Catadioptric Images Using Spherical Snake
Authors: Khald Anisse, Amina Radgui, Mohammed Rziza
Abstract:
Tracking objects in video sequences is a very challenging task in many computer vision applications. However, there is no article that treats this topic in catadioptric vision. This paper attempts to describe a new approach to omnidirectional image processing based on the inverse stereographic projection onto the half-sphere. We used the spherical model proposed by Gayer et al. For object tracking, our work is based on the snake method, with optimization using the greedy algorithm and adaptation of its different operators. The algorithm respects the deformed geometries of omnidirectional images, such as the spherical neighborhood and the spherical gradient, and reformulates the optimization algorithm on the spherical domain. This tracking method, which we call the "spherical snake", allows us to follow the change in the shape and size of the object across its displacements in the spherical image.
Keywords: computer vision, spherical snake, omnidirectional image, object tracking, inverse stereographic projection
Procedia PDF Downloads 398
1157 Soil Parameters Identification around PMT Test by Inverse Analysis
Authors: I. Toumi, Y. Abed, A. Bouafia
Abstract:
This paper presents a methodology for identifying cohesive soil parameters that takes into account different constitutive equations. The procedure, applied to identify the parameters of the generalized Prager model associated with the Drucker-Prager failure criterion from a pressuremeter expansion curve, is based on an inverse analysis approach, which consists of minimizing the function representing the difference between the experimental curve and the simulated curve using a simplex algorithm. The model response along the pressuremeter path and its identification from experimental data lead to the determination of the friction angle, the cohesion and the Young's modulus. The effects of some parameters on the simulated curves and on the stress path around the pressuremeter probe are presented. Comparisons between the parameters determined with the proposed method and those obtained by other means are also presented.
Keywords: cohesive soils, cavity expansion, pressuremeter test, finite element method, optimization procedure, simplex algorithm
Procedia PDF Downloads 291
1156 Lane Detection Using Labeling Based RANSAC Algorithm
Authors: Yeongyu Choi, Ju H. Park, Ho-Youl Jung
Abstract:
In this paper, we propose a labeling-based RANSAC algorithm for lane detection. Advanced driver assistance systems (ADAS) have been widely researched to avoid unexpected accidents. Lane detection is a necessary system for assisting with lane keeping and lane departure prevention. The proposed vision-based lane detection method applies Canny edge detection, inverse perspective mapping (IPM), the K-means algorithm, mathematical morphology operations and 8-connected component labeling. Next, random samples are selected from each labeled region for RANSAC. This sampling method selects lane points with a high probability. Finally, lane parameters of straight-line or curve equations are estimated. Through simulations on video recorded in the daytime and at nighttime, we show that the proposed method performs better than the existing RANSAC algorithm in various environments.
Keywords: Canny edge detection, k-means algorithm, RANSAC, inverse perspective mapping
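A small sketch of the RANSAC line-fitting stage applied to points drawn from one labeled lane region; the synthetic points, iteration count, and inlier threshold are assumed values rather than the paper's settings.

```python
# Sketch: RANSAC straight-line fit on points sampled from a labeled lane region.
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "labeled region": points near the line y = 0.8x + 5, plus outliers.
x_in = rng.uniform(0, 100, 80)
inliers = np.column_stack([x_in, 0.8 * x_in + 5 + rng.normal(0, 1.0, 80)])
outliers = rng.uniform(0, 100, (20, 2))
points = np.vstack([inliers, outliers])

def ransac_line(points, n_iter=200, threshold=2.0):
    best_count, best_model = 0, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        if np.isclose(p1[0], p2[0]):
            continue                              # skip degenerate (vertical) pairs
        slope = (p2[1] - p1[1]) / (p2[0] - p1[0])
        intercept = p1[1] - slope * p1[0]
        residuals = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
        count = int(np.sum(residuals < threshold))
        if count > best_count:
            best_count, best_model = count, (slope, intercept)
    return best_model, best_count

model, n_in = ransac_line(points)
print(f"estimated lane line: y = {model[0]:.2f}x + {model[1]:.2f} ({n_in} inliers)")
```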
Procedia PDF Downloads 241