Search results for: automatic target recognition (ATR)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4895

4475 Designing State Feedback Multi-Target Controllers by the Use of Particle Swarm Optimization Algorithm

Authors: Seyedmahdi Mousavihashemi

Abstract:

Optimization is a recurring theme in research and has given rise to a wide range of algorithms. Many design problems reduce to target functions that must be optimized, and in multi-objective settings the interaction of these functions must be driven toward convergence. In this study, the particle swarm optimization (PSO) algorithm is used to design state feedback controllers. The results reveal that using the swarm algorithm to design state feedback improves the given performance norm, and that in the optimized design of multi-target state feedback control the closed-loop system retains its underlying structure. The results also show that PSO is a practical tool for the optimization of state feedback controllers.
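
As an illustration of the optimization loop described above, the following is a minimal PSO sketch that tunes a state feedback gain for an assumed two-state plant; the plant matrices, cost weighting, and PSO constants are illustrative assumptions, not the authors' settings.

```python
# Minimal PSO sketch for tuning a state-feedback gain K (illustrative plant and cost).
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # assumed toy plant
B = np.array([[0.0], [1.0]])

def cost(k):
    K = k.reshape(1, 2)
    eig = np.linalg.eigvals(A - B @ K)
    # multi-objective norm: penalise slow closed-loop poles and large gains
    return max(eig.real) + 0.01 * np.linalg.norm(K)

rng = np.random.default_rng(0)
n, dim, iters = 30, 2, 100
x = rng.uniform(-5, 5, (n, dim))           # particle positions (gain candidates)
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("best gain K:", gbest, "cost:", cost(gbest))
```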

Keywords: multi-objective, enhanced, feedback, optimization, algorithm, particle, design

Procedia PDF Downloads 471
4474 Computational Aided Approach for Strut and Tie Model for Non-Flexural Elements

Authors: Mihaja Razafimbelo, Guillaume Herve-Secourgeon, Fabrice Gatuingt, Marina Bottoni, Tulio Honorio-De-Faria

Abstract:

The challenge of this research is to provide engineers with a robust, semi-automatic method for calculating optimal reinforcement for massive structural elements. In the absence of such a digital post-processing tool, design office engineers make intensive use of plate modelling, for which automatic post-processing is available. Plate models of massive areas, however, produce conservative results, and the theoretical foundations of automatic reinforcement post-processing tools are those of reinforced concrete beam sections. As long as there is no suitable alternative to automatic post-processing of plates, optimal modelling and a significant improvement in the constructability of massive areas cannot be expected. The strut-and-tie method is commonly used in civil engineering, but its result remains highly dependent on the judgment of the design engineer. The tool being developed will support engineers in their choice of structure. The implemented method consists of defining a ground structure built on the basis of the principal stresses resulting from an elastic analysis of the structure, and then optimizing this structure according to the fully stressed design method. The first results yield a coherent initial network of struts and ties, consistent with the cases encountered in the literature. Future development of the tool will make it possible to adapt the obtained latticework to the cracking states resulting from the loads applied during the life of the structure, including cyclic and dynamic loads. In addition, to meet constructability constraints, the tool will ultimately produce a reinforcement layout with an orthogonal arrangement and regulated spacing.

Keywords: strut and tie, optimization, reinforcement, massive structure

Procedia PDF Downloads 122
4473 Evidence for Better Recall with Compatible Items in Episodic Memory

Authors: X. Laurent, M. A. Estevez, P. Mari-Beffa

Abstract:

A focus of recent research is to understand the role of our own response goals in the selection of information that will be encoded in episodic memory. For example, if we respond to a target in the presence of distractors, an important aspect under study is whether the distractor and the target share a common response (compatible) or not (incompatible). Some studies have found that compatible objects tend to be grouped together and stored in episodic memory, whereas others have found that targets in the presence of incompatible distractors are remembered better. Our current research seems to support both views. We used a Tulving-based definition of episodic memory to differentiate episodic from non-episodic memory traces. In this task, participants first had to classify a blue object as human or animal (target), which appeared in the presence of a green one (distractor) that could belong to the same category as the target (compatible), to the opposite category (incompatible) or to an irrelevant one (neutral). Later they had to report the identity (What), location (Where) and time (When) of both target objects (which had been previously responded to) and distractors (which had been ignored). Episodic memory was inferred when the three scene properties (identity, location and time) were correct. The measure of non-episodic memory consisted of those trials in which the identity was correctly remembered, but not the location or time. Our results showed that episodic memory for compatible stimuli was significantly superior to that for incompatible ones. In sharp contrast, non-episodic measures showed superior memory for targets in the presence of incompatible distractors. Our results demonstrate that response compatibility affects the encoding of episodic and non-episodic memory traces in different ways.

Keywords: episodic memory, action systems, compatible response, what-where-when task

Procedia PDF Downloads 148
4472 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model (DPM) algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by examining the statistical distribution of bounding boxes in labelled images. With this approach, the entire feature pyramid does not need to be computed: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a twofold increase in speed. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, yielding a solution that is faster and still more precise than all publicly available DPM implementations.
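
To make the horizon-based pruning step concrete, the following is a hedged sketch that rejects detection hypotheses whose pixel height is implausible for their position relative to the horizon; the camera height, focal length, pedestrian height, and tolerance are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: prune pedestrian hypotheses whose size is implausible
# for their vertical position relative to the horizon (all numbers assumed).
import numpy as np

horizon_y = 240                      # image row of the horizon (from camera pitch/height)
focal_px, cam_height = 700.0, 1.5    # assumed camera intrinsics/extrinsics

def expected_height_px(foot_y):
    """Pinhole ground-plane model: pedestrians whose feet are near the horizon
    are farther away and must appear smaller."""
    if foot_y <= horizon_y:
        return 0.0
    depth = focal_px * cam_height / (foot_y - horizon_y)
    return focal_px * 1.7 / depth    # ~1.7 m tall pedestrian

def prune(boxes, tol=0.4):
    """boxes: (x, y_top, w, h); keep those within tol of the expected height."""
    kept = []
    for x, y, w, h in boxes:
        exp_h = expected_height_px(y + h)
        if exp_h > 0 and abs(h - exp_h) / exp_h < tol:
            kept.append((x, y, w, h))
    return kept

hypotheses = [(100, 250, 40, 110), (300, 245, 30, 20), (50, 300, 60, 170)]
print(prune(hypotheses))
```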

Keywords: autonomous vehicles, deformable part model (DPM), pedestrian detection, real time

Procedia PDF Downloads 253
4471 Segmentation of Liver Using Random Forest Classifier

Authors: Gajendra Kumar Mourya, Dinesh Bhatia, Akash Handique, Sunita Warjri, Syed Achaab Amir

Abstract:

Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means for abdominal organ investigation and have been widely studied in recent years. Diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To study and diagnose the liver in depth, segmentation of the liver is performed to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a random forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparison with various other techniques, it was found that the random forest classifier provides better segmentation results with respect to accuracy and speed. We validated our results using various techniques, and the method shows above 89% accuracy in all cases.
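
As a hedged sketch of the classification step, the snippet below trains a scikit-learn random forest on per-voxel features; the features and labels are synthetic stand-ins, since the exact feature set used in the paper is not specified here.

```python
# Minimal sketch of voxel-wise liver segmentation with a random forest,
# using synthetic intensity/position features in place of real CT data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000
# assumed features per voxel: Hounsfield intensity, local mean, (x, y, z) position
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "liver" label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```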

Keywords: CT images, image validation, random forest, segmentation

Procedia PDF Downloads 287
4470 A Biophysical Model of CRISPR/Cas9 on- and off-Target Binding for Rational Design of Guide RNAs

Authors: Iman Farasat, Howard M. Salis

Abstract:

The CRISPR/Cas9 system has revolutionized genome engineering by enabling site-directed and high-throughput genome editing, genome insertion, and gene knockdowns in several species, including bacteria, yeast, flies, worms, and human cell lines. This technology has the potential to enable human gene therapy to treat genetic diseases and cancer at the molecular level; however, the current CRISPR/Cas9 system suffers from seemingly sporadic off-target genome mutagenesis that prevents its use in gene therapy. A comprehensive mechanistic model that explains how CRISPR/Cas9 functions would enable the rational design of the guide RNAs responsible for target site selection while minimizing unexpected genome mutagenesis. Here, we present the first quantitative model of the CRISPR/Cas9 genome mutagenesis system that predicts how guide RNA sequences (crRNAs) control target site selection and cleavage activity. We used statistical thermodynamics and the law of mass action to develop a five-step biophysical model of Cas9 cleavage and examined it in vivo and in vitro. To predict a crRNA's binding specificities and cleavage rates, we then compiled a nearest neighbor (NN) energy model that accounts for all possible base pairings and mismatches between the crRNA and the possible genomic DNA sites. These calculations correctly predicted crRNA specificity across 5518 sites. Our analysis reveals that Cas9 activity and specificity are anti-correlated, and that the trade-off between them is the determining factor in performing an RNA-mediated cleavage with minimal off-targets. To find an optimal solution, we first created a set of safe-design criteria for Cas9 target selection by systematic analysis of available high-throughput measurements. We then used our biophysical model to determine the optimal Cas9 expression levels and timing that maximize on-target cleavage and minimize off-target activity. We successfully applied this approach in bacterial and mammalian cell lines to reduce off-target activity to near the background mutagenesis level while maintaining a high on-target cleavage rate.
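
For intuition about how a free-energy model translates into site specificity, here is a minimal Boltzmann-weighting sketch; the per-position match/mismatch energies are invented placeholders, not the fitted nearest neighbor parameters from the paper.

```python
# Hedged sketch: score candidate genomic sites for a crRNA by summing
# per-position mismatch penalties (placeholder values) and converting
# the free energy to a relative Boltzmann weight.
import math

def binding_energy(crRNA, site, match_dG=-1.0, mismatch_dG=1.5):
    return sum(match_dG if a == b else mismatch_dG for a, b in zip(crRNA, site))

def relative_occupancy(crRNA, sites, RT=0.593):   # kcal/mol at 25 C
    weights = [math.exp(-binding_energy(crRNA, s) / RT) for s in sites]
    total = sum(weights)
    return [w / total for w in weights]

crRNA = "GACGTTACGGATCCTAGCAA"
sites = ["GACGTTACGGATCCTAGCAA",       # on-target
         "GACGTTACGGATCGTAGCAA",       # 1 mismatch
         "GTCGTTACGGATCGTAGCTA"]       # 3 mismatches
print(relative_occupancy(crRNA, sites))
```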

Keywords: biophysical model, CRISPR, Cas9, genome editing

Procedia PDF Downloads 381
4469 Humanitarian Emergency of the Refugee Condition for Central American Immigrants in Irregular Situation

Authors: María de los Ángeles Cerda González, Itzel Arriaga Hurtado, Pascacio José Martínez Pichardo

Abstract:

In Mexico, the recognition of refugee status is a fundamental right that the host State is obliged to respect, protect, and fulfill for foreigners, including immigrants in an irregular situation, who cannot return to their country of origin for humanitarian reasons. The recognition of refugee status as a fundamental right in the Mexican legal system arises under these situations: 1. The immigrant applies for refugee status, even without the evidence necessary to establish the humanitarian character of his or her departure from the country of origin. 2. The immigrant does not apply for recognition as a refugee because he or she does not know this right exists, even when fitting the profile to apply. 3. The immigrant who applies fulfills the requirements of the administrative procedure and obtains recognition as a refugee. Of these three situations, only the last is reflected in national refugee statistics; the first two reveal the inefficiency of the governmental system, which stems from a lack of sensitivity resulting from the absence of human rights education, and which leaves immigrants in an irregular situation legally vulnerable because they do not have access to the procuration and administration of justice. With the aim of determining the causes and consequences of the non-recognition of refugee status, this investigation was structured as a systemic analysis whose objective is to show the progress of research on the Central American humanitarian emergency, the actions of the Mexican State to protect, respect, and fulfill the fundamental right to refuge of immigrants in an irregular situation, and the social and legal vulnerabilities suffered by Central Americans in Mexico. Therefore, to derive the legal nature of the humanitarian emergency from human rights as a branch of public international law, a conceptual framework is structured using the inductive-deductive method. The problem statement moves from a legal framework to a theoretical scheme under the theory of social systems, based on the analysis of the lack of communication between the governmental and normative subsystems of the Mexican legal system regarding the process undertaken by Central American immigrants to achieve recognition of refugee status as a human right. Accordingly, it is determined that fulfilling the State's obligation to grant the right to recognition of refugee status would mark a new stage in Mexican law, because it would extend constitutional protections to everyone whose right to recognition as a refugee has been denied, and, as a consequence, a great advance in human rights would be achieved.

Keywords: central American immigrants in irregular situation, humanitarian emergency, human rights, refugee

Procedia PDF Downloads 267
4468 Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network

Authors: Harshit Mittal, Neeraj Garg

Abstract:

Hand symbol recognition is a pivotal component in the domain of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper presents an approach that integrates the Canny edge algorithm with a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiment, the data were collected manually by the authors from a webcam using Python scripts; to enlarge the dataset, augmentation was applied to the original images, which makes the model more robust. The dataset of about 6000 colour images, distributed equally across 5 classes (i.e., 1, 2, 3, 4, 5), is first converted to grayscale and then processed by the Canny edge algorithm with both thresholds set to 150. After the data are built, they are used to train the convolutional neural network model, giving an accuracy of 0.97834, precision of 0.97841, recall of 0.9783, and F1 score of 0.97832. For end users, a Python program provides a window for live hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny edge algorithm and convolutional neural network, this study contributes to the ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
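
Below is a hedged sketch of the pre-processing and classifier described above (grayscale conversion, Canny with both thresholds at 150, and a small five-class CNN); the image size, layer sizes, and training call are illustrative assumptions rather than the authors' exact architecture.

```python
# Hedged sketch: grayscale -> Canny (both thresholds 150) -> small CNN over 5 classes.
import numpy as np
import cv2
import tensorflow as tf

def preprocess(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 150, 150)
    return edges.astype("float32")[..., None] / 255.0   # (H, W, 1) in [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),                # assumed input size
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),     # classes 1..5
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# edges = np.stack([preprocess(img) for img in raw_images])   # hypothetical data
# model.fit(edges, labels, validation_split=0.1, epochs=10)
```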

Keywords: hand symbol recognition, computer vision, Canny edge algorithm, convolutional neural network

Procedia PDF Downloads 37
4467 Towards a Systematic Evaluation of Web Design

Authors: Ivayla Trifonova, Naoum Jamous, Holger Schrödl

Abstract:

A good web design is a prerequisite for a successful business nowadays, especially since the internet is the most common way for people to inform themselves. Web design includes the visual composition, the structure, and the user guidance of websites. The importance of each website leads to the question of whether there is a way to measure its usefulness. The aim of this paper is to suggest a methodology for the evaluation of web design. The desired outcome is an evaluation that is focused on a specific website and its target group.

Keywords: evaluation methodology, factor analysis, target group, web design

Procedia PDF Downloads 606
4466 Multimodal Database of Emotional Speech, Video and Gestures

Authors: Tomasz Sapiński, Dorota Kamińska, Adam Pelikant, Egils Avots, Cagri Ozcinar, Gholamreza Anbarjafari

Abstract:

People express emotions through different modalities. The integration of verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as human-machine interaction. In this article, the authors present a Polish emotional database composed of three modalities: facial expressions, body movement and gestures, and speech. The corpus contains recordings made under studio conditions, acted out by 16 professional actors (8 male and 8 female). The data are labeled with six basic emotion categories, according to Ekman's taxonomy. To check the quality of the performances, all recordings were evaluated by experts and volunteers. The database is available to the academic community and may be useful for studies on audio-visual emotion recognition.

Keywords: body movement, emotion recognition, emotional corpus, facial expressions, gestures, multimodal database, speech

Procedia PDF Downloads 328
4465 The Challenges and Opportunities Faced by Women in Geomatics Engineering: The Case of the SADC Region

Authors: Moreblessings Shoko

Abstract:

Polymersomes are materials which are considered artificial counterparts of natural vesicles. The nanotechnology of such smart nanovesicles is very useful for enhancing the efficiency of many therapeutic and diagnostic drugs. These compounds give the membrane higher stability, flexibility, and mechanical strength compared to natural liposomes. They can also be designed in detail, the permeability of the membrane can be controlled by different stimuli, and the surface can be functionalized with different biological molecules to facilitate monitoring and targeting. For this purpose, this study demonstrates the formation of multifunctional, pH-sensitive polymersomes and their functionalization with different reactive groups or biomolecules inside and outside the polymersome membrane, enabled by membrane crossing and docking/undocking processes for biomedical applications. Overall, they are highly versatile and thus present new opportunities for the design of targeted and selective recognition systems, for example in mimicking cell functions and in synthetic biology.

Keywords: women, geomatics, challenges, capacity building

Procedia PDF Downloads 544
4464 Cross Attention Fusion for Dual-Stream Speech Emotion Recognition

Authors: Shaode Yu, Jiajian Meng, Bing Zhu, Hang Yu, Qiurui Sun

Abstract:

Speech emotion recognition (SER) aims to recognize human subjective emotions through in-depth analysis of audio data. How to comprehensively extract emotional information from speech audio and how to effectively fuse the extracted features remain challenging. This paper presents a dual-stream SER framework that embraces both full training and transfer learning of different networks for thorough feature encoding. In addition, a plug-and-play cross-attention fusion (CAF) module is implemented for valid integration of the dual-stream encoder outputs. The effectiveness of the proposed CAF module is compared to three other fusion modules (feature summation, feature concatenation, and feature-wise linear modulation) on two databases (RAVDESS and IEMOCAP) using different dual-stream encoders (full training networks, DPCNN or TextRCNN; transfer learning networks, HuBERT or Wav2Vec2). Experimental results suggest that the CAF module can effectively reconcile conflicts between features from different encoders and outperforms the other three feature fusion modules on the SER task. In the future, the plug-and-play CAF module can be extended for multi-branch feature fusion, and the dual-stream SER framework can be widened to multi-stream data representation to improve recognition performance and generalization capacity.
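
The snippet below is a minimal cross-attention fusion sketch in PyTorch, in which each stream attends to the other and the pooled outputs are combined; it is not the authors' exact CAF module, and the embedding dimension, number of heads, and pooling are assumptions.

```python
# Minimal cross-attention fusion sketch (not the authors' exact CAF module):
# each stream attends to the other and the pooled results are added and normalised.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, seq_len, dim) from the two encoders
        a2b, _ = self.attn_ab(feat_a, feat_b, feat_b)   # stream A queries stream B
        b2a, _ = self.attn_ba(feat_b, feat_a, feat_a)   # stream B queries stream A
        fused = self.norm(a2b.mean(dim=1) + b2a.mean(dim=1))
        return fused                                     # (batch, dim)

caf = CrossAttentionFusion()
a = torch.randn(8, 50, 256)    # e.g. transfer-learning encoder output
b = torch.randn(8, 120, 256)   # e.g. fully trained encoder output
print(caf(a, b).shape)         # torch.Size([8, 256])
```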

Keywords: speech emotion recognition, cross-attention fusion, dual-stream, pre-trained

Procedia PDF Downloads 44
4463 A Study on Design for Parallel Test Based on Embedded System

Authors: Zheng Sun, Weiwei Cui, Xiaodong Ma, Hongxin Jin, Dongpao Hong, Jinsong Yang, Jingyi Sun

Abstract:

With the increasing performance and complexity of modern equipment, automatic test systems (ATS) have become widely used for condition monitoring and fault diagnosis. However, the conventional ATS mainly works in a serial mode and lacks the ability to test several pieces of equipment at the same time, which leads to low test efficiency and ATS redundancy. Especially when a large amount of equipment is under test, the conventional ATS cannot meet the requirement of efficient testing. To reduce the support resources and increase test efficiency, we propose a design method for parallel testing based on embedded systems in this paper. Firstly, we put forward the general framework of the parallel test system, which contains a central management system (CMS) and several distributed test subsystems (DTS). Then we give a detailed design of the system. For the hardware, we use an embedded architecture to design the DTS; for the software, we use a test program set to improve test adaptability. By deploying the parallel test system, the time to test five devices is now equal to the time previously needed to test one device. Compared with the conventional test system, the proposed test system reduces the size and improves testing efficiency, which is of great significance for putting equipment into operation swiftly. Finally, we take an industrial control system as an example to verify the effectiveness of the proposed method. The result shows that the method is reasonable and that the efficiency is improved by up to 500%.

Keywords: parallel test, embedded system, automatic test system (ATS), central management system (CMS), distributed test subsystems (DTS)

Procedia PDF Downloads 270
4462 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with ever greater complexity and accuracy. This precision and complexity mean that applications need more computational power in order to be executed quickly. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, since they allow more parallelism to be exploited within a node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as inter-core communication, data locality, memory sizes (cache and RAM), synchronization, and data dependencies in the model. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore architecture. All these improvements are made in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of our new optimized version, in which we have achieved a considerable improvement in execution time: the time has been reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.

Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool

Procedia PDF Downloads 346
4461 Auction Theory in Competitive Takeovers: Ideas for Regulators

Authors: Emanuele Peggi

Abstract:

The regulation of competitive takeover bids is one of the most problematic issues in any takeover legislation, since it concerns a particular type of market, that of corporate control, whose peculiar characteristic is that companies are "assets" unique of their kind, each with a relevant market characterized by the presence of different parties interested in acquiring control. Firstly, this work aims to analyze, from a comparative point of view, the regulation of takeover bids in competitive scenarios, characterized by the presence of multiple takeover bids for the same target company, and to contribute to the debate on the impact that the solutions adopted in the legal systems examined (Italy, the UK, and the USA) have had on the efficiency of the market for corporate control. Secondly, the different auction models identified in the economic literature and their possible applications to corporate acquisitions in competitive scenarios are examined, as well as the consequences that applying each of them has on the efficiency of the market for corporate control and on the interests of the target shareholders. The aim is to study the possibility of granting the management of the target company the power to design the auction in order to better protect the interests of shareholders through the adoption of ad hoc models suited to the specific context, and in particular on the basis of their assessment of the buyer's risk profile.

Keywords: takeovers, auction theory, shareholders, target company

Procedia PDF Downloads 154
4460 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots

Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar

Abstract:

Machine vision has been widely used in agriculture in recent years as a tool to promote the automation of processes and increase levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot between tree rows. The proposed algorithm was developed using the software MATLAB, and it uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization and the Hough transform, to find edge lines along tree rows in an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made it possible to construct a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated using several images of varying quality, and the results showed that the proposed method can successfully detect a path in different types of environments.
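
As a hedged re-sketch of the pipeline in Python/OpenCV (the original was implemented in MATLAB), the snippet below equalizes, thresholds, erodes, detects line segments with the probabilistic Hough transform, and takes the midline between left and right tree rows; the parameter values and the slope-based left/right split are assumptions.

```python
# Hedged OpenCV re-sketch of the described pipeline: equalise, threshold, erode,
# detect row edges with the Hough transform, then take the midline as the path.
import numpy as np
import cv2

def find_path(bgr):
    gray = cv2.equalizeHist(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.erode(mask, np.ones((5, 5), np.uint8), iterations=2)
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=20)
    if lines is None:
        return None
    # split line segments by slope sign: left vs right tree row (assumption)
    left = [l[0] for l in lines if (l[0][2] - l[0][0]) != 0
            and (l[0][3] - l[0][1]) / (l[0][2] - l[0][0]) < 0]
    right = [l[0] for l in lines if (l[0][2] - l[0][0]) != 0
             and (l[0][3] - l[0][1]) / (l[0][2] - l[0][0]) > 0]
    if not left or not right:
        return None
    mid_x = (np.mean([l[0] for l in left]) + np.mean([l[0] for l in right])) / 2
    return mid_x    # horizontal target for the robot's heading controller

frame = cv2.imread("orchard.jpg")          # placeholder image path
if frame is not None:
    print("path centre x:", find_path(frame))
```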

Keywords: agricultural mobile robot, image processing, path recognition, hough transform

Procedia PDF Downloads 123
4459 Distribution of Traffic Volume at Fuel Station during Peak Hour Period on Arterial Road

Authors: Surachai Ampawasuvan, Supornchai Utainarumol

Abstract:

Most fuel station customers who drive on a major arterial road want to use the stations to refuel their vehicles during the journey to their destinations. According to surveys of the traffic volume of vehicles using fuel stations, carried out with video cameras, automatic counting tools, and questionnaires, most users prefer to use fuel stations on holidays rather than on working days. They also prefer to use fuel stations in the morning rather than in the evening. When comparing the distribution patterns of traffic volume at fuel stations obtained from video cameras and from automatic counting tools, there is no significant difference. However, the peak hour rate obtained from the questionnaires, at 13 to 14 percent, is similar to the value obtained using the methods of the Institute of Transportation Engineers (ITE). It differs from the video camera and automatic traffic counting surveys, which give 6 to 7 percent, about half that value. Therefore, to forecast the trip generation of vehicles using fuel stations on major arterial roads, which are mostly characterized by through traffic, this study recommends using half of the peak hour rate, which makes the trip generation forecast more precise, more accurate, and more compatible with the surrounding environment.
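
To make the recommendation concrete, here is a small worked example using invented counts; it only illustrates the arithmetic of applying half the questionnaire peak-hour rate.

```python
# Illustrative arithmetic only (invented counts): applying "half the
# questionnaire peak-hour rate" to forecast fuel-station trip generation.
daily_trips = 2400                    # assumed vehicles entering the station per day
questionnaire_peak_rate = 0.13        # 13 % of daily trips in the peak hour
recommended_rate = questionnaire_peak_rate / 2    # ~6.5 %, close to the camera counts
print("forecast peak-hour trips:", daily_trips * recommended_rate)   # 156.0
```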

Keywords: peak rate, trips generation, fuel station, arterial road

Procedia PDF Downloads 376
4458 Hounsfield-Based Automatic Evaluation of Volumetric Breast Density on Radiotherapy CT-Scans

Authors: E. M. D. Akuoko, Eliana Vasquez Osorio, Marcel Van Herk, Marianne Aznar

Abstract:

Radiotherapy is an integral part of treatment for many patients with breast cancer. However, side effects can occur, e.g., fibrosis or erythema. If patients at higher risk of radiation-induced side effects could be identified before treatment, they could be given more individual information about the risks and benefits of radiotherapy. We hypothesize that breast density is correlated with the risk of side effects and present a novel method for its automatic evaluation based on radiotherapy planning CT scans. Methods: 799 supine CT scans of breast radiotherapy patients were available from the REQUITE dataset. The methodology was first established in a subset of 114 patients (cohort 1) before being applied to the whole dataset (cohort 2). All patients were scanned in the supine position, with arms up, and the treated (ipsilateral) breast was identified. Manual expert contours were available for 96 patients in cohort 1, for both the ipsilateral and contralateral breasts. Breast tissue was segmented using atlas-based automatic contouring software, ADMIRE® v3.4 (Elekta AB, Sweden). Once validated, the automatic segmentation method was applied to cohort 2. Breast density was then investigated by thresholding voxels within the contours, using the Otsu threshold and pixel intensity ranges based on Hounsfield units (-200 to -100 for fatty tissue, and -99 to +100 for fibro-glandular tissue). Volumetric breast density (VBD) was defined as the volume of fibro-glandular tissue / (volume of fibro-glandular tissue + volume of fatty tissue). A sensitivity analysis was performed to verify whether the calculated VBD was affected by the choice of breast contour. In addition, we investigated the correlation between VBD and patient age and breast size. VBD values were compared between ipsilateral and contralateral breast contours. Results: Estimated VBD values were 0.40 (range 0.17-0.91) in cohort 1, and 0.43 (0.096-0.99) in cohort 2. We observed ipsilateral breasts to be denser than contralateral breasts. Breast density was negatively associated with breast volume (Spearman: R=-0.5, p-value < 2.2e-16) and age (Spearman: R=-0.24, p-value = 4.6e-10). Conclusion: VBD estimates could be obtained automatically on a large CT dataset. Patients' age or breast volume may not be the only variables that explain breast density. Future work will focus on assessing the usefulness of VBD as a predictive variable for radiation-induced side effects.
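
The density calculation itself reduces to a threshold-and-count over the Hounsfield units inside the breast contour, as in the hedged sketch below; the synthetic voxel values are only for demonstration.

```python
# Hedged sketch of the density calculation: threshold breast-contour voxels by
# Hounsfield unit and compute VBD = fibroglandular / (fibroglandular + fat).
import numpy as np

def volumetric_breast_density(hu_values):
    """hu_values: 1-D array of HU for voxels inside the breast contour."""
    fat = np.sum((hu_values >= -200) & (hu_values <= -100))
    fibroglandular = np.sum((hu_values >= -99) & (hu_values <= 100))
    if fat + fibroglandular == 0:
        return np.nan
    return fibroglandular / (fat + fibroglandular)

rng = np.random.default_rng(0)
demo = np.concatenate([rng.normal(-150, 20, 6000),    # synthetic fatty voxels
                       rng.normal(20, 30, 4000)])     # synthetic fibro-glandular voxels
print("VBD ~", round(volumetric_breast_density(demo), 2))
```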

Keywords: breast cancer, automatic image segmentation, radiotherapy, big data, breast density, medical imaging

Procedia PDF Downloads 110
4457 'Value-Based Re-Framing' in Identity-Based Conflicts: A Skill for Mediators in Multi-Cultural Societies

Authors: Hami-Ziniman Revital, Ashwall Rachelly

Abstract:

The conflict resolution realm has developed tremendously during the last half-decade. Three main approaches should be mentioned: Alternative Dispute Resolution (ADR), offering processes such as arbitration or interest-based negotiation, was developed as an answer to obligation- and rights-based conflicts; the pragmatic mediation approach focuses on the gap between the interests and needs of disputants; and the transformative mediation approach focuses on relations and suits identity-based conflicts. In the current study, we examine the conflictual relations between religious and non-religious Jews in Israel and the impact of three transformative mechanisms (intergroup recognition, in-group empowerment, and value-based reframing) on the relations between the participants. The research was conducted during four facilitated joint mediation classes. A distinctive finding emerged. Using both the transformative mechanisms and the Contact Hypothesis criteria, we identified a transformation in participants' relations and a considerable change from anger, alienation, and suspicion to increased understanding, affection, and interpersonal concern towards out-group members. Intergroup recognition, in-group empowerment, and value-based reframing were the skills identified as the main enablers of the change in relations, and the research participants fostered mutual recognition of out-group values and identity-based issues. We conclude that this transformation was possible due to constant intergroup contact based on the Contact Hypothesis criteria. In addition, as interest-based mediation uses "reframing" as a skill to acknowledge both mutual and opposing needs of the disputants, we suggest the use of "value-based reframing" in intergroup identity-based conflicts, as a skill that contributes to the empowerment and recognition of both shared and differing out-group values. We propose applying these insights and skills to assist conflict resolution facilitators in various intergroup identity-based conflict resolution efforts and to establish further research and knowledge.

Keywords: empowerment, identity-based conflict, intergroup recognition, intergroup relations, mediation skills, multi-cultural society, reframing, value-based recognition

Procedia PDF Downloads 317
4456 Automatic Registration of Rail Profile Based on Local Maximum Curvature Entropy

Authors: Hao Wang, Shengchun Wang, Weidong Wang

Abstract:

To address the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for automatically extracting the circular arc on the inner or outer side of the rail waist and achieve high-precision registration of the rail profile. Firstly, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitted curve, an interval search algorithm based on the maximum curvature entropy of a dynamic window is proposed to realize automatic segmentation of the small circular arcs. Finally, we fit two circle centers as matching reference points based on the small circular arcs on both sides and align the measured profile to the standard design profile. The static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. The dynamic test also verified the repeatability of the method in the train-running environment, and the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.
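
The final alignment step relies on fitting circles to the extracted arc points; below is a hedged sketch of an algebraic (Kåsa) least-squares circle fit on synthetic arc data, which is one standard way to obtain such reference centres and not necessarily the authors' exact formulation.

```python
# Hedged sketch of a reference-centre estimate: algebraic (Kasa) least-squares
# circle fit to the extracted arc points.
import numpy as np

def fit_circle(x, y):
    """Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

rng = np.random.default_rng(0)
theta = np.linspace(0.2, 1.2, 50)                      # synthetic small arc
x = 10 + 8 * np.cos(theta) + rng.normal(0, 0.01, 50)
y = -5 + 8 * np.sin(theta) + rng.normal(0, 0.01, 50)
print(fit_circle(x, y))   # ~ (10, -5, 8)
```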

Keywords: curvature entropy, profile registration, rail wear, structured light, train-running

Procedia PDF Downloads 236
4455 Hybrid Artificial Bee Colony and Least Squares Method for Rule-Based Systems Learning

Authors: Ahcene Habbi, Yassine Boudouaoui

Abstract:

This paper deals with the problem of automatic rule generation for fuzzy system design. The proposed approach is based on hybrid artificial bee colony (ABC) optimization and the weighted least squares (LS) method, and aims to find the structure and parameters of fuzzy systems simultaneously. More precisely, two ABC-based fuzzy modeling strategies are presented and compared. The first strategy uses global optimization to learn fuzzy models; the second hybridizes ABC with weighted least squares estimation. The performances of the proposed ABC and ABC-LS fuzzy modeling strategies are evaluated on complex modeling problems and compared to other advanced modeling methods.
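
To illustrate the least-squares half of the hybrid scheme, the sketch below fixes two made-up Gaussian rule memberships (the part the ABC search would tune) and estimates first-order TSK-style consequent parameters by least squares; the memberships, data, and rule count are assumptions.

```python
# Hedged sketch: with rule firing strengths fixed, the linear consequent
# parameters of a two-rule TSK-style fuzzy model are estimated by least squares.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.05 * rng.normal(size=200)          # target function to model

# firing strengths of two fuzzy rules (centres/widths would come from ABC)
w1 = np.exp(-((x + 1.5) ** 2) / 2.0)
w2 = np.exp(-((x - 1.5) ** 2) / 2.0)
g1, g2 = w1 / (w1 + w2), w2 / (w1 + w2)              # normalised firing strengths

# design matrix for first-order consequents y_i = a_i * x + b_i
Phi = np.column_stack([g1 * x, g1, g2 * x, g2])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ theta
print("consequent parameters:", theta, "RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```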

Keywords: automatic design, learning, fuzzy rules, hybrid, swarm optimization

Procedia PDF Downloads 412
4454 Facial Recognition Technology in Institutions of Higher Learning: Exploring the Use in Kenya

Authors: Samuel Mwangi, Josephine K. Mule

Abstract:

Access control as a security technique regulates who or what can access resources. It is a fundamental concept in security that minimizes risk to the institutions that use it. Regulating access to institutions of higher learning is key to ensuring that only authorized personnel and students are allowed into the institutions. The use of biometrics has been criticized due to setup and maintenance costs, hygiene concerns, and trepidation regarding data privacy, among other apprehensions. Facial recognition is arguably a fast and accurate way of validating identity in order to guard protected areas. It ensures that only authorized individuals gain access to secure locations while requiring far less personal information and providing an additional layer of security beyond keys, fobs, or identity cards. This exploratory study sought to investigate the use of facial recognition in controlling access to institutions of higher learning in Kenya. The sample population was drawn from both private and public higher learning institutions, and the data are based on responses from staff and students. Questionnaires were used for data collection, and follow-up interviews were conducted to clarify the questionnaire responses. 80% of the sampled population indicated that there were many security breaches by unauthorized people, with some resulting in terror attacks. These security breaches were attributed to stolen identity cases, where staff or student identity cards were stolen and used by criminals to access the institutions. These unauthorized accesses have resulted in losses to the institutions, including reputational damage. The findings indicate that security breaches are a major problem in institutions of higher learning in Kenya. Consequently, access control would be beneficial if employed to curb security breaches. We suggest the use of facial recognition technology, given its uniqueness in identifying users and its non-repudiation capabilities.

Keywords: facial recognition, access control, technology, learning

Procedia PDF Downloads 103
4453 Face Recognition Using Eigen Faces Algorithm

Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale

Abstract:

Face recognition is a technique which can be applied to a wide variety of problems, such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models to identify faces that are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the basis for developing the recognition of human identity. Test images and training images are taken directly with the camera of the Android device. The test results showed that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it from a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against the training image dataset based on descriptive features. Further, this algorithm can be extended to recognize the facial expressions of a person. Recognition can be carried out under widely varying conditions, such as frontal view and scaled frontal view, including subjects with spectacles. The algorithm models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition, and can give feedback by displaying a window with the subject's information from the database and sending an e-mail notification to interested institutions through the Android application.
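
As a hedged, offline sketch of the eigenfaces idea (in NumPy/scikit-learn rather than on Android), the snippet below computes a PCA basis from training faces and matches test faces by nearest neighbor in eigenspace; it uses the publicly available Olivetti faces dataset as a stand-in for the authors' camera images.

```python
# Minimal eigenfaces sketch: PCA on training faces, then nearest-neighbour
# matching of test faces in the eigenface space.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

faces = fetch_olivetti_faces()                 # 400 images, 64x64, 40 subjects
X, y = faces.data, faces.target
train, test = np.arange(0, 400, 2), np.arange(1, 400, 2)

pca = PCA(n_components=50, whiten=True).fit(X[train])   # the "eigenfaces"
proj_train = pca.transform(X[train])
proj_test = pca.transform(X[test])

# classify each test face by its nearest training face in eigenspace
d = np.linalg.norm(proj_test[:, None, :] - proj_train[None, :, :], axis=2)
pred = y[train][d.argmin(axis=1)]
print("recognition accuracy:", (pred == y[test]).mean())
```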

Keywords: face detection, face recognition, eigen faces, algorithm

Procedia PDF Downloads 338
4452 Event Related Potentials in Terms of Visual and Auditory Stimuli

Authors: Seokbeen Lim, KyeongSeok Sim, DaKyeong Shin, Gilwon Yoon

Abstract:

Event-related potentials (ERPs) are useful tools for investigating cognitive reactions. In this study, the potential of ERP components detected after auditory and visual stimuli was examined. Subjects were asked to respond to stimuli of three categories: Target, Non-Target, and Standard. The ERP after each stimulus was measured. In the visual evoked potential (VEP) experiment, the subjects were asked to gaze at a center point on the monitor screen, where the stimuli were provided by the reversal pattern of a checkerboard. In the VEP experiments, we observed consistent responses. Each peak voltage could be measured when the ensemble average was applied. Visual stimuli had a smaller amplitude and a longer latency compared to auditory stimuli. The amplitude was highest for Target and smallest for Standard stimuli in both modalities.
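
For reference, the ensemble averaging mentioned above is sketched below on synthetic single-trial data containing a P300-like peak; the sampling rate, trial count, and noise level are assumptions.

```python
# Hedged sketch of ensemble averaging for ERP extraction: stimulus-locked epochs
# are averaged so random EEG activity cancels and the evoked peak and its
# latency emerge.
import numpy as np

fs = 250                                   # assumed sampling rate, Hz
t = np.arange(-0.1, 0.6, 1 / fs)           # epoch: -100 ms .. 600 ms
rng = np.random.default_rng(0)

# synthetic single trials: a P300-like bump buried in noise
erp = 5e-6 * np.exp(-((t - 0.30) ** 2) / (2 * 0.05 ** 2))
trials = erp + 20e-6 * rng.normal(size=(80, t.size))

average = trials.mean(axis=0)              # ensemble average over 80 trials
peak_latency_ms = 1000 * t[np.argmax(average)]
print(f"peak amplitude {average.max()*1e6:.1f} uV at {peak_latency_ms:.0f} ms")
```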

Keywords: auditory stimulus, EEG, event related potential, oddball task, visual stimulus

Procedia PDF Downloads 260
4451 Capacity Building on Small Automatic Tracking Antenna Development for Thailand Space Sustainability

Authors: Warinthorn Kiadtikornthaweeyot Evans, Nawattakorn Kaikaew

Abstract:

The communication system between the ground station and the satellite is very important to guarantee contact between both sides. Thailand, led by Geo-Informatics and Space Technology Development Agency (GISTDA), has received satellite images from other nation's satellites for a number of years. In 2008, Thailand Earth Observation Satellite (THEOS) was the first Earth observation satellite owned by Thailand. The mission was monitoring our country with affordable access to space-based Earth imagery. At this time, the control ground station was initially used to control the THEOS satellite by our Thai engineers. The Tele-commands were sent to the satellite according to requests from government and private sectors. Since then, GISTDA's engineers have gained their skill and experience to operate the satellite. Recently the desire to use satellite data is increasing rapidly due to space technology moving fast and giving us more benefits. It is essential to ensure that Thailand remains competitive in space technology. Thai Engineers have started to improve the performance of the control ground station in many different sections, also developing skills and knowledge in areas of satellite communication. Human resource skills are being enforced with development projects through capacity building. This paper focuses on the hands-on capacity building of GISTDA's engineers to develop a small automatic tracking antenna. The final achievement of the project is the first phase prototype of a small automatic tracking antenna to support the new technology of the satellites. There are two main subsystems that have been developed and tested; the tracking system and the monitoring and control software. The prototype first phase functions testing has been performed with Two Line Element (TLE) and the mission planning plan (MPP) file calculated from THEOS satellite by GISTDA.
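
For orientation, a tracking antenna's pointing angles can be derived from a TLE, as in the hedged sketch below using the skyfield library; this is not GISTDA's software, and both the TLE lines and the ground station coordinates are placeholders.

```python
# Hedged sketch (not GISTDA's software): computing antenna pointing angles
# (azimuth/elevation) from a TLE with the skyfield library. The TLE lines and
# station coordinates below are placeholders, not a real ephemeris.
from skyfield.api import load, EarthSatellite, wgs84

line1 = "1 33396U 08049A   24001.50000000  .00000100  00000-0  10000-3 0  9990"
line2 = "2 33396  98.7000  10.0000 0001000  90.0000 270.0000 14.20000000100000"
ts = load.timescale()
sat = EarthSatellite(line1, line2, "THEOS", ts)
station = wgs84.latlon(13.10, 100.92, elevation_m=50)   # assumed ground station

t = ts.now()
alt, az, distance = (sat - station).at(t).altaz()
print(f"elevation {alt.degrees:.1f} deg, azimuth {az.degrees:.1f} deg, "
      f"range {distance.km:.0f} km")
```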

Keywords: capacity building, small tracking antenna, automatic tracking system, project development procedure

Procedia PDF Downloads 52
4450 Burnout Recognition for Call Center Agents by Using Skin Color Detection with Hand Poses

Authors: El Sayed A. Sharara, A. Tsuji, K. Terada

Abstract:

Call centers have been expanding, and they have an increasing influence on activity in various markets. A call center's work is known as one of the most demanding and stressful jobs. In this paper, we propose a fatigue detection system to detect burnout of call center agents in cases of neck pain and upper back pain. Our proposed system is based on a computer vision technique that combines skin color detection with the Viola-Jones object detector. To recognize hand poses that signal stress, the YCbCr color space is used to detect the skin color region, including the face and hand poses around the areas related to neck ache and upper back pain. A Viola-Jones cascade of classifiers is used for face recognition and to extract the face from the skin color region. The detection of hand poses then allows the evaluation of neck pain and upper back pain using the skin color detection and face recognition methods. The system performance is evaluated using two groups of datasets created in the laboratory to simulate a call center environment. Our call center agent burnout detection system has been implemented using a web camera and processed in MATLAB. The experimental results show that our system achieved 96.3% for upper back pain detection and 94.2% for neck pain detection.
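
Below is a hedged OpenCV sketch of the two building blocks described above, YCbCr skin-colour masking and a Viola-Jones face detector; the threshold ranges, cascade file, and input image path are assumptions, not the authors' MATLAB settings.

```python
# Hedged sketch: YCbCr skin-colour masking plus Viola-Jones face detection;
# skin pixels outside the face box are treated as candidate hand poses.
import cv2
import numpy as np

frame = cv2.imread("agent.jpg")                       # placeholder input path
if frame is None:
    raise SystemExit("provide an input image")

ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)      # OpenCV order: Y, Cr, Cb
skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

hand_mask = skin_mask.copy()
for (x, y, w, h) in faces:
    hand_mask[y:y + h, x:x + w] = 0                   # remove the face region
print("faces:", len(faces),
      "candidate hand pixels:", int(np.count_nonzero(hand_mask)))
```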

Keywords: call center agents, fatigue, skin color detection, face recognition

Procedia PDF Downloads 271
4449 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and can also analyze and apply complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal has been received in real time and translated from the time domain to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the chosen features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. At the cutting edge of AI, EEG-based emotion identification can be employed in applications that can rapidly expand research and industrial adoption.
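
The feature-extraction and classification steps described above can be sketched as follows: FFT band powers per epoch feeding a KNN classifier; the sampling rate, band limits, and synthetic data are stand-in assumptions for the cEEGrid recordings and arousal/valence labels.

```python
# Hedged sketch: FFT band powers per EEG epoch, then K-nearest-neighbour
# classification of emotion labels. The data here are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

fs = 250
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
rng = np.random.default_rng(0)

def band_powers(epoch):
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)
    return [spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]

# synthetic 2-second epochs and binary (e.g. high/low arousal) labels
epochs = rng.normal(size=(200, 2 * fs))
labels = rng.integers(0, 2, 200)
X = np.array([band_powers(e) for e in epochs])

knn = KNeighborsClassifier(n_neighbors=5).fit(X[:150], labels[:150])
print("held-out accuracy (chance ~0.5 on random data):",
      knn.score(X[150:], labels[150:]))
```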

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 82
4448 A Preliminary Study for Design of Automatic Block Reallocation Algorithm with Genetic Algorithm Method in the Land Consolidation Projects

Authors: Tayfun Çay, Yasar İnceyol, Abdurrahman Özbeyaz

Abstract:

Land reallocation is one of the most important steps in land consolidation projects. Many different models have been proposed for land reallocation in the literature, such as fuzzy logic, block-priority-based land reallocation, and spatial decision support systems. A model comprising four parts is considered for automatic block reallocation with the genetic algorithm method in land consolidation projects. These stages are: preparing the data tables for the project land, determining the conditions and constraints of land reallocation, designing the command steps and logical flowchart of the reallocation algorithm, and finally writing the program code of the genetic algorithm. In this study, we designed the first three steps of this four-step model.
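
Since the fourth stage (the genetic algorithm code itself) is left for future work, the following is only a hedged, generic GA skeleton for assigning landholdings to blocks; the encoding, fitness, and operators are illustrative assumptions and not the authors' design.

```python
# Hedged, generic GA skeleton for block reallocation: each gene assigns one
# landholding to a block; fitness and costs here are invented placeholders.
import random

random.seed(0)
n_holdings, n_blocks = 12, 4
# assumed preference cost: placing holding i in block b costs cost[i][b]
cost = [[random.randint(1, 9) for _ in range(n_blocks)] for _ in range(n_holdings)]

def fitness(assign):                        # lower total cost is better
    return sum(cost[i][b] for i, b in enumerate(assign))

def crossover(a, b):
    cut = random.randrange(1, n_holdings)
    return a[:cut] + b[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(n_blocks) if random.random() < rate else g for g in assign]

pop = [[random.randrange(n_blocks) for _ in range(n_holdings)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                      # simple truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(30)]

best = min(pop, key=fitness)
print("best assignment:", best, "cost:", fitness(best))
```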

Keywords: land consolidation, landholding, land reallocation, optimization, genetic algorithm

Procedia PDF Downloads 402
4447 Strategy and Mechanism for Intercepting Unpredictable Moving Targets in the Blue-Tailed Damselfly (Ischnura elegans)

Authors: Ziv Kassner, Gal Ribak

Abstract:

Members of the order Odonata (dragonflies and damselflies) stand out for their maneuverability and superb flight control, which allow them to catch flying prey in the air. These outstanding aerial abilities were fine-tuned during millions of years of an evolutionary arms race between Odonata and their prey, providing an attractive research model for studying the relationship between sensory input and aerodynamic output in a flying insect. The ability to catch a maneuvering target in the air is interesting not just for insect behavioral ecology and neuroethology but also for designing small and efficient robotic air vehicles. While aerial prey interception by dragonflies (suborder: Anisoptera) has been studied before, little is known about how damselflies (suborder: Zygoptera) intercept prey. Here, high-speed cameras (filming at 1000 frames per second) were used to explore how damselflies catch unpredictable targets that move through the air. Blue-tailed damselflies, Ischnura elegans (family: Coenagrionidae), were introduced to a flight arena and filmed while landing on moving targets that were oscillated harmonically. The insects succeeded in capturing targets that were moved with an amplitude of 6 cm and frequencies of 0-2.5 Hz (fastest mean target speed of 0.3 m s⁻¹) and targets that were moved at 1 Hz (an average speed of 0.3 m s⁻¹) but with an amplitude of 15 cm. To land on stationary or slow targets, damselflies either flew directly to the target or flew sideways up to a point at which the target was fixed in the center of the field of view, followed by a direct flight path towards the target. As the target moved at higher frequencies, damselflies demonstrated an ability to track the targets while flying sideways and minimizing changes in their body direction about the yaw axis. This was likely an attempt to keep the targets at the center of the visual field while minimizing rotational optic flow of the surrounding visual panorama. Stabilizing rotational optic flow helps in estimating the velocity and distance of the target. These results illustrate how dynamic visual information is used by damselflies to guide them towards a maneuvering target, enabling the superb aerial hunting abilities of these insects. They also exemplify the plasticity of the damselfly flight apparatus, which enables flight in any direction, irrespective of the direction of the body.

Keywords: bio-mechanics, insect flight, target fixation, tracking and interception

Procedia PDF Downloads 130
4446 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring their variation mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example with the dielectrophoresis technique, microfluidic micropost-based chips, or electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, we present a new technique based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nanosupported cell technology using redox-labeled aptasensors was then pushed further and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. By combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 78