Search results for: image encryption algorithms
783 The Image of Victim and Criminal in Love Crimes on Social Media in Egypt: Facebook Discourse Analysis
Authors: Sherehan Hamdalla
Abstract:
Egypt has experienced a series of terrifying love crimes in the last few months. This ‘trend’ of love crimes started with a young man caught on video slaughtering his ex-girlfriend in the street in the city of El Mansoura. The crime shocked Egyptian citizens at all levels; unfortunately, no fewer than three similar crimes took place in other Egyptian cities with the same killing trigger. The easy access and wide reach of social media make it one of the most crucial online communication channels: users utilize social media platforms for sharing and exchanging ideas, news, and many other activities, and they can freely share posts that reflect their mindset or personal views on any issue. These posts go viral across social media accounts through reposts and shares, whether to support the content or to attack it. The repetition of certain posts can mobilize other supporters with the same point of view, especially when that crowd’s online participation confronts the consequences of a public opinion case. The death of that young woman was followed by similar crimes in other cities, such as El Sharkia and Port Said. These love crimes provoked a massive wave of contention among all social classes in Egypt. Strangely, some were supporting the criminal and defending his side for several reasons, which the study will uncover. Facebook, the most popular social media platform among Egyptians, reflects the debate between supporters of the victim and supporters of the criminal. Facebook pages were created specifically to disseminate certain viewpoints online, for example, asking for the maximum penalty to be given to criminals. These pages aimed to mobilize the maximum number of supporters and to affect the outcome of the trials.
Keywords: love crimes, victim, criminal, social media
Procedia PDF Downloads 76
782 Final Account Closing in Construction Project: The Use of Supply Chain Management to Reduce the Delays
Authors: Zarabizan Zakaria, Syuhaida Ismail, Aminah Md. Yusof
Abstract:
The project management process runs from the planning stage up to the stage of completion (handover of buildings, preparation of the final accounts, and the closing balance). This process is not easy to implement efficiently and effectively. Delay is a major problem for construction projects and has been blamed mainly on the inefficient traditional construction practices that continue to dominate the industry. This is due to several factors, such as the construction technology environment, sophisticated designs, and constantly changing customer demands that influence, either directly or indirectly, the practice of management. Among the identified influences are the physical environment, the social environment, the information environment, and the political and moral atmosphere. This paper therefore sets out to determine the problems and issues in final account closing in construction projects; it establishes the need to embrace Supply Chain Management (SCM) and then elucidates the need for, and strategies toward, the development of a delay reduction framework. At the same time, this paper provides effective measures to avoid, or at least reduce, delay to the optimum level. Allowing problems in the closing declaration to occur without proper monitoring and control can have a negative impact on the cost and time of delivery to the end user. It can also affect the reputation or image of the agency/department that manages the implementation of a contract and consequently reduce customers' trust in that agency/department. It is anticipated that the findings reported in this paper can address root delay contributors and support the application of SCM tools for their mitigation, for the better delivery of construction projects.
Keywords: final account closing, construction project, construction delay, supply chain management
Procedia PDF Downloads 367
781 A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System
Authors: Mulugeta K. Tefera, Xiaolong Yang, Jian Liu
Abstract:
Detection and tracking of moving objects are very important in many application contexts, such as the detection and recognition of people, visual surveillance, and the automatic generation of video effects. However, the task of detecting the real shape of an object in motion becomes tricky due to various challenges, such as dynamic scene changes, the presence of shadows, and illumination variations due to light switching. Once a moving object is detected, tracking is also a crucial step for applications in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive mixture-of-Gaussians analysis of the video sequences. The detected moving object is then tracked using region-based moving object tracking and inter-frame differencing mechanisms to address partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target, enhanced by morphological post-processing operations. Secondly, region-based tracking and inter-frame differencing improve the tracking speed of real-time moving objects across video frames. Finally, a plotting method is applied to describe the motion of the tracked object. The experiments were performed on image sequences acquired in both indoor and outdoor environments, using both a stationary camera and a web camera.
Keywords: background modeling, Gaussian mixture model, inter-frame difference, object detection and tracking, video surveillance
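A minimal, runnable sketch of the background-modeling step described above can be written with a single running Gaussian per pixel (a simplification of the adaptive mixture of Gaussians the paper uses; all array sizes and parameters here are illustrative assumptions):

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05):
    """Update a per-pixel running Gaussian background model."""
    diff = frame - mean
    mean = mean + alpha * diff
    var = (1 - alpha) * (var + alpha * diff ** 2)
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """Pixels further than k standard deviations from the mean are foreground."""
    return np.abs(frame - mean) > k * np.sqrt(var)

# toy example: a static background near intensity 100, one bright moving object
rng = np.random.default_rng(0)
mean = np.full((8, 8), 100.0)
var = np.full((8, 8), 4.0)
for _ in range(20):  # learn the background from noisy static frames
    frame = 100.0 + rng.normal(0, 2, (8, 8))
    mean, var = update_background(mean, var, frame)
frame = 100.0 + rng.normal(0, 2, (8, 8))
frame[3, 3] = 200.0  # a bright object enters at pixel (3, 3)
mask = foreground_mask(mean, var, frame)
print(bool(mask[3, 3]), int(mask.sum()))
```

The full mixture model keeps several Gaussians per pixel and so also handles multimodal backgrounds (e.g. swaying branches), which this single-Gaussian sketch cannot.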
Procedia PDF Downloads 477
780 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data
Authors: Gayathri Nagarajan, L. D. Dhinesh Babu
Abstract:
Health care is one of the prominent industries that generate voluminous data, creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications, as they relate directly to human life. Although many machine learning techniques and big data solutions are used for efficient processing and prediction on health care data, different techniques and frameworks have proved effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques alongside gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform, specifically for health care big data, and ii) discuss the results from experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions.
The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending heavily on the dataset characteristics.
Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform
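The boosting idea the framework relies on can be sketched in a few lines of plain Python: regression stumps are fitted additively to the residuals of the running prediction, and the misclassification error rate is the comparison metric, as above. This single-machine toy stands in for Spark's distributed implementation; the dataset and hyperparameters are illustrative assumptions.

```python
def fit_stump(xs, residuals):
    """Best single-split regression stump minimizing squared error."""
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=20, lr=0.5):
    """Additively fit stumps to the residuals of the running prediction."""
    preds = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]  # a single toy feature
ys = [0, 0, 0, 1, 1, 1]              # binary labels
model = boost(xs, ys)
errors = sum((model(x) > 0.5) != y for x, y in zip(xs, ys))
print("misclassification error rate:", errors / len(xs))
```

Spark MLlib distributes the stump search over partitions of the data; the additive residual-fitting loop is the same.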
Procedia PDF Downloads 241
779 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As the life of a cutting tool decreases, it damages the raw material it is processing. This study therefore aims to predict the remaining life of a cutting tool based on the damage it causes to the raw material. For this purpose, hole photos were collected from a hole-drilling machine for 8 months. The photos were labeled in 5 classes according to hole quality, transforming the problem into a classification problem. Using the prepared dataset, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the dataset. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models were compared, the model based on convolutional neural networks achieved a 74% accuracy rate. In preliminary studies, when the dataset was restricted to only the best and worst classes, the binary classification model achieved ~93% accuracy. The results of this study show that the remaining life of cutting tools can be predicted by deep learning methods from the damage to the raw material. The experiments demonstrate that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
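The evaluation setup can be illustrated with a short sketch: accuracy over the five hole-quality classes, and the reduction to the binary best-vs-worst task used in the preliminary study (the labels below are invented for illustration, not the study's data):

```python
def accuracy(y_true, y_pred):
    """Fraction of exactly matching class labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# hole-quality classes 1 (best) .. 5 (worst); labels here are illustrative
y_true = [1, 2, 3, 4, 5, 1, 3, 5, 2, 4]
y_pred = [1, 2, 3, 5, 5, 1, 2, 5, 2, 4]
print(accuracy(y_true, y_pred))  # 5-class accuracy

# collapse to the binary best-vs-worst task: keep only classes 1 and 5,
# and treat any predicted class >= 3 as "worst"
pairs = [(t, p) for t, p in zip(y_true, y_pred) if t in (1, 5)]
binary = [(t == 5, p >= 3) for t, p in pairs]
binary_acc = sum(a == b for a, b in binary) / len(binary)
print(binary_acc)
```

Collapsing to the extreme classes removes the ambiguous middle grades, which is consistent with the jump from ~74% to ~93% accuracy reported above.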
Procedia PDF Downloads 77
778 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas
Authors: Anand Malik
Abstract:
The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another factor, triggering extreme events with a manifold effect on the high-mountain glacial environment: rock falls, landslides, debris flows, flash floods, and snow avalanches. One such extreme event, a cloudburst along with the breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, triggering flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district of Uttarakhand state, India. The huge volume of fast-moving water created a catastrophe of the century, which resulted in the loss of a large number of human and animal lives and severe damage to pilgrimage, tourism, agriculture, and property. A comprehensive assessment of debris flow hazards thus requires GIS-based modeling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for assessing the spreading of the flow with maximum runout distances. An ASTER Digital Elevation Model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial dataset for the runout assessment, while Landsat data is used to analyze land use and land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in delineating debris flow areas; the results are compared with existing landslide/debris flow maps. ArcGIS software is used to prepare runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.
Keywords: debris flow, geospatial data, GIS based modeling, Flow-R
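As a highly simplified sketch of a runout computation on a DEM grid (Flow-R itself combines probabilistic multiple-flow-direction spreading with energy-based runout limits; the single steepest-descent path and the toy DEM below are only illustrative assumptions):

```python
import numpy as np

def runout_path(dem, start):
    """Follow the steepest descent from a release cell until a local minimum."""
    path = [start]
    r, c = start
    while True:
        best, step = dem[r, c], None
        for dr in (-1, 0, 1):            # examine the 8-neighborhood
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < dem.shape[0] and 0 <= nc < dem.shape[1]:
                    if dem[nr, nc] < best:
                        best, step = dem[nr, nc], (nr, nc)
        if step is None:                 # no lower neighbor: flow stops
            return path
        r, c = step
        path.append(step)

# toy 5x5 DEM sloping down toward the bottom-right corner
dem = np.array([[5 - 0.5 * (i + j) for j in range(5)] for i in range(5)])
path = runout_path(dem, (0, 0))
print(path[-1])  # the flow stops at the lowest cell
```

A probabilistic spreading algorithm would instead distribute flow proportions over all lower neighbors at each step, producing a susceptibility surface rather than a single path.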
Procedia PDF Downloads 273
777 Effect of Underwater Antiquities as a Hidden Competitive Advantage of Hotels on Their Financial Performance: An Exploratory Study
Authors: Iman Shawky, Mohamed Elsayed
Abstract:
Every hotel in the hospitality market tends to have its own merit and character in marketing its products in order to maintain both its brand identity and its image among guests. With the growth of global competition in the hospitality industry, the concept of competitive advantage is becoming increasingly important in hotel marketing, as it examines the reasons why some hotels outperform others in their strategic and marketing plans. Egypt is a land of revealed and submerged secrets, a result of the ongoing exploration of its ancient civilization. Although underwater antiquities remain ambiguous treasures, they have an auspicious future, particularly in Alexandria. The study aims at examining to what extent underwater antiquities represent a competitive advantage for four- and five-star hotels in Alexandria. To achieve this aim, an exploratory study was conducted by investigating and comparing the closest and most popular landmarks mentioned on hotels' official websites and on commonly used reservation websites. In addition, two questionnaire forms were designed: one for hotels' revenue and sales and marketing managers, and the other for their guests. The results indicate that both official hotel websites and the most commonly used reservation websites entirely ignore underwater antiquities as attractive landmarks surrounding Alexandria hotels. Furthermore, most managers expect that underwater antiquities can furnish a distinguishing competitive advantage for their hotels and can help exceed guests' expectations during their stay, as long as they are included on official hotel and reservation websites among the most famous surrounding landmarks.
Moreover, most managers foresee that high awareness of underwater antiquities can enhance guests' accommodation frequency and improve the financial performance of their hotels.
Keywords: competitive advantage, financial performance, hotels' websites, underwater antiquities
Procedia PDF Downloads 167
776 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply
Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele
Abstract:
In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms to calculate the optimal dispatch planning of decentralized power and heat generators and storage systems. This includes linear and mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. On the one hand, this includes combining n similar installation types into one aggregated unit. This aggregated unit is described by the same constraints and objective function terms as a single plant, which reduces the number of decision variables per time step and the complexity of the problem to be solved by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization, such that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined for the resulting calculation time as well as for optimality.
Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for investigating both procedures with regard to calculation duration and optimality.
Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant
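The aggregation idea can be sketched without a MILP solver: dispatch one aggregated unit of n-fold capacity, then disaggregate its output onto the identical individual units in a second step (the capacities and demand below are illustrative assumptions, not from the paper's models):

```python
def dispatch(demand, unit_max, n_units):
    """Dispatch an aggregated unit of capacity n_units * unit_max, then
    split the aggregate output back onto identical individual units."""
    aggregate_cap = n_units * unit_max
    aggregate_out = min(demand, aggregate_cap)   # one decision variable
    # disaggregation: fill units one by one up to their individual capacity
    outputs = []
    remaining = aggregate_out
    for _ in range(n_units):
        out = min(unit_max, remaining)
        outputs.append(out)
        remaining -= out
    return aggregate_out, outputs

agg, per_unit = dispatch(demand=7.5, unit_max=2.0, n_units=5)
print(agg, per_unit)
```

Because the n units are identical, any split that sums to the aggregate output and respects the individual capacity bound is feasible, which is why the aggregate problem loses no optimality here while using n times fewer decision variables per time step.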
Procedia PDF Downloads 178
775 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor
Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh
Abstract:
Brain tumors do not have a high incidence rate, but their high mortality rate and poor prognosis still make them a major concern. On clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has weak points that can cause misgrading; for example, interpretations can vary in the absence of a well-established definition. Furthermore, the heterogeneity of malignant tumors makes it challenging to extract meaningful tissue under surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and image moment features, were extracted from the tumors for classification. In the results, two of the five morphological features and three of the four image moment features achieved p-values of <0.001, and the remaining moment feature had a p-value of <0.05. The CAD system using the combination of all features achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM, with a sensitivity of 70.59% and a specificity of 89.04%. The proposed system can serve as a second reader in clinical examinations for radiologists.
Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging
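A minimal sketch of the morphological feature extraction, assuming a binary tumor mask (the mask and the 4-neighbor boundary definition are illustrative simplifications, not the study's exact pipeline):

```python
import numpy as np

def morphological_features(mask):
    """Region area, perimeter, compactness, and normalized radial length
    statistics from a binary region mask (illustrative definitions)."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    # boundary pixels: foreground with at least one 4-neighbor background pixel
    padded = np.pad(mask, 1)
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1]
                        & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(boundary.sum())
    compactness = perimeter ** 2 / area
    cy, cx = ys.mean(), xs.mean()                 # region centroid
    by, bx = np.nonzero(boundary)
    radial = np.hypot(by - cy, bx - cx)
    radial = radial / radial.max()                # normalized radial length
    return area, perimeter, compactness, radial.mean(), radial.std()

mask = np.zeros((12, 12), dtype=bool)
mask[3:9, 3:9] = True                             # a 6x6 square "tumor"
area, perim, comp, nrl_mean, nrl_std = morphological_features(mask)
print(area, perim, round(comp, 2))
```

The normalized radial length standard deviation is near zero for round regions and grows with boundary irregularity, which is what makes it discriminative between tumor grades.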
Procedia PDF Downloads 260
774 Drape Simulation by Commercial Software and Subjective Assessment of Virtual Drape
Authors: Evrim Buyukaslan, Simona Jevsnik, Fatma Kalaoglu
Abstract:
Simulating fabrics is more difficult than most other simulation tasks due to the complex mechanics of fabrics. Most virtual garment simulation software uses a mass-spring model and incorporates fabric mechanics into the simulation. The accuracy and fidelity of such software remain open questions. Drape is a subjective phenomenon, and the evaluation of drape has been studied since the 1950s; fabric and garment simulation, on the other hand, is relatively new. Understanding how subjects perceive drape when looking at fabric simulations is critical as virtual try-on becomes more important with growing online apparel sales. A projected future of online apparel retailing is that users will view their avatars and try garments on them in a virtual environment. It is well known that users will not be eager to accept this innovative technology unless it is realistic enough. Therefore, it is essential to understand what users see when fabrics are displayed in a virtual environment: are they able to distinguish the differences between various fabrics? The purpose of this study is to investigate human perception when looking at a virtual fabric and to determine the most visually noticeable drape parameter. To this end, five different fabrics were mechanically tested, and their drape simulations were generated by commercial garment simulation software (Optitex®). The simulation images were processed by image analysis software to calculate drape parameters, namely the drape coefficient, node severity, and peak angles. A questionnaire was developed to evaluate drape properties subjectively in a virtual environment. The drape simulation images were shown to 27 subjects, who were asked to rank the samples according to the drape property in question. The answers were compared to the calculated drape parameters.
The results show that subjects are quite sensitive to changes in the drape coefficient, while they are not very sensitive to changes in node dimensions and node distributions.
Keywords: drape simulation, drape evaluation, fabric mechanics, virtual fabric
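The drape coefficient mentioned above is conventionally computed from the vertical projection of the draped specimen. A minimal sketch, with disk and specimen radii chosen to match the common Cusick geometry and a projected area that is purely illustrative:

```python
import math

def drape_coefficient(projected_area, disk_radius, specimen_radius):
    """Drape coefficient: share of the annular ring still covered by the
    draped specimen's vertical projection (near 0 = limp, near 1 = rigid)."""
    disk_area = math.pi * disk_radius ** 2
    specimen_area = math.pi * specimen_radius ** 2
    return (projected_area - disk_area) / (specimen_area - disk_area)

# illustrative numbers: 9 cm support disk, 15 cm circular specimen,
# measured projected area of 450 cm^2 from the image analysis step
dc = drape_coefficient(projected_area=450.0, disk_radius=9.0, specimen_radius=15.0)
print(round(dc, 3))
```

In the study's pipeline, the projected area would come from thresholding the rendered top-view image of the simulated fabric rather than from a physical drape meter.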
Procedia PDF Downloads 339
773 Synthesis, Characterization of Organic and Inorganic Zn-Al Layered Double Hydroxides and Application for the Uptake of Methyl Orange from Aqueous Solution
Authors: Fatima Zahra Mahjoubi, Abderrahim Khalidi, Mohammed Abdennouri, Noureddine Barka
Abstract:
Zn-Al layered double hydroxides containing carbonate, nitrate, and dodecylsulfate as the interlamellar anions were prepared through a coprecipitation method. The resulting compounds were characterized using XRD, ICP, FTIR, TGA/DTA, TEM/EDX, and pHPZC analysis. The XRD patterns revealed that carbonate and nitrate could be intercalated into the interlayer structure with basal spacings of 22.74 and 26.56 Å, respectively. Bilayer intercalation of dodecylsulfate molecules was achieved in the Zn-Al LDH with a basal spacing of 37.86 Å. TEM observation indicated that the materials synthesized via coprecipitation present nanoscale LDH particles. The average particle size of Zn-AlCO3 is 150 to 200 nm. Irregular circular to hexagonal particles 30 to 40 nm in diameter were observed in the Zn-AlNO3 morphology. TEM images of Zn-AlDs display nanostructured sheet-like particles with a size distribution between 5 and 10 nm. The sorption characteristics and mechanisms of methyl orange (MO) dye on the organic LDH were investigated and subsequently compared with those on the inorganic Zn-Al layered double hydroxides. Adsorption experiments for MO were carried out as a function of solution pH, contact time, and initial dye concentration. The adsorption behavior of the inorganic LDHs was clearly influenced by the initial pH, whereas the adsorption capacity of the organic LDH was only slightly affected by it, the removal percentage of MO remaining practically constant across the pH range. As the MO concentration increased, the adsorption isotherms on the LDHs took an L-type shape. The adsorption behavior of Zn-AlDs is attributed to the dissolution of the dye in the hydrophobic interlayer region (i.e., adsolubilization). The results suggest that Zn-AlDs could be applied as a potential adsorbent for MO removal over a wide range of pH.
Keywords: adsorption, dodecylsulfate, kinetics, layered double hydroxides, methyl orange removal
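The L-type isotherm behavior noted above can be illustrated with the Langmuir model, which produces exactly this saturating shape (the q_max and K values below are assumed for illustration, not fitted to the study's data):

```python
def langmuir(c, q_max, k):
    """Langmuir (L-type) isotherm: adsorbed amount q (mg/g) as a function
    of equilibrium concentration c (mg/L)."""
    return q_max * k * c / (1 + k * c)

# illustrative parameters for MO uptake on an LDH (assumed values)
q_max, k = 200.0, 0.05            # mg/g, L/mg
concentrations = [10, 50, 100, 200, 400]
q = [langmuir(c, q_max, k) for c in concentrations]
print([round(v, 1) for v in q])   # → [66.7, 142.9, 166.7, 181.8, 190.5]
```

The uptake rises steeply at low concentration and flattens toward q_max, the concave-down curve characteristic of L-type adsorption.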
Procedia PDF Downloads 293
772 A Review of Deep Learning Methods in Computer-Aided Detection and Diagnosis Systems based on Whole Mammogram and Ultrasound Scan Classification
Authors: Ian Omung'a
Abstract:
Breast cancer remains one of the deadliest cancers for women worldwide, with the risk of developing tumors being as high as 50 percent in Sub-Saharan African countries like Kenya. With as many as 42 percent of these cases diagnosed late, when the cancer has metastasized or the prognosis has become terminal, Full-Field Digital (FFD) Mammography remains an effective screening technique that leads to early detection where, in most cases, successful interventions can be made to control or eliminate the tumors altogether. FFD mammograms have proven markedly more effective when used together with Computer-Aided Detection and Diagnosis (CADe) systems, which rely on algorithmic implementations of deep learning techniques in computer vision to carry out pattern recognition comparable to the level of a human radiologist and to decipher whether specific areas of interest in the mammogram scan image portray abnormalities, and whether these abnormalities are indicative of a benign or malignant tumor. In this paper, we review emergent deep learning techniques relevant to the development of state-of-the-art FFD mammogram CADe systems. These techniques span self-supervised learning for context-encoded occlusion, self-supervised learning for pre-processing and labeling automation, and the creation of a standardized large-scale mammography dataset as a benchmark for CADe system evaluation. Finally, comparisons are drawn between existing practices that pre-date these techniques and how the development of CADe systems that incorporate them will differ.
Keywords: breast cancer diagnosis, computer aided detection and diagnosis, deep learning, whole mammogram classification, ultrasound classification, computer vision
Procedia PDF Downloads 93
771 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code is now available for creating a large-scale machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we retain semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing only a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require a longer execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
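The minimal intermediate representation step can be sketched as comment stripping plus identifier normalization, so the model sees code structure and API calls rather than naming (the keyword and API keep-lists below are illustrative assumptions, not the paper's exact preprocessing):

```python
import re

C_KEYWORDS = {"int", "char", "if", "else", "for", "while", "return", "void",
              "sizeof", "struct", "unsigned", "long", "short", "double", "float"}
KEPT_APIS = {"strcpy", "malloc", "free", "printf"}  # illustrative keep-list

def minimal_representation(source):
    """Strip comments and map user identifiers to VAR1, VAR2, ... so the
    model learns structure rather than naming (library calls are kept)."""
    source = re.sub(r"/\*.*?\*/", " ", source, flags=re.S)   # block comments
    source = re.sub(r"//[^\n]*", " ", source)                # line comments
    mapping = {}
    def rename(match):
        name = match.group(0)
        if name in C_KEYWORDS or name in KEPT_APIS:
            return name
        return mapping.setdefault(name, f"VAR{len(mapping) + 1}")
    return re.sub(r"\b[A-Za-z_]\w*\b", rename, source).split()

code = "void copy(char *dst, char *src) { /* risky */ strcpy(dst, src); }"
print(minimal_representation(code))
```

Keeping dangerous API names such as `strcpy` visible while anonymizing user identifiers is precisely what lets a downstream embedding-plus-classifier pipeline flag buffer-overflow patterns independently of variable naming.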
Procedia PDF Downloads 89
770 Exploring the Spatial Relationship between Built Environment and Ride-hailing Demand: Applying Street-Level Images
Authors: Jingjue Bao, Ye Li, Yujie Qi
Abstract:
The explosive growth of ride-hailing has reshaped residents' travel behavior and plays a crucial role in urban mobility within the built environment. Contributing to research on the spatial variation of ride-hailing demand and its relationship to the built environment and socioeconomic factors, this study utilizes multi-source data from Haikou, China, to construct a Multi-scale Geographically Weighted Regression (MGWR) model that accounts for spatial-scale heterogeneity. The regression results showed that the MGWR model demonstrated superior interpretability and reliability, with an improvement of 3.4% in R2 and a reduction in AIC from 4853 to 4787, compared with the Geographically Weighted Regression (GWR) model. Furthermore, to precisely identify the surrounding environment of each sampling point, the DeepLabv3+ model is employed to segment street-level images. Features extracted from these images are incorporated as variables in the regression model, further enhancing its rationality and accuracy, with a 7.78% improvement in R2 compared with the MGWR model that considered only region-level variables. By integrating multi-scale geospatial data and utilizing advanced computer vision techniques, this study provides a comprehensive understanding of the spatial dynamics between ride-hailing demand and the urban built environment. The insights gained from this research are expected to contribute significantly to urban transportation planning and policy making, as well as to ride-hailing platforms, facilitating the development of more efficient and effective mobility solutions in modern cities.
Keywords: travel behavior, ride-hailing, spatial relationship, built environment, street-level image
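The AIC comparison reported above follows the usual least-squares form, AIC = n·ln(RSS/n) + 2k, where lower is better. A minimal sketch with invented numbers (not the paper's Haikou results):

```python
import math

def aic(n, rss, k):
    """Akaike Information Criterion for a least-squares model:
    n * ln(RSS / n) + 2k (lower is better)."""
    return n * math.log(rss / n) + 2 * k

def r_squared(rss, tss):
    """Coefficient of determination from residual and total sum of squares."""
    return 1 - rss / tss

# illustrative numbers: adding street-level variables lowers RSS
# but costs extra parameters
n, tss = 500, 4000.0
base = aic(n, rss=1200.0, k=8)          # region-level variables only
extended = aic(n, rss=960.0, k=14)      # plus street-level image features
print(round(base, 1), round(extended, 1))
print(round(r_squared(1200.0, tss), 3), round(r_squared(960.0, tss), 3))
```

AIC rewards the RSS reduction only if it outweighs the 2k penalty for the added parameters, which is why the paper reports both R2 and AIC when comparing GWR, MGWR, and the image-augmented MGWR.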
Procedia PDF Downloads 81
769 Comparative Ethnography and Urban Health: A Multisite Study on Obesogenic Cities
Authors: Carlos Rios Llamas
Abstract:
Urban health challenges, like the obesity epidemic, need to be studied through a dialogue between different disciplines and geographical conditions. Public health relies on quantitative analysis and local samples, but qualitative data and multisite analysis can help to better understand how obesity has become a health problem. In recent decades, obesity rates have increased in most countries, especially in the Western world. Concerned about the problem, the American Medical Association recently voted to classify obesity as a disease. A ‘war on obesity’ has since attracted scientists from different disciplines to explore various ways to control or even reverse the trends. The medical sciences have taken the lead with quantitative methodologies focused on individual behaviors. Only a few scientists have extended their studies to the environment where obesity is produced as a social risk, and fewer still have taken political and cultural aspects into consideration. This paper presents a multisite ethnography in the South Bronx, USA; La Courneuve, France; and Lomas del Sur, Mexico, where obesity rates are as salient as urban degradation. Comparative ethnography offers a possibility of unveiling the mechanisms that produce health risks from the urban tissue. The analysis considers three main categories: 1) the built environment and access to food and physical activity, 2) the biocultural construction of the healthy body, and 3) urban inequalities related to health and body size. The major findings from this comparative ethnography of obesogenic environments concern the anthropological values attached to food and body image, as well as the multidimensional oppression experienced by fat people who live in stigmatized urban zones. In the end, obesity, like many other diseases, is the result of political and cultural constructions structured in urbanization processes.
Keywords: comparative ethnography, urban health, obesogenic cities, biopolitics
Procedia PDF Downloads 246
768 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach
Authors: Nwachukwu Ifeanyi
Abstract:
Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavors to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
Keywords: computation, robotics, mathematics, simulation
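The multi-objective trade-off described above can be sketched with a toy weighted-sum optimisation. Everything below is illustrative: the surrogate functions, the design parameters (twist and chord), and the weights are assumptions standing in for the CFD and structural models, not the study's actual simulations.

```python
import numpy as np

def aero_efficiency(twist, chord):
    # hypothetical smooth surrogate: peaks at twist = 5 deg, chord = 1.2 m
    return np.exp(-((twist - 5.0) ** 2) / 10.0 - ((chord - 1.2) ** 2) / 0.5)

def fatigue_load(twist, chord):
    # hypothetical: load grows with chord area and twist magnitude
    return 0.1 * chord ** 2 + 0.02 * twist ** 2

def weighted_objective(params, w=0.7):
    twist, chord = params
    # maximise efficiency, minimise load -> minimise the negative weighted sum
    return -(w * aero_efficiency(twist, chord) - (1 - w) * fatigue_load(twist, chord))

# crude grid search over the toy design space
twists = np.linspace(0.0, 10.0, 101)
chords = np.linspace(0.5, 2.0, 151)
best = min(((t, c) for t in twists for c in chords), key=weighted_objective)
print(best)  # optimum sits slightly below the pure-aerodynamic peak
```

A weighted sum is the simplest scalarisation; a fuller treatment would trace the Pareto front by sweeping the weight w.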
Procedia PDF Downloads 59
767 Bismuth Telluride Topological Insulator: Physical Vapor Transport vs Molecular Beam Epitaxy
Authors: Omar Concepcion, Osvaldo De Melo, Arturo Escobosa
Abstract:
Topological insulator (TI) materials are insulating in the bulk and conducting at the surface. The unique electronic properties associated with these surface states make them strong candidates for exploring innovative quantum phenomena and for practical applications in quantum computing, spintronics, and nanodevices. Many materials, including Bi₂Te₃, have been proposed as TIs and, in some cases, this has been demonstrated experimentally by angle-resolved photoemission spectroscopy (ARPES), scanning tunneling spectroscopy (STS), and/or magnetotransport measurements. A clean surface is necessary in order to make any of these measurements. Several techniques have been used to produce films and different kinds of nanostructures. Growth and characterization in situ is usually the best option, although cleaving the films can be an alternative to obtain a suitable surface. In the present work, we report a comparison of Bi₂Te₃ grown by physical vapor transport (PVT) and molecular beam epitaxy (MBE). The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS), and ARPES. The Bi₂Te₃ samples grown by PVT were cleaved in ultra-high vacuum in order to obtain a surface free of contaminants. In both cases, the XRD shows a c-axis orientation, and the pole diagrams proved the epitaxial relationship between film and substrate. The ARPES images show the linear dispersion characteristic of the surface states of TI materials. The samples grown by PVT, a relatively simple and cost-effective technique, show the same high quality and TI properties as those grown by MBE.
Keywords: bismuth telluride, molecular beam epitaxy, physical vapor transport, topological insulator
Procedia PDF Downloads 192
766 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, the mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in the mathematical sciences and their application in harnessing the power of data analytics. It highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract then addresses the role of the mathematical sciences in tackling real-world challenges across different sectors, including finance, healthcare, engineering, the social sciences, and beyond, showing how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, it stresses the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals, recognizing the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. Finally, it underlines the significance of ongoing research in the mathematical sciences and its impact on data analytics, emphasizing the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation.
In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.
Keywords: mathematical sciences, data analytics, advances, unveiling
Procedia PDF Downloads 93
765 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodologies: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and participants were then asked to walk on a force plate scanner. After data preprocessing, because of differences in walking time and foot size, we normalized the samples by time and foot size. Selected force plate variables served as input to a deep neural network (DNN), and the probability of each foot disorder was measured. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (a yes/no classification). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach is more accurate, enabling applications in foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, peak plantar pressure distribution yielded more accurate predictions than the center of pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
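The time and foot-size normalisation step described above can be sketched as follows. This is a minimal illustration with made-up numbers, assuming linear resampling of the stance-phase pressure trace to a common length and a simple division by foot length; the clinic's actual preprocessing may differ.

```python
import numpy as np

def normalize_trace(pressure, foot_length, n_points=101):
    # resample a plantar-pressure trace of any duration onto a common time base,
    # then scale by foot size so samples from different feet become comparable
    t_orig = np.linspace(0.0, 1.0, len(pressure))   # original stance phase
    t_new = np.linspace(0.0, 1.0, n_points)         # common time base
    resampled = np.interp(t_new, t_orig, pressure)  # time normalisation
    return resampled / foot_length                   # size normalisation

short_walk = normalize_trace(np.array([0.0, 2.0, 4.0, 2.0, 0.0]), foot_length=0.25)
long_walk = normalize_trace(np.linspace(0.0, 4.0, 50), foot_length=0.25)
print(short_walk.shape, long_walk.shape)  # both traces now have equal length
```

Fixed-length vectors like these are what can then be fed to a DNN or SVM classifier.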
Procedia PDF Downloads 358
764 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model
Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis
Abstract:
Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested on all known transit signals, with 100% confirmation. In addition, it has been successfully applied to the Kepler Object of Interest (KOI) data and identified a few new Earth-sized, ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry
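The folding idea at the heart of the method can be sketched on the CPU with a toy light curve. This is a hedged illustration of phase folding only (assumed period, depth, and binning), not the paper's GPU implementation or its CNN vetting stage.

```python
import numpy as np

def fold_and_bin(time, flux, period, n_bins=50):
    # fold the light curve at a trial period and average the flux per phase bin;
    # a real transit at that period piles its dips into the same bin
    phase = (time % period) / period                     # phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return np.array([flux[bins == b].mean() for b in range(n_bins)])

t = np.arange(0.0, 100.0, 0.01)         # toy observation times in days
flux = np.ones_like(t)                   # flat star
flux[(t % 2.5) < 0.05] -= 0.01           # 1%-deep transit with a 2.5-day period
folded = fold_and_bin(t, flux, period=2.5)
print(folded.argmin())                   # the transit dip collects in a single phase bin
```

A search would repeat this over a dense grid of trial periods, which is exactly the part the paper accelerates on the GPU.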
Procedia PDF Downloads 225
763 A Review of Accuracy Optical Surface Imaging Systems for Setup Verification During Breast Radiotherapy Treatment
Authors: Auwal Abubakar, Ahmed Ahidjo, Shazril Imran Shaukat, Noor Khairiah A. Karim, Gokula Kumar Appalanaido, Hafiz Mohd Zin
Abstract:
Background: The use of optical surface imaging systems (OSISs) is becoming increasingly popular in radiotherapy practice, especially during breast cancer treatment. This study reviews the accuracy of the available commercial OSISs for breast radiotherapy. Method: A literature search was conducted to identify the commercially available OSISs from different manufacturers that are integrated into radiotherapy practice for setup verification during breast radiotherapy. Studies that evaluated the accuracy of these OSISs during breast radiotherapy using cone beam computed tomography (CBCT) as a reference were retrieved and analyzed. The physics and working principles of the systems from each manufacturer are discussed, together with their respective strengths and limitations. Results: A total of five (5) different commercially available OSISs from four (4) manufacturers were identified, each with a different working principle. Six (6) studies were found that evaluated the accuracy of the systems during breast radiotherapy with CBCT as a gold standard. The studies revealed that the accuracy of the systems, in terms of mean difference, ranges from 0.1 to 2.1 mm. The correlation between CBCT and OSIS ranges between 0.4 and 0.9. The limits of agreement obtained using Bland-Altman analysis in the studies were also within an acceptable range. Conclusion: The OSISs have an acceptable level of accuracy and can be used safely during breast radiotherapy. The systems are non-invasive and free of ionizing radiation, and they provide real-time imaging of the target surface at no extra concomitant imaging dose. However, they should only be used to complement, rather than replace, X-ray-based image guidance techniques such as CBCT.
Keywords: optical surface imaging system, cone beam computed tomography (CBCT), surface-guided radiotherapy, breast radiotherapy
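The Bland-Altman analysis used in the reviewed studies reduces to a bias (mean difference) and 95% limits of agreement. A minimal sketch with hypothetical OSIS and CBCT setup-shift values (not data from the reviewed studies):

```python
import numpy as np

def bland_altman(a, b):
    # bias and 95% limits of agreement between two measurement methods
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                     # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

osis_mm = np.array([1.2, 0.8, 1.5, 2.0, 0.9, 1.1])  # hypothetical shifts (mm)
cbct_mm = np.array([1.0, 1.0, 1.3, 1.8, 1.1, 1.0])
bias, loa_low, loa_high = bland_altman(osis_mm, cbct_mm)
print(round(bias, 3))  # mean OSIS-CBCT difference in mm
```

Agreement is judged by whether the limits of agreement fall within a clinically acceptable tolerance, not by the bias alone.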
Procedia PDF Downloads 66
762 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions
Authors: Oscar E. Cariceo, Claudia V. Casal
Abstract:
Machine learning offers a set of techniques to support social work interventions and can inform practitioners' decisions by predicting new behaviors from data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques comprise a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns present within any data set. In other words, the goal of machine learning is teaching computers through 'examples': using training data to test specific hypotheses and predict what a certain outcome would be in a given scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem to be tackled. First, supervised learning involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which predict quantitative variables using a continuous function, and classification problems, which predict discrete qualitative variables. Second, unsupervised learning seeks structure in data whose outputs are not labeled. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction and behavior of gender, age, grade, type of school, and self-esteem sentiments.
The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help promote programs to prevent cyberbullying at schools and improve evidence-based practice.
Keywords: cyberbullying, evidence-based practice, machine learning, social work research
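The modelling step can be sketched from scratch. The code below trains a logistic regression by gradient descent on synthetic stand-in data with assumed coefficients; it is not the survey data, and the paper's model used more predictors (age, grade, type of school).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
gender = rng.integers(0, 2, n).astype(float)   # coded 0/1
self_esteem = rng.normal(0.0, 1.0, n)          # standardised score
X = np.column_stack([np.ones(n), gender, self_esteem])  # with intercept

true_w = np.array([-0.5, 0.5, -1.5])           # assumed effects for the synthetic data
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)

w = np.zeros(3)
for _ in range(2000):                           # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / n                # gradient of the logistic loss

pred = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (pred == (y == 1.0)).mean()
print(round(accuracy, 3))
```

With real survey data, accuracy near 59.8% (as reported) would suggest weak predictors rather than a failure of the fitting procedure itself.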
Procedia PDF Downloads 168
761 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data
Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao
Abstract:
Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, there is a scale effect in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid regions of Chinese Inner Mongolia is taken as the study area. Based on the combination of the PROSPECT and SAIL models, a multidimensional look-up table (LUT) is generated for the LAI estimation of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model considering both inter-class and intra-class differences is constructed for the scale effect analysis of LAI inversion over inhomogeneous surfaces. The results indicate that (1) the LUT method based on classification and parameter sensitivity analysis is useful for the LAI retrieval of corn, potato, sunflower, and melon on the typical farmland, with a correlation coefficient R² of 0.82 and a root mean square error (RMSE) of 0.43 m²/m². (2) The scale effect of LAI becomes obvious with decreasing image resolution, and the maximum scale bias is more than 45%. (3) The inter-class scale effect is higher than the intra-class one and can be corrected efficiently by the scale transfer model established on the basis of Taylor expansion and computational geometry. After correction, the maximum scale bias can be reduced to 1.2%.
Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up table (LUT), remote sensing
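The LUT retrieval step can be sketched as a nearest-spectrum search. The 'spectra' below come from a toy saturating function of LAI, an assumption standing in for PROSPECT+SAIL output, but the mechanics (minimum RMSE between an observed and a simulated spectrum) are the same.

```python
import numpy as np

def lut_retrieve(observed, lut_spectra, lut_lai):
    # pick the LUT entry whose simulated spectrum best matches the observation
    rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
    return lut_lai[rmse.argmin()]

lai_grid = np.linspace(0.0, 6.0, 61)             # candidate LAI values
n_bands = 50                                      # toy hyperspectral band count
# toy radiative-transfer stand-in: reflectance saturates with increasing LAI
lut = 0.5 * (1.0 - np.exp(-0.4 * lai_grid[:, None])) * np.ones((1, n_bands))
obs = 0.5 * (1.0 - np.exp(-0.4 * 2.0)) * np.ones(n_bands)  # pixel with LAI = 2
print(lut_retrieve(obs, lut, lai_grid))
```

A real LUT would vary leaf biochemistry and canopy parameters per entry, and the retrieval would typically average the best few matches rather than take a single minimum.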
Procedia PDF Downloads 440
760 Experimental Study of Unconfined and Confined Isothermal Swirling Jets
Authors: Rohit Sharma, Fabio Cozzi
Abstract:
A 3C-2D PIV technique was applied to investigate the swirling flow generated by an axial-plus-tangential type swirl generator. This work focuses on the near-exit region of an isothermal swirling jet to characterize the effect of swirl on the flow field and to identify the large coherent structures, both in unconfined and confined conditions, for a geometrical swirl number Sg = 4.6. The effects of the Reynolds number on the flow structure were also studied. The experimental results show significant effects of the confinement on the mean velocity fields and their fluctuations. The size of the recirculation zone was significantly enlarged upon confinement compared to the free swirling jet. Increasing the Reynolds number further enhanced the recirculation zone. The frequency characteristics were measured with a capacitive microphone, which indicates the presence of a periodic oscillation related to the existence of a precessing vortex core (PVC). Proper orthogonal decomposition (POD) of the jet velocity field was carried out, enabling the identification of coherent structures. The time coefficients of the first two most energetic POD modes were used to reconstruct the phase-averaged velocity field of the oscillatory motion in the swirling flow. The instantaneous minima of negative swirl strength values calculated from the instantaneous velocity field revealed the presence of two helical structures located in the inner and outer shear layers; these structures fade out at an axial location of approximately z/D = 1.5 for the unconfined case and z/D = 1.2 for the confined case. By phase averaging the instantaneous swirl strength maps, the 3D helical vortex structure was reconstructed.
Keywords: acoustic probes, 3C-2D particle image velocimetry (PIV), precessing vortex core (PVC), recirculation zone (RZ)
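The snapshot POD step can be sketched with synthetic data. The code below builds an assumed travelling-wave 'flow' (a stand-in for the PIV velocity snapshots, since oscillations like a PVC appear as a pair of energetic modes) and extracts modes by SVD; the first two time coefficients of such a decomposition are what drive the phase averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots = 200, 64
x = np.linspace(0.0, 1.0, n_points)                        # spatial coordinate
t = np.linspace(0.0, 2.0 * np.pi, n_snapshots, endpoint=False)

# assumed travelling-wave fluctuation plus weak noise, standing in for PIV data
snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(3 * t))
             + np.outer(np.cos(2 * np.pi * x), np.sin(3 * t))
             + 0.01 * rng.normal(size=(n_points, n_snapshots)))

fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove the mean field
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)       # columns of U = POD modes
energy = s ** 2 / (s ** 2).sum()                           # modal energy fractions
time_coeffs = (np.diag(s) @ Vt)[:2]                        # first two time coefficients
print(round(energy[0] + energy[1], 3))                     # the mode pair dominates
```

Binning snapshots by the phase angle of the first two time coefficients then yields the phase-averaged fields described in the abstract.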
Procedia PDF Downloads 233
759 Fabrication and Characterization Analysis of La-Sr-Co-Fe-O Perovskite Hollow Fiber Catalyst for Oxygen Removal in Landfill Gas
Authors: Seong Woon Lee, Soo Min Lim, Sung Sik Jeong, Jung Hoon Park
Abstract:
The atmospheric concentration of greenhouse gases (GHG) is increasing continuously as a result of the combustion of fossil fuels and industrial development. In response to this trend, much research has been conducted on the reduction of GHG. Landfill gas (LFG) is one of the largest sources of GHG emissions, contains methane (CH₄) as a major constituent, and can be considered a renewable energy source as well. In order to use LFG by connecting it to the city pipe network, a process for removing impurities is required. In particular, oxygen must be removed because it can cause corrosion of pipes and engines. In this study, methane oxidation was used to eliminate oxygen from LFG, and a perovskite-type ceramic catalyst of La-Sr-Co-Fe-O composition was selected as the catalyst. Hollow fiber catalysts (HFCs) have attracted attention as a new alternative concept because they have a high specific surface area and mechanical strength compared to other types of catalysts. The HFC was prepared by a phase-inversion/sintering technique using commercial La-Sr-Co-Fe-O powder. In order to measure the catalyst's activity, simulated LFG was used as the feed gas, and the complete oxidation reaction of methane was confirmed. The pore structure of the HFC was confirmed by SEM images, and its single-phase perovskite structure was analyzed by XRD. In addition, TPR analysis was performed to verify the oxygen adsorption mechanism of the HFC. Acknowledgement: The project is supported by the 'Global Top Environment R&D Program' in the 'R&D Center for reduction of Non-CO₂ Greenhouse gases' (Development and demonstration of oxygen removal technology of landfill gas) funded by the Korea Ministry of Environment (ME).
Keywords: complete oxidation, greenhouse gas, hollow fiber catalyst, landfill gas, oxygen removal, perovskite catalyst
Procedia PDF Downloads 117
758 The Impact of the Core Competencies in Business Management to the Existence and Progress of Traditional Foods Business with the Case of Study: Gudeg Sagan Yogyakarta
Authors: Lutfi AuliaRahman, Hari Rizki Ananda
Abstract:
Traditional food is food typical of a certain region, with a unique taste of its own, typically consumed by the society of that area. One example is gudeg, a regional traditional specialty of Yogyakarta and Central Java made of young jackfruit cooked in coconut milk, eaten with rice and served with thick coconut milk (areh), chicken, eggs, tofu, and sambal goreng krecek. Lately, however, the image of traditional food has declined: today's society, especially young people, tends to prefer modern types of food, such as fast food and other popular dishes. Moreover, traditional food is usually preferred only by local consumers and is in little demand among consumers from other areas with different tastes. Thus, traditional food producers are increasingly marginalized, and their consumers are on the wane. This study aimed to evaluate the management used by producers of traditional food through a case study of Gudeg Sagan, located in the city of Yogyakarta, and its ability to create core competencies, which include the competencies of cost, flexibility, quality, and time, and value-based competence. Beyond surviving and continuing to exist in its external environment, Gudeg Sagan can increase its number of consumers, reach a broader segment of teenagers and adults, and attract consumers from other areas. Finally, this paper identifies the positive impact of creating these core competencies on the existence and progress of the traditional food business, based on the case study of Gudeg Sagan.
Keywords: Gudeg Sagan, traditional food, core competencies, existence
Procedia PDF Downloads 252
757 Human-Automation Interaction in Law: Mapping Legal Decisions and Judgments, Cognitive Processes, and Automation Levels
Authors: Dovile Petkeviciute-Barysiene
Abstract:
Legal technologies not only create new ways of accessing and providing legal services but also transform the role of legal practitioners. Both lawyers and users of legal services expect automated solutions to outperform people in objectivity and impartiality. Although the fairness of automated decisions is crucial, research on how various characteristics of automated processes relate to perceived fairness has only begun. One of the major obstacles to this research is the lack of a comprehensive understanding of which legal actions are automated or could be meaningfully automated, and to what extent. Oftentimes, neither the public nor legal practitioners can envision technological input due to the lack of illustrative examples. The aim of this study is to map the decision-making stages and automation levels which are and/or could be achieved in legal actions related to pre-trial and trial processes. Major legal decisions and judgments were identified during consultations with legal practitioners. The dual-process model of information processing is used to describe the cognitive processes taking place while making legal decisions and judgments during pre-trial and trial actions. Some of the existing legal technologies are incorporated into the analysis as well. Several published automation level taxonomies are considered because none of them fits well into the legal context, as they were all created for avionics, teleoperation, unmanned aerial vehicles, etc. From the information processing perspective, the analysis of legal decisions and judgments exposes the situations that are most sensitive to cognitive bias and, among other things, helps to identify the areas that would benefit most from automation. Automation level analysis, in turn, provides a systematic approach to interaction and cooperation between humans and algorithms.
Moreover, an integrated map of legal decisions and judgments, information processing characteristics, and automation levels together provides groundwork for research on the perceived fairness and acceptance of legal technology. Acknowledgment: This project has received funding from the European Social Fund (project No. 09.3.3-LMT-K-712-19-0116) under a grant agreement with the Research Council of Lithuania (LMTLT).
Keywords: automation levels, information processing, legal judgment and decision making, legal technology
Procedia PDF Downloads 142
756 Problem Solving in Chilean Higher Education: Figurations Prior in Interpretations of Cartesian Graphs
Authors: Verónica Díaz
Abstract:
A Cartesian graph, as a mathematical object, becomes a tool for configuring change. It is best comprehended through everyday problem-solving associated with its representation. Despite this, the current educational framework favors generic graphs without consideration of their argumentation: students are required to find the mathematical function without associating it with the development of graphical language. This research describes the use students make of configurations made prior to Cartesian graphs with regard to an everyday problem related to a time and distance variation phenomenon. The theoretical framework describes the conditions of the study of functions and their modeling. This is a qualitative, descriptive study involving six undergraduate case studies that were carried out during the first term of 2016 at the University of Los Lagos. The research problem concerned the graphic modeling of a real person's movement phenomenon, and two levels of analysis were identified. The first level aims to identify local and global graph interpretations; the second level describes the degree of iconicity and referentiality of an image. According to the results, students were able to draw figures prior to the Cartesian graph, highlighting the need for students to represent the context and the movement that causes the phenomenon of change. From this, they managed Cartesian graphs representing changes in position and therefore achieved an overall view of the graph. However, the local view only indicates specific events in the problem situation, using graphic and verbal expressions to represent movement. This view does not enable us to identify what happens on the graph when the movement characteristics, such as the person's walking speed along possible paths, change.
Keywords: cartesian graphs, higher education, movement modeling, problem solving
Procedia PDF Downloads 218
755 Human and Environment Coevolution: The Chalcolithic Tell Settlements from Muntenia and Dobrogea, South-Eastern Romania
Authors: Constantin Haita
Abstract:
The chalcolithic tell settlements of south-eastern Romania, attributed to the Gumelnița culture, are characterised by a well-defined surface, often marked by delimitation structures; a succession of many layers of construction, destruction, and rebuilding; and a well-structured occupation area comprising built spaces, passage areas, and waste zones. Settlements of tell type are located in the river valleys (on erosion remnants, alluvial bars, or small islands) and at the border of the valleys (on edges or prominences of Pleistocene terraces, on lower Holocene terraces, and on the banks of lakes). This study integrates data on the geographical position, the morphological background, and the general stratigraphy of these important settlements. The correlation of the spatial distribution with the geomorphological units of each area of evolution creates an image of the natural landscape in which they occurred. The sedimentological research carried out in the floodplain area of Balta Ialomiței showed important changes in the alluvial activity of the Danube after the Chalcolithic period (ca. 6500-6000 BP), into the Iron Age and the Middle Ages. The micromorphological analysis, consisting of the thin-section interpretation, at the microscopic scale, of sediments and soils in an undisturbed state, allowed the interpretation of the identified sedimentary facies in terms of mode of formation and anthropic activities. The studied cases reflect some distinct situations, correlating either with the geomorphological background or with the vertical development, the presence of delimiting structures, and the internal organization. The characteristics of the tells from this area bring significant information about the human habitation of the Lower Danube in prehistory.
Keywords: chalcolithic, micromorphology, Romania, sedimentology, tell settlements
Procedia PDF Downloads 149
754 Bean in Turkey: Characterization, Inter Gene Pool Hybridization Events, Breeding, Utilizations
Authors: Faheem Shahzad Baloch, Muhammad Azhar Nadeem, Muhammad Amjad Nawaz, Ephrem Habyarimana, Gonul Comertpay, Tolga Karakoy, Rustu Hatipoglu, Mehmet Zahit Yeken, Vahdettin Ciftci
Abstract:
Turkey is considered a bridge between Europe, Asia, and Africa and possibly played an important role in the distribution of many crops, including the common bean. Hundreds of common bean landraces can be found in Turkey, particularly in farmers' fields, and they consistently contribute to overall production. To investigate the existing genetic diversity and the hybridization events between the Andean and Mesoamerican gene pools in the Turkish common bean, 188 common bean accessions (182 landraces and 6 modern cultivars as controls) were collected from 19 different Turkish geographic regions. These accessions were characterized using phenotypic data (growth habit and seed weight), geographic provenance, and 12557 high-quality whole-genome DArTseq markers; 3767 novel DArTseq loci were also identified. The clustering algorithms resolved the Turkish common bean landrace germplasm into the two recognized gene pools, the Mesoamerican and the Andean. Hybridization events were observed in both gene pools (14.36% of the accessions), mostly in the Mesoamerican (7.97% of the accessions), and their level was low relative to previous European studies. This lower level of hybridization shows that Turkish common bean germplasm exists in its original form, as compared to Europe. The Mesoamerican gene pool reflected a higher level of diversity, while the Andean gene pool was predominant (56.91% of the accessions) but genetically less diverse and phenotypically purer, reflecting farmers' greater preference for the Andean gene pool. We also found some genetically distinct landraces and, overall, a meaningful level of genetic variability which can be used by the scientific community in breeding efforts to develop superior common bean strains.
Keywords: bean germplasm, DArTseq markers, genotyping by sequencing, Turkey, whole genome diversity
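The clustering step can be sketched with a plain 2-means on a simulated 0/1 marker matrix. The pool-specific allele frequencies below are assumptions, not the 12557-marker DArTseq data; the point is only that accessions separate into two groups analogous to the Mesoamerican and Andean gene pools.

```python
import numpy as np

rng = np.random.default_rng(7)
n_markers = 300
pool_a = (rng.random((40, n_markers)) < 0.2).astype(float)  # simulated pool A accessions
pool_b = (rng.random((50, n_markers)) < 0.8).astype(float)  # simulated pool B accessions
X = np.vstack([pool_a, pool_b])

# crude but well-separated initialisation, then Lloyd's algorithm
centroids = np.vstack([X.min(axis=0), X.max(axis=0)])
for _ in range(20):
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=2)   # squared distances
    labels = d.argmin(axis=1)                                   # nearest centroid
    centroids = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(labels[:40].mean(), labels[40:].mean())  # the two simulated pools separate cleanly
```

An admixed accession (intermediate allele frequencies) would sit between the two centroids with a small distance gap, which is one simple way such hybridization events can be flagged.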
Procedia PDF Downloads 243