Search results for: patch metrics
405 Node Pair Selection Scheme in Relay-Aided Communication Based on Stable Marriage Problem
Authors: Tetsuki Taniguchi, Yoshio Karasawa
Abstract:
This paper describes a node pair selection scheme for a relay-aided, multiple-source multiple-destination communication system based on the stable marriage problem. A general case is assumed in which all source, relay, and destination nodes are equipped with multiple antennas and carry out multistream transmission. Based on several metrics derived from inter-node channel conditions, preference orders are determined for all source-relay and relay-destination relations, and the node pairs are then determined using the Gale-Shapley algorithm. Computer simulations show that the effectiveness of node pair selection is larger in multihop communication. Some additional aspects that differ from the relay-less case are also investigated.
Keywords: relay, multiple input multiple output (MIMO), multiuser, amplify and forward, stable marriage problem, Gale-Shapley algorithm
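Illustrative aside (not part of the paper): the Gale-Shapley pairing step described above can be sketched in a few lines of Python. The preference lists below are arbitrary placeholders standing in for the channel-condition metrics; only the matching logic itself is standard.

```python
# Minimal Gale-Shapley sketch for pairing source nodes with relay nodes.
# Preference lists would come from the inter-node channel metrics described
# in the paper; the rankings below are placeholders for illustration only.

def gale_shapley(proposer_prefs, acceptor_prefs):
    """Return a stable matching {proposer: acceptor}."""
    free = list(proposer_prefs)                 # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                                  # acceptor -> proposer
    rank = {a: {p: r for r, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    while free:
        p = free.pop(0)
        a = proposer_prefs[p][next_choice[p]]   # best acceptor not yet proposed to
        next_choice[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:    # acceptor prefers the new proposer
            free.append(match[a])
            match[a] = p
        else:
            free.append(p)
    return {p: a for a, p in match.items()}

# Example: sources S1..S3 and relays R1..R3 ranked by a channel-quality metric.
sources = {"S1": ["R1", "R2", "R3"], "S2": ["R1", "R3", "R2"], "S3": ["R2", "R1", "R3"]}
relays  = {"R1": ["S2", "S1", "S3"], "R2": ["S1", "S3", "S2"], "R3": ["S3", "S1", "S2"]}
print(gale_shapley(sources, relays))  # {'S1': 'R2', 'S2': 'R1', 'S3': 'R3'}
```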
Procedia PDF Downloads 397
404 An Energy Efficient Clustering Approach for Underwater Wireless Sensor Networks
Authors: Mohammad Reza Taherkhani
Abstract:
Wireless sensor networks that are used to monitor a specific environment are formed from a large number of sensor nodes. The role of these sensors is to sense particular parameters from the surrounding environment and to communicate them. In these networks, the most important challenge is the management of energy usage. Clustering is one of the methods broadly used to face this challenge. In this paper, a distributed clustering protocol based on learning automata is proposed for underwater wireless sensor networks. The proposed algorithm, called LA-Clustering, forms clusters at the same energy level, based on the energy level of nodes and the connection radius, regardless of the size and structure of the sensor network. The proposed approach is simulated and compared with several other protocols using metrics such as network lifetime, number of alive nodes, and amount of transmitted data. The simulation results demonstrate the efficiency of the proposed approach.
Keywords: underwater sensor networks, clustering, learning automata, energy consumption
Procedia PDF Downloads 361
403 A New Concept for Deriving the Expected Value of Fuzzy Random Variables
Authors: Liang-Hsuan Chen, Chia-Jung Chang
Abstract:
Fuzzy random variables (FRVs) have been introduced as an imprecise concept of numeric values for characterizing imprecise knowledge. Descriptive parameters can be used to describe the primary features of a set of fuzzy random observations. In fuzzy environments, expected values are usually represented as fuzzy-valued, interval-valued, or numeric-valued descriptive parameters using various metrics. Instead of the area-metric concept usually adopted in the relevant studies, this study proposes a numeric expected value based on a distance-metric concept that reflects the two characteristics (fuzziness and randomness) of FRVs. Compared with the existing measures, the results show that the proposed numeric expected value coincides with those obtained using the other metric when only triangular membership functions are used; however, the proposed approach has the advantages of intuitiveness and computational efficiency when the membership functions are not triangular. An example with three datasets is provided to verify the proposed approach.
Keywords: fuzzy random variables, distance measure, expected value, descriptive parameters
Procedia PDF Downloads 343
402 Modelling of Atomic Force Microscopic Nano Robot's Friction Force on Rough Surfaces
Authors: M. Kharazmi, M. Zakeri, M. Packirisamy, J. Faraji
Abstract:
Micro/nanorobotics, or the manipulation of nanoparticles by the Atomic Force Microscope (AFM), is one of the most important solutions for controlling the movement of atoms, particles, and micro/nano-metric components and assembling them into micro/nanometer-scale tools. Accurate modelling of manipulation requires identification of forces and mechanical knowledge at the nanoscale, which differ from the macro world. Because of the importance of adhesion forces and surface interactions at the nanoscale, several friction models have been presented. In this research, the friction and normal forces applied on the AFM are obtained using the dynamic bending-torsion model of the AFM, based on the Hurtado-Kim friction model (HK), the Johnson-Kendall-Roberts contact model (JKR), and the Greenwood-Williamson roughness model (GW). Finally, the effect of the standard deviation of asperity height on the normal load, friction force, and friction coefficient is studied.
Keywords: atomic force microscopy, contact model, friction coefficient, Greenwood-Williamson model
Procedia PDF Downloads 199
401 Near-Infrared Optogenetic Manipulation of a Channelrhodopsin via Upconverting Nanoparticles
Authors: Kanchan Yadav, Ai-Chuan Chou, Rajesh Kumar Ulaganathan, Hua-De Gao, Hsien-Ming Lee, Chien-Yuan Pan, Yit-Tsong Chen
Abstract:
Optogenetics is an innovative technology now widely adopted by researchers in different fields of the biological sciences. However, due to the weak tissue penetration capability of the short wavelengths used to activate light-sensitive proteins, an invasive light guide has been used in animal studies for photoexcitation of target tissues. Upconverting nanoparticles (UCNPs), which transform near-infrared (NIR) light to short-wavelength emissions, can help address this issue. To improve optogenetic performance, we enhance the target selectivity for optogenetic controls by specifically conjugating the UCNPs with light-sensitive proteins at a molecular level, which shortens the distance as well as enhances the efficiency of energy transfer. We tagged V5 and Lumio epitopes to the extracellular N-terminal of channelrhodopsin-2 with an mCherry conjugated at the intracellular C-terminal (VL-ChR2m) and then bound NeutrAvidin-functionalized UCNPs (NAv-UCNPs) to the VL-ChR2m via a biotinylated antibody against V5 (bV5-Ab). We observed an apparent energy transfer from the excited UCNP (donor) to the bound VL-ChR2m (receptor) by measuring emission-intensity changes at the donor-receptor complex. The successful patch-clamp electrophysiological test and an intracellular Ca2+ elevation observed in the designed UCNP-ChR2 system under optogenetic manipulation confirmed the practical employment of UCNP-assisted NIR-optogenetic functionality. This work represents a significant step toward improving therapeutic optogenetics.
Keywords: Channelrhodopsin-2, near infrared, optogenetics, upconverting nanoparticles
Procedia PDF Downloads 276
400 Adaption Model for Building Agile Pronunciation Dictionaries Using Phonemic Distance Measurements
Authors: Akella Amarendra Babu, Rama Devi Yellasiri, Natukula Sainath
Abstract:
Whereas human beings can easily learn and adopt pronunciation variations, machines need training before being put into use. Humans also keep a minimal vocabulary, with its pronunciation variations stored at the front of their memory for ready reference, while machines keep the entire pronunciation dictionary for ready reference. Supervised methods are used for the preparation of pronunciation dictionaries, which take large amounts of manual effort, cost, and time and are not suitable for real-time use. This paper presents an unsupervised adaptation model for building agile and dynamic pronunciation dictionaries online. These methods mimic the human approach of learning new pronunciations in real time. A new algorithm for measuring sound distances, called Dynamic Phone Warping, is presented and tested. The performance of the system is measured using an adaptation model, and the precision metric is found to be better than 86 percent.
Keywords: pronunciation variations, dynamic programming, machine learning, natural language processing
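Illustrative aside: the abstract does not give the Dynamic Phone Warping recurrence, so the sketch below shows the generic dynamic-programming alignment (DTW/edit-distance style) that such a phone-distance measure is typically built on; the phone symbols and substitution costs are invented for illustration.

```python
# Dynamic-programming alignment of two phone sequences, in the spirit of the
# "Dynamic Phone Warping" idea; the substitution costs are illustrative only,
# not the distances used in the paper.
import numpy as np

def phone_distance(seq_a, seq_b, sub_cost, ins_del_cost=1.0):
    n, m = len(seq_a), len(seq_b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    d[1:, 0] = np.arange(1, n + 1) * ins_del_cost
    d[0, 1:] = np.arange(1, m + 1) * ins_del_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            same = seq_a[i - 1] == seq_b[j - 1]
            sub = sub_cost.get((seq_a[i - 1], seq_b[j - 1]), 0.0 if same else 1.0)
            d[i, j] = min(d[i - 1, j - 1] + sub,       # substitution / match
                          d[i - 1, j] + ins_del_cost,  # deletion
                          d[i, j - 1] + ins_del_cost)  # insertion
    return d[n, m]

# Two pronunciation variants of "tomato" (hypothetical phone sets and costs).
costs = {("ey", "aa"): 0.4, ("aa", "ey"): 0.4}   # similar vowels are cheaper to swap
print(phone_distance(["t", "ax", "m", "ey", "t", "ow"],
                     ["t", "ax", "m", "aa", "t", "ow"], costs))  # 0.4
```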
Procedia PDF Downloads 175
399 A Network Approach to Analyzing Financial Markets
Authors: Yusuf Seedat
Abstract:
The necessity to understand global financial markets has increased following the unfortunate spread of the recent financial crisis around the world. Financial markets are considered to be complex systems consisting of highly volatile movements whose indexes fluctuate without any clear pattern. Analytic methods of stock prices have been proposed in which financial markets are modeled using common network analysis tools and methods. It has been found that two key components of social network analysis are relevant to modeling financial markets, allowing accurate predictions of stock prices within the financial market. Financial markets have a number of interacting components, leading to complex behavioral patterns. This paper describes a social network approach to analyzing financial markets as a viable approach to studying the way complex stock markets function. We also look at how social network analysis techniques and metrics are used to gauge an understanding of the evolution of financial markets, as well as how community detection can be used to qualify and quantify influence within a network.
Keywords: network analysis, social networks, financial markets, stocks, nodes, edges, complex networks
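Illustrative aside: the abstract names centrality metrics and community detection without fixing an implementation, so the sketch below builds a correlation-threshold stock graph with networkx and applies both; the tickers, return series, and threshold are synthetic placeholders.

```python
# Sketch of a correlation-threshold stock network with centrality and community
# detection, using networkx; the price series below are random placeholders.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
tickers = ["AAA", "BBB", "CCC", "DDD", "EEE"]
returns = rng.normal(size=(250, len(tickers)))           # daily returns (synthetic)
corr = np.corrcoef(returns, rowvar=False)

G = nx.Graph()
G.add_nodes_from(tickers)
threshold = 0.05                                          # keep only stronger co-movements
for i in range(len(tickers)):
    for j in range(i + 1, len(tickers)):
        if abs(corr[i, j]) >= threshold:
            G.add_edge(tickers[i], tickers[j], weight=abs(corr[i, j]))

print(nx.degree_centrality(G))        # how connected each stock is
print(nx.betweenness_centrality(G))   # brokerage position in the market graph
print([sorted(c) for c in greedy_modularity_communities(G)])  # market "communities"
```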
Procedia PDF Downloads 191
398 Using Greywolf Optimized Machine Learning Algorithms to Improve Accuracy for Predicting Hospital Readmission for Diabetes
Authors: Vincent Liu
Abstract:
Machine learning (ML) algorithms can achieve high accuracy in predicting outcomes compared to classical models. Metaheuristic, nature-inspired algorithms can enhance traditional ML algorithms by optimizing them, for example by performing feature selection. We compare ten ML algorithms for predicting 30-day hospital readmission rates for diabetes patients in the US, using a dataset from the UCI Machine Learning Repository with feature selection performed by the Greywolf nature-inspired algorithm. The baseline accuracy of the initial random forest model was 65%. After feature engineering, SMOTE for class balancing, and Greywolf optimization, the machine learning algorithms showed better metrics, including F1 scores, accuracy, and confusion matrices, with improvements ranging from 10% to 30%, and a best model of XGBoost with an accuracy of 95%. Applying machine learning in this way can improve patient outcomes, as unnecessary rehospitalizations can be prevented by focusing on patients who are at higher risk of readmission.
Keywords: diabetes, machine learning, 30-day readmission, metaheuristic
Procedia PDF Downloads 61
397 Hyper Tuned RBF SVM: Approach for the Prediction of the Breast Cancer
Authors: Surita Maini, Sanjay Dhanka
Abstract:
Machine learning (ML) involves developing algorithms and statistical models that enable computers to learn and make predictions or decisions based on data without being explicitly programmed. Because of its wide-ranging capabilities, ML is gaining popularity in medical sectors: Medical Imaging, Electronic Health Records, Genomic Data Analysis, Wearable Devices, Disease Outbreak Prediction, Disease Diagnosis, etc. In the last few decades, many researchers have tried to diagnose Breast Cancer (BC) using ML, because early detection of any disease can save millions of lives. Working in this direction, the authors propose a hybrid ML technique, RBF SVM, to predict BC at an earlier stage. The proposed method is implemented on the Breast Cancer UCI ML dataset with 569 instances and 32 attributes. The authors recorded the performance metrics of the proposed model, i.e., Accuracy 98.24%, Sensitivity 98.67%, Specificity 97.43%, F1 Score 98.67%, Precision 98.67%, and run time 0.044769 seconds. The proposed method is validated by K-Fold cross-validation.
Keywords: breast cancer, support vector classifier, machine learning, hyperparameter tuning
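Illustrative aside: an RBF-kernel SVM with k-fold cross-validation on the same UCI breast cancer data (as shipped with scikit-learn) can be sketched as follows; the search grid stands in for the paper's hyper-parameter tuning, whose exact settings are not given in the abstract.

```python
# Illustrative hyper-tuned RBF-kernel SVM with 5-fold cross-validation on the UCI
# breast cancer data bundled with scikit-learn; the search grid is not the authors' settings.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)        # 569 instances, 30 numeric features
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(pipe,
                    {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]},
                    cv=cv, scoring="accuracy").fit(X, y)

print("best parameters:", grid.best_params_)
print("mean k-fold accuracy:", cross_val_score(grid.best_estimator_, X, y, cv=cv).mean())
```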
Procedia PDF Downloads 67
396 What Smart Can Learn about Art
Authors: Faten Hatem
Abstract:
This paper explores the understanding of the role and meaning of art and whether it is perceived to be separate from smart city construction. The paper emphasises the significance of fulfilling the inherent need for discovery and interaction that drives people to explore new places and engage with works of art. It does so by exploring the ways of thinking about, and types of, art in Milton Keynes, illustrating a general pattern of misunderstanding that rests on the separation of smart, art, and architecture, and promoting a better and deeper understanding of the interconnections between neuroscience, art, and architecture. A reflective approach is used to clarify the potential and impact of using art-based research, methodology, and ways of knowing when approaching global phenomena and knowledge production while examining the process of making and developing smart cities, noting in particular that such factors can severely affect the conduct of the study itself. A case study is followed as the research strategy. The qualitative methods included data collection and analysis involving interviews and observations that relied on visuals.
Keywords: smart cities, art and smart, smart cities design, smart cities making, sustainability, city brain and smart cities metrics, smart cities standards, smart cities applications, governance, planning and policy
Procedia PDF Downloads 116
395 Determination of Water Pollution and Water Quality with Decision Trees
Authors: Çiğdem Bakır, Mecit Yüzkat
Abstract:
With the increasing emphasis on water quality worldwide, the search for, and the market for, new and intelligent monitoring systems has grown. The current method is the laboratory process, in which samples are taken from bodies of water and tests are carried out in laboratories. This method is time-consuming, wasteful of manpower, and uneconomical. To solve this problem, we used machine learning methods to detect water pollution in our study. We created decision trees with the Orange3 software and tried to determine all the factors that cause water pollution. An automatic prediction model based on water quality was developed by taking many model inputs, such as water temperature, pH, transparency, conductivity, dissolved oxygen, and ammonia nitrogen, with machine learning methods. The proposed approach consists of three stages: preprocessing of the data used, feature detection, and classification. We assessed the success of our study with different accuracy metrics and presented the results comparatively. In addition, we achieved approximately 98% success with the decision tree.
Keywords: decision tree, water quality, water pollution, machine learning
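Illustrative aside: the preprocess/feature/classify flow with a decision tree can be sketched as below; the column names match the inputs listed above, but the tiny inline dataset and tree depth are invented, and the study itself used the Orange3 tool rather than scikit-learn.

```python
# Sketch of the three-stage flow (preprocess -> features -> classify) with a
# decision tree; the inline measurements and labels are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

data = pd.DataFrame({
    "temperature":   [12.1, 18.4, 25.0, 9.7, 21.3, 16.8],
    "pH":            [7.2, 6.5, 8.1, 7.0, 5.9, 7.4],
    "transparency":  [80, 35, 20, 90, 15, 60],
    "conductivity":  [250, 610, 780, 190, 820, 400],
    "dissolved_O2":  [9.1, 5.2, 3.8, 9.8, 2.9, 6.7],
    "ammonia_N":     [0.1, 0.8, 1.5, 0.05, 2.1, 0.4],
    "polluted":      [0, 1, 1, 0, 1, 0],               # target label
})

X = data.drop(columns="polluted")
y = data["polluted"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1, stratify=y)

tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```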
Procedia PDF Downloads 80
394 A Preliminary Survey of Mosses, in Galahitiya, Meneripitiya Grama Niladhari Division in Rathnapura District of Sri Lanka
Authors: B. W. U. Deepashika
Abstract:
Rathnapura is located in the south-western part of Sri Lanka, the so-called wet zone. This area receives rainfall mainly from the south-west monsoons from May to September. During the remaining months of the year, there is also considerable precipitation due to convective rains. The average annual precipitation is about 4,000 to 5,000 mm, the average temperature varies from 24 to 35 °C, and humidity levels are high. Mosses are one of the important groups of the flora of this region, and they are very sensitive to climatic changes. Proper exploration and systematic studies of mosses in many parts of the country have not yet been carried out; therefore, launching a study on the bryophyte flora of the country has become very important. The preliminary survey of bryophytes was carried out in Galahitiya, Meneripitiya Grama Niladhari Division, located in Ratnapura district, Sabaragamuwa province, about 20 kilometres from Rathnapura; its geographical coordinates are 6° 35' North, 80° 35' East. Samples were collected from different habitats, including home gardens, areas near wells, a small forest patch, tea land, areas near the stream, a non-cemented wall, a cement wall, and ditches. Two small quadrats (1 × 1 m²) were used at each study site. Taxa were identified to the generic level using taxonomic keys produced for different geographic regions of the world. In the present survey, a total of nine mosses belonging to seven families were identified to the generic level: Bryaceae (3) (Bryum sp., Brachymenium sp., Pohlia sp.), Fissidentaceae (1) (Fissidens sp.), Leucobryaceae (1) (Octoblepharum sp.), Calymperaceae (1) (Calymperes sp.), Polytrichaceae (1) (Pogonatum sp.), Pterobryaceae (1) (Pterobryopsis sp.), and Sematophyllaceae (1) (Taxithelium sp.).
Keywords: mosses, wet zone, Sabaragamuwa province, Sri Lanka
Procedia PDF Downloads 225
393 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings
Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies
Abstract:
With the world climate projected to warm and major cities in developing countries becoming increasingly populated and polluted, governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of an adaptable model of these risks. Simulations are performed using the EnergyPlus building physics software. An accurate metamodel is formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels are used to vastly reduce the amount of computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm when forming a metamodel. These techniques were implemented using the PyBrain and scikit-learn python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating overheating and air pollution metrics modelled by EnergyPlus.
Keywords: neural networks, radial basis functions, metamodelling, python machine learning libraries
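Illustrative aside: a present-day equivalent of the comparison (the paper used PyBrain, which is no longer actively maintained) might pit scikit-learn's MLPRegressor against SciPy's RBFInterpolator on sampled inputs; the response function below is a synthetic stand-in for EnergyPlus outputs.

```python
# Illustrative metamodel comparison on a synthetic stand-in for EnergyPlus outputs:
# a neural network (scikit-learn MLPRegressor) versus a radial basis function
# surrogate (SciPy RBFInterpolator). The response function below is invented.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X_train = rng.uniform(0, 1, size=(400, 5))        # randomly sampled building input parameters
X_test = rng.uniform(0, 1, size=(100, 5))
simulate = lambda X: 2.0 + np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3]
y_train, y_test = simulate(X_train), simulate(X_test)

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=7).fit(X_train, y_train)
rbf = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline", smoothing=1e-3)

rmse = lambda y, yhat: float(np.sqrt(np.mean((y - yhat) ** 2)))
print("NN  RMSE:", rmse(y_test, nn.predict(X_test)))
print("RBF RMSE:", rmse(y_test, rbf(X_test)))
```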
Procedia PDF Downloads 446
392 Video-On-Demand QoE Evaluation across Different Age-Groups and Its Significance for Network Capacity
Authors: Mujtaba Roshan, John A. Schormans
Abstract:
Quality of Experience (QoE) drives churn in the broadband networks industry, and good QoE plays a large part in the retention of customers. QoE is known to be affected by the Quality of Service (QoS) factors of packet loss probability (PLP), delay, and delay jitter caused by the network. Earlier results have shown that the relationship between these QoS factors and QoE is non-linear and may vary from application to application. We use the network emulator Netem as the basis for experimentation and evaluate how QoE varies as we change the emulated QoS metrics. Focusing on Video-on-Demand, we discovered that the reported QoE may differ widely for users of different age groups, and that the most demanding age group (the youngest) can require an order of magnitude lower PLP to achieve the same QoE than is required by the most widely studied age group of users. We then used a bottleneck TCP model to evaluate the capacity cost of achieving an order of magnitude decrease in PLP and found that (almost always) a 3-fold increase in link capacity was required.
Keywords: network capacity, packet loss probability, quality of experience, quality of service
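Illustrative aside: the abstract does not specify the bottleneck TCP model, but the classic Mathis et al. approximation (throughput ≈ (MSS/RTT)·C/√p) reproduces the reported scaling: at a fixed offered load, loss falls with the square of capacity, so a tenfold PLP reduction costs roughly a √10 ≈ 3.2-fold capacity increase. The numbers in the sketch are illustrative.

```python
# Illustrative use of the Mathis et al. TCP approximation, throughput ≈ (MSS/RTT) * C / sqrt(p):
# at a fixed offered load, loss scales with 1/capacity^2, so cutting PLP tenfold needs
# roughly a sqrt(10) ≈ 3.2-fold capacity increase. All numbers are placeholders.
from math import sqrt

MSS_BITS = 1460 * 8          # maximum segment size, in bits
RTT = 0.05                   # round-trip time, in seconds
C = sqrt(3 / 2)              # Mathis constant for periodic loss

def mathis_throughput(plp):
    """Achievable long-run TCP throughput (bit/s) at packet loss probability plp."""
    return (MSS_BITS / RTT) * C / sqrt(plp)

print("throughput at p = 1e-2:", mathis_throughput(1e-2) / 1e6, "Mbit/s")
print("throughput at p = 1e-3:", mathis_throughput(1e-3) / 1e6, "Mbit/s")
print("capacity scaling for a 10x lower PLP:", sqrt(10))   # ≈ 3.16, i.e. roughly 3-fold
```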
Procedia PDF Downloads 273
391 Real-Time Lane Marking Detection Using Weighted Filter
Authors: Ayhan Kucukmanisa, Orhan Akbulut, Oguzhan Urhan
Abstract:
Nowadays, advanced driver assistance systems (ADAS) have become popular, since they enable safe driving. Lane detection is a vital step for ADAS, and the performance of the lane detection process is critical to obtaining a highly accurate lane departure warning system (LDWS). Challenging factors such as road cracks, erosion of lane markings, and weather conditions may affect the performance of a lane detection system. In this paper, a 1-D weighted filter based on row filtering is proposed to detect lane markings. The 2-D input image is filtered by the 1-D weighted filter, which considers four pixel values located symmetrically around the centre of the candidate pixel. Performance evaluation is carried out with two metrics: true positive rate (TPR) and false positive rate (FPR). Experimental results demonstrate that the proposed approach provides better lane marking detection accuracy than previous methods while providing real-time processing performance.
Keywords: lane marking filter, lane detection, ADAS, LDWS
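Illustrative aside: a row-wise filter that weights four pixels placed symmetrically around the candidate pixel can be sketched as below; the offsets, weights, and threshold are invented, since the paper's exact coefficients are not given in the abstract.

```python
# Sketch of a 1-D weighted row filter for lane-marking candidates: each row is
# convolved with a kernel built from four pixels placed symmetrically around the
# candidate pixel. The weights and threshold are illustrative, not the paper's values.
import numpy as np
from scipy.ndimage import convolve1d

def lane_marking_response(gray, offset=4, w_near=1.0, w_far=0.5):
    """Row-wise filter emphasising bright ridges of roughly 2*offset width."""
    kernel = np.zeros(2 * offset + 1)
    kernel[offset] = 2.0 * (w_near + w_far)              # candidate (centre) pixel
    kernel[[offset - 1, offset + 1]] = -w_near           # near symmetric neighbours
    kernel[[0, -1]] = -w_far                             # far symmetric neighbours
    return convolve1d(gray.astype(float), kernel, axis=1, mode="nearest")

def detect(gray, threshold=60.0):
    return lane_marking_response(gray) > threshold       # binary lane-marking mask

# Synthetic 2-D test image with one bright vertical stripe (a "lane marking").
img = np.zeros((4, 32))
img[:, 14:18] = 255.0
mask = detect(img)
print(mask[0].nonzero()[0])   # columns flagged as lane-marking pixels
```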
Procedia PDF Downloads 193
390 A Learning Automata Based Clustering Approach for Underwater Sensor Networks to Reduce Energy Consumption
Authors: Motahareh Fadaei
Abstract:
Wireless sensor networks that are used to monitor a specific environment are formed from a large number of sensor nodes. The role of these sensors is to sense particular parameters from the surrounding environment and to communicate them. In these networks, the most important challenge is the management of energy usage. Clustering is one of the methods broadly used to face this challenge. In this paper, a distributed clustering protocol based on learning automata is proposed for underwater wireless sensor networks. The proposed algorithm, called LA-Clustering, forms clusters at the same energy level, based on the energy level of nodes and the connection radius, regardless of the size and structure of the sensor network. The proposed approach is simulated and compared with several other protocols using metrics such as network lifetime, number of alive nodes, and amount of transmitted data. The simulation results demonstrate the efficiency of the proposed approach.
Keywords: clustering, energy consumption, learning automata, underwater sensor networks
Procedia PDF Downloads 314
389 Base Change for Fisher Metrics: Case of the q-Gaussian Inverse Distribution
Authors: Gabriel I. Loaiza Ossa, Carlos A. Cadavid Moreno, Juan C. Arango Parra
Abstract:
It is known that the Riemannian manifold determined by the family of inverse Gaussian distributions endowed with the Fisher metric has negative constant curvature κ = -1/2, as does the family of usual Gaussian distributions. In the present paper, firstly, we arrive at this result by following a different path, much simpler than the previous ones. We first put the family in exponential form, thus endowing the family with a new set of parameters, or coordinates, θ₁, θ₂; then we determine the matrix of the Fisher metric in terms of these parameters; and finally we compute this matrix in the original parameters. Secondly, we define the inverse q-Gaussian distribution family (q < 3) as the family obtained by replacing the usual exponential function with the Tsallis q-exponential function in the expression for the inverse Gaussian distribution and observe that it supports two possible geometries, the Fisher and the q-Fisher geometry. And finally, we apply our strategy to obtain results about the Fisher and q-Fisher geometry of the inverse q-Gaussian distribution family, similar to the ones obtained in the case of the inverse Gaussian distribution family.
Keywords: base of changes, information geometry, inverse Gaussian distribution, inverse q-Gaussian distribution, statistical manifolds
Procedia PDF Downloads 244
388 A Hybrid Pareto-Based Swarm Optimization Algorithm for the Multi-Objective Flexible Job Shop Scheduling Problems
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a new hybrid particle swarm optimization algorithm is proposed for the multi-objective flexible job shop scheduling problem, which is a very important and hard combinatorial problem. The Pareto approach is used for solving the multi-objective problem. Several new local search heuristics are integrated into an algorithm based on the critical block concept to enhance the performance of the algorithm. The algorithm is compared with recently published multi-objective algorithms on benchmarks selected from the literature. Several metrics are used for quantifying the performance and comparing the achieved solutions. The algorithms are also compared based on the weighted summation of objectives approach. The proposed algorithm can find the Pareto solutions more efficiently than the compared algorithms in less computational time.
Keywords: swarm-based optimization, local search, Pareto optimality, flexible job shop scheduling, multi-objective optimization
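Illustrative aside: the Pareto bookkeeping used when comparing schedules on several objectives reduces to a non-dominance filter; the sketch below assumes all objectives are minimised and uses invented objective vectors (e.g., makespan, total workload, critical-machine workload).

```python
# Sketch of the Pareto step used when comparing schedules on multiple objectives;
# all objectives are assumed to be minimised and the example values are invented.
def dominates(a, b):
    """True if solution a is at least as good as b on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only non-dominated solutions (objective vectors)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

swarm_objectives = [
    (42, 310, 95),   # (makespan, total workload, max machine workload)
    (40, 330, 99),
    (45, 300, 90),
    (44, 335, 99),   # dominated by (42, 310, 95)
]
print(pareto_front(swarm_objectives))
```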
Procedia PDF Downloads 366
387 Using Eye Tracking to Measure the Impact of Persuasion Principles in Phishing Emails
Authors: Laura Bishop, Isabel Jones, Linn Halvorsen, Angela Smith
Abstract:
Phishing emails are a form of social engineering where attackers deceive email users into revealing sensitive information or installing malware such as ransomware. Scammers often use persuasion techniques to influence email users to interact with malicious content. This study will use eye-tracking equipment to analyze how participants respond to and process Cialdini’s persuasion principles when utilized within phishing emails. Eye tracking provides insights into what is happening at the subconscious level of the brain that the participant may not be aware of. An experiment is conducted to track participant eye movements whilst they interact with, and then file, a series of persuasive emails delivered at random. Eye tracking metrics will be analyzed in relation to whether a malicious email has been identified as phishing (filed as ‘suspicious’) or not phishing (filed in any other folder). This will help determine the most influential persuasion techniques and those 'areas of interest' within an email that require intervention. The results will aid further research on how to reduce the effects of persuasion on human decision-making when interacting with phishing emails.
Keywords: cybersecurity, human-centric, phishing, psychology
Procedia PDF Downloads 83
386 Crop Recommendation System Using Machine Learning
Authors: Prathik Ranka, Sridhar K, Vasanth Daniel, Mithun Shankar
Abstract:
With growing global food needs and climate uncertainties, informed crop choices are critical for increasing agricultural productivity. Here we propose a machine learning-based crop recommendation system to help farmers choose the most suitable crops for their geographical regions and soil properties. We deploy algorithms such as Decision Trees, Random Forests, and Support Vector Machines on a broad dataset consisting of climatic factors, soil characteristics, and historical crop yields to predict the best choice of crops. The approach first preprocesses the data by assessing and handling missing values, transforming features, and normalizing them, since such models cannot work reliably with missing or unscaled data. Model effectiveness is measured through performance metrics such as accuracy, precision, and recall. The resulting application provides a farmer-friendly dashboard through which farmers can enter their local conditions and receive individualized crop suggestions.
Keywords: crop recommendation, precision agriculture, crop, machine learning
Procedia PDF Downloads 14
385 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability using satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and end of the sprint could result in different FVP metric outcomes. Furthermore, in team sports there is a requirement for rapid analysis and feedback of results from multiple athletes; therefore, developing standardized and automated methods to improve the speed, efficiency, and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint end detection for the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2 x 40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30 °C) on an open grass field. Each player wore a 10 Hz Catapult system unit (Vector S7, Catapult Innovations) inserted in a vest pouch between the scapulae. All data were analyzed following common procedures. The variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration constant τ, in addition to horizontal relative force (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration), 2. the time after peak velocity drops by 0.4 m/s, 3. the time after peak velocity drops by 0.6 m/s, and 4. when the integrated distance from the GPS/GNSS signal reaches 40 m. The goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed using intraclass correlation coefficients (ICC). For goodness-of-fit, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the 0.4 and 0.6 m/s velocity decays, while the 40 m end had the highest RSS values. For inter-trial reliability, the end-of-sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) MSS was achieved had very large to near perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should implement end-of-sprint detection either when peak velocity is reached or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics, although more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into the usual training, which shows promise for sprint monitoring in the field with this technology.
Keywords: automated, biomechanics, team-sports, sprint
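Illustrative aside: the modelling and end-detection steps can be sketched with the standard mono-exponential sprint model v(t) = MSS·(1 − e^(−t/τ)) fitted by SciPy, here with end-detection method 2 (first sample more than 0.4 m/s below peak velocity) on a synthetic 10 Hz trace; the drag-free relations F₀ ≈ MSS/τ and Pmax ≈ F₀·MSS/4 are simplifications, not the study's full processing chain.

```python
# Illustrative fit of the mono-exponential sprint model and end-of-sprint detection
# (method 2: peak velocity minus 0.4 m/s) on a synthetic 10 Hz velocity trace.
# Values and the simple drag-free F0/Pmax relations are placeholders, not the study's code.
import numpy as np
from scipy.optimize import curve_fit

def sprint_model(t, mss, tau):
    return mss * (1.0 - np.exp(-t / tau))

t = np.arange(0, 8.0, 0.1)                                  # 10 Hz time base
v = sprint_model(t, mss=8.5, tau=1.3)
v[t > 6.0] -= 0.8 * (t[t > 6.0] - 6.0)                      # athlete slows after the line
v += np.random.default_rng(3).normal(0, 0.05, t.size)       # GPS-like noise

peak_idx = int(np.argmax(v))                                # method 1 would stop here
after_peak = np.where(v[peak_idx:] < v[peak_idx] - 0.4)[0]  # method 2 threshold
end_idx = peak_idx + int(after_peak[0]) if after_peak.size else v.size - 1

(mss_hat, tau_hat), _ = curve_fit(sprint_model, t[:end_idx], v[:end_idx], p0=(8.0, 1.0))
rss = float(np.sum((v[:end_idx] - sprint_model(t[:end_idx], mss_hat, tau_hat)) ** 2))
f0_rel = mss_hat / tau_hat            # relative horizontal force (N/kg), drag ignored
pmax_rel = f0_rel * mss_hat / 4.0     # relative peak power (W/kg), drag ignored
print(f"MSS={mss_hat:.2f} m/s  tau={tau_hat:.2f} s  F0={f0_rel:.2f}  Pmax={pmax_rel:.2f}  RSS={rss:.3f}")
```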
Procedia PDF Downloads 119
384 Lung HRCT Pattern Classification for Cystic Fibrosis Using a Convolutional Neural Network
Authors: Parisa Mansour
Abstract:
Cystic fibrosis (CF) is one of the most common autosomal recessive diseases among whites. It mostly affects the lungs, causing the infections and inflammation that account for 90% of deaths in CF patients. Because of the high variability in clinical presentation and organ involvement, investigating treatment responses and evaluating lung changes over time is critical to preventing CF progression. High-resolution computed tomography (HRCT) greatly facilitates the assessment of lung disease progression in CF patients. Recently, artificial intelligence has been used to analyze chest CT scans of CF patients. In this paper, we propose a convolutional neural network (CNN) approach to classify CF lung patterns in HRCT images. The proposed network consists of two convolutional layers with 3 × 3 kernels, each followed by max pooling, and two dense layers with 1024 and 10 neurons, respectively. A softmax layer produces a predicted output probability distribution over the classes; it has three outputs corresponding to the categories normal (healthy), bronchitis, and inflammation. To train and evaluate the network, we constructed a patch-based dataset extracted from more than 1100 lung HRCT slices obtained from 45 CF patients. Comparative evaluation showed the effectiveness of the proposed CNN compared to its close peers. A classification accuracy, average sensitivity, and specificity of 93.64%, 93.47%, and 96.61%, respectively, were achieved, indicating the potential of CNNs for analyzing lung CF patterns and monitoring lung health. In addition, the visual features extracted by our proposed method can be useful for automatic measurement and, ultimately, evaluation of the severity of CF patterns in lung HRCT images.
Keywords: HRCT, CF, cystic fibrosis, chest CT, artificial intelligence
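Illustrative aside: a Keras rendering of the architecture as described (two 3 × 3 convolution blocks with max pooling, dense layers of 1024 and 10 units, and a 3-way softmax over normal/bronchitis/inflammation) might look as follows; the patch size and filter counts are assumptions, since the abstract does not state them.

```python
# Illustrative Keras sketch of the described patch classifier; patch size (32x32),
# filter counts, and training settings are assumptions not stated in the abstract.
from tensorflow.keras import layers, models

def build_cf_patch_cnn(patch_size=32, n_classes=3):
    return models.Sequential([
        layers.Input(shape=(patch_size, patch_size, 1)),      # grayscale HRCT patch
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),        # normal / bronchitis / inflammation
    ])

model = build_cf_patch_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels), epochs=20)
```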
Procedia PDF Downloads 65
383 Sustainability Performance in the Post-Pandemic Era: Employee Resilience Impact on Improving Employee and Organizational Performance
Authors: Sonali Mohite
Abstract:
Severe changes to Organizational Sustainability (OS) have been brought about by the COVID-19 pandemic. This situation forces organizations to develop the competencies required to augment Employee Resilience (ER) and achieve profitable growth. This study explores how employee resilience contributes to both individual and organizational success in the wake of the COVID-19 pandemic. We suggest that employees who possess strong coping mechanisms and adaptability are better equipped to handle ongoing disruptions, resulting in improved individual performance metrics such as productivity, engagement, and innovative thinking. Hence, the aim of this research is to explore the effectiveness of ER in improving Employee Performance (EP) and OS in the post-pandemic (PP) era. Using convenience sampling techniques, a total of 422 employees were recruited from numerous organizations. The study's hypotheses were then analysed using Structural Equation Modelling (SEM). According to the study's findings, the ER factors of Job Satisfaction (JS), Self-Efficacy (SE), Supervisors' Support (SS), and Facilitating Conditions (FC) have positive and significant associations with organizational efficiency. Furthermore, the findings also show that the strongest relationship is between SE and employee and organizational performance (EOP).
Keywords: employee resilience, employee performance, organizational performance, sustainability, post-pandemic
Procedia PDF Downloads 22
382 High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks
Authors: Amal Khalifa, Nicolas Vana Santos
Abstract:
Steganography has been known for centuries as an efficient approach for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to seek secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNNs) to hide a secret image within a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transform (DWT) of both colored images and, eventually, to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods.
Keywords: deep learning, steganography, image, discrete wavelet transform, fusion
Procedia PDF Downloads 90
381 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
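Illustrative aside: the analysis pipeline (centralities on the coupling network, then a negative binomial regression with a squared centrality term as the inverted-U test) can be sketched as below; the graph and citation counts are synthetic placeholders, and only degree centrality enters the toy regression (betweenness and closeness would enter analogously).

```python
# Illustrative centrality-vs-impact analysis on a synthetic coupling network:
# networkx centralities, then a negative binomial regression of citation counts on
# degree centrality and its square (the inverted-U test). All data are placeholders.
import numpy as np
import networkx as nx
import statsmodels.api as sm

rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(n=300, p=0.03, seed=1)          # stand-in for the patent coupling network

deg = nx.degree_centrality(G)                            # betweenness/closeness would enter analogously
deg_c = np.array([deg[v] for v in G.nodes()])
citations = rng.poisson(lam=np.exp(1.0 + 25 * deg_c - 220 * deg_c ** 2))   # fake inverted-U counts

X = sm.add_constant(np.column_stack([deg_c, deg_c ** 2]))
nb = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
print(nb.params)   # positive linear and negative quadratic coefficients indicate an inverted U
```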
Procedia PDF Downloads 54
380 Evaluating Hourly Sulphur Dioxide and Ground Ozone Simulated with the Air Quality Model in Lima, Peru
Authors: Odón R. Sánchez-Ccoyllo, Elizabeth Ayma-Choque, Alan Llacza
Abstract:
Sulphur dioxide (SO₂) and surface ozone (O₃) concentrations are associated with diseases. The objective of this research is to evaluate the performance of the WRF-Chem air quality model at a horizontal resolution of 5 km x 5 km. For this purpose, measurements of hourly SO₂ and O₃ concentrations available at three air quality monitoring stations in Lima, Peru, were used to validate the simulated SO₂ and O₃ concentrations obtained with the WRF-Chem model for February 2018. For the quantitative evaluation of the simulations of these gases, statistical techniques were applied: the average of the simulations; the average of the measurements; the Mean Bias (MeB); the Mean Error (MeE); and the Root Mean Square Error (RMSE). The results of these statistical metrics indicate that the simulated SO₂ and O₃ values over-predict the SO₂ and O₃ measurements. For the SO₂ concentration, the MeB values varied from 0.58 to 26.35 µg/m³, the MeE values from 8.75 to 26.5 µg/m³, and the RMSE values from 13.3 to 31.79 µg/m³, while for O₃ concentrations the MeB values varied from 37.52 to 56.29 µg/m³, the MeE values from 37.54 to 56.70 µg/m³, and the RMSE values from 43.05 to 69.56 µg/m³.
Keywords: ground-level ozone, Lima, sulphur dioxide, WRF-Chem
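Illustrative aside: the three evaluation statistics are standard paired-sample metrics; a minimal sketch of their definitions follows, with placeholder hourly values in µg/m³.

```python
# The evaluation statistics used above are standard paired-sample metrics; the
# values below are placeholders (in µg/m³), not the study's measurements.
import numpy as np

def mean_bias(sim, obs):          # MeB: average of (simulated - observed)
    return np.mean(np.asarray(sim) - np.asarray(obs))

def mean_error(sim, obs):         # MeE: average absolute difference
    return np.mean(np.abs(np.asarray(sim) - np.asarray(obs)))

def rmse(sim, obs):               # RMSE: root of the mean squared difference
    return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

sim_o3 = [85.0, 92.0, 110.0, 70.0]   # hourly simulated O3 (illustrative)
obs_o3 = [40.0, 55.0, 60.0, 35.0]    # hourly observed O3 (illustrative)
print(mean_bias(sim_o3, obs_o3), mean_error(sim_o3, obs_o3), rmse(sim_o3, obs_o3))
```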
Procedia PDF Downloads 137
379 Balloon Analogue Risk Task (BART) Performance Indicators Help Predict Outcomes of Matched Savings Program
Authors: Carlos M. Parra, Matthew Sutherland, Ranjita Poudel
Abstract:
Reduced mental bandwidth related to low socioeconomic status (low-SES) might lead to impulsivity and risk-taking behavior, which poses a major hurdle to asset-building (savings) behavior. Understanding the relationship between risk-related personality metrics, laboratory risk behavior, and real-life savings behavior can help facilitate the development of effective asset-building programs, which are vital for mitigating financial vulnerability and income inequality. As such, this study explored the relationship between personality metrics, laboratory behavior in a risky decision-making task, and real-life asset-building (savings) behaviors among individuals with low SES from Miami, Florida (FL). Study participants (12 male, 15 female) included racially and ethnically diverse adults (mean age 41.22 ± 12.65 years) with incomplete higher education (18% had a High School Diploma, 30% an Associate degree, and 52% Some College) and low annual income (mean $13,872 ± $8,020.43). Participants completed eight self-report surveys and played a widely used risky decision-making paradigm called the Balloon Analogue Risk Task (BART). Specifically, participants played three runs of BART (20 trials in each run; 60 trials in total). In addition, asset-building behavior data were collected for 24 participants who opened and used savings accounts and completed a 6-month savings program that involved monthly matches and a final reward for completing the program without any interim withdrawals. Each participant's total savings at the end of this program was the main asset-building indicator considered. In addition, a new effective use of average pump bet (EUAPB) indicator was developed to characterize each participant's ability to place winning bets. This indicator takes the ratio of each participant's total BART earnings to the average pump bet (APB) across all 60 trials. Our findings indicated that EUAPB explained more than a third of the variation in total savings among participants. Moreover, participants who managed to obtain BART earnings of at least 30 cents per unit of their APB also tended to exhibit better asset-building (savings) behavior. In particular, using this criterion to separate participants into high and low EUAPB groups, the nine participants with high EUAPB (mean BART earnings of 35.64 cents per APB) ended up with higher mean total savings ($255.11), while the 15 participants with low EUAPB (mean BART earnings of 22.50 cents per APB) obtained lower mean total savings ($40.01). All mean differences are statistically significant (2-tailed p < .0001), indicating that the relation between higher EUAPB and higher total savings is robust. Overall, these findings can help refine asset-building interventions implemented by policy makers and practitioners interested in reducing financial vulnerability among the low-SES population, specifically by helping identify individuals who are likely to readily take advantage of savings opportunities (such as matched savings programs) and by avoiding prescribing unnecessary and expensive financial coaching programs to these individuals. This study was funded by J.P. Morgan Chase (JPMC) and carried out by scientists from Florida International University (FIU) in partnership with Catalyst Miami.
Keywords: balloon analogue risk task (BART), matched savings programs, asset building capability, low-SES participants
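Illustrative aside: the EUAPB indicator and the 30-cent grouping criterion described above reduce to a simple ratio; the trial values in the sketch are invented.

```python
# Sketch of the EUAPB indicator: total BART earnings divided by the average pump
# bet (APB) across all trials, plus the 30-cent grouping criterion. Trial values
# below are invented for illustration.
def euapb(trial_earnings, trial_pump_bets):
    """Effective use of average pump bet = total earnings / average pump bet."""
    apb = sum(trial_pump_bets) / len(trial_pump_bets)
    return sum(trial_earnings) / apb

# One hypothetical participant: 60 trials of (earnings, pumps bet), in cents.
earnings = [9, 0, 12, 7, 0, 15] * 10          # 0 = balloon popped on that trial
pump_bets = [9, 14, 12, 7, 16, 15] * 10
score = euapb(earnings, pump_bets)
group = "high EUAPB" if score >= 30 else "low EUAPB"
print(round(score, 1), group)
```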
Procedia PDF Downloads 145
378 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum
Authors: Abdulrahman Sumayli, Saad M. AlShahrani
Abstract:
For the hydrogenation process, knowing the solubility of hydrogen (H₂) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H₂ solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while hydrogen solubility was the sole response. Specifically, we employed three different models: Support Vector Regression (SVR), Gaussian Process Regression (GPR), and Bayesian Ridge Regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the Whale Optimization Algorithm (WOA). We evaluated the models using a dataset of solubility measurements in various feedstocks and compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best overall performance, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. The solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.
Keywords: temperature, pressure variations, machine learning, oil treatment
Procedia PDF Downloads 69
377 Combined Automatic Speech Recognition and Machine Translation in Business Correspondence Domain for English-Croatian
Authors: Sanja Seljan, Ivan Dunđer
Abstract:
The paper presents combined automatic speech recognition (ASR) for English and machine translation (MT) for English and Croatian in the domain of business correspondence. The first part presents the results of training the commercial ASR system on two English data sets, enriched by error analysis. The second part presents the results of machine translation performed by the online tool Google Translate for the English-Croatian and Croatian-English language pairs. Human evaluation in terms of usability is conducted, and internal consistency is calculated by Cronbach's alpha coefficient, enriched by error analysis. Automatic evaluation is performed with the WER (Word Error Rate) and PER (Position-independent word Error Rate) metrics, followed by an investigation of Pearson's correlation with the human evaluation.
Keywords: automatic machine translation, integrated language technologies, quality evaluation, speech recognition
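Illustrative aside: WER, one of the two automatic metrics above, is the word-level Levenshtein distance normalised by the reference length (PER is similar but ignores word order); a minimal sketch with an invented sentence pair follows.

```python
# Sketch of the WER metric used above: word-level Levenshtein (edit) distance
# divided by the reference length. The sentence pair is an invented example.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1] / len(ref)

ref = "please confirm receipt of the attached invoice"
hyp = "please confirm the receipt of attached invoice"
print(round(word_error_rate(ref, hyp), 3))   # 0.286: one insertion plus one deletion
```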
Procedia PDF Downloads 484
376 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity
Authors: Julia Baker
Abstract:
Development has a major role to play in stopping biodiversity loss. But the 'silo species' protection of legislation (where certain species are protected while many are not) means that development can be 'legally compliant' and still result in biodiversity loss. 'Net Gain' (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial because people suspect 'offsetting' to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on GPP. It concerns major upgrades of the UK's transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not be. Consequently, achieving NG within the transport corridor was not possible and offsetting was required. The first 'lessons learnt' were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included: biodiversity cannot be measured easily, unlike other sustainability factors such as carbon and water that have metrics for target-setting and measuring progress; and the mindset that biodiversity costs money and does not generate cash in return, which is the opposite of carbon or waste, for example, where people can see how 'sustainability' actions save money. The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boosts industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting NG progress whilst ensuring that NG decision-making was based on rich ecological data. Institutional change was best achieved by supporting, mentoring, and training sustainability/environmental managers so that these 'frontline' staff could embed GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups, and landowners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. From this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between:
- Delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another.
- Achieving NG locally to the development whilst contributing towards national conservation priorities through landscape-level planning.
- Not just protecting the extent and condition of existing biodiversity but 'doing more'.
- Securing offsets 'in perpetuity', for which the multi-sector collaborations identified practical, workable solutions.
But key was strengthening linkages between biodiversity measures implemented for development and conservation work undertaken by local organizations, so that developers support NG initiatives that really count.
Keywords: biodiversity offsetting, development, nature conservation planning, net gain
Procedia PDF Downloads 195