Search results for: informative theoretic similarity metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1451

1121 Organotin (IV) Based Complexes as Promiscuous Antibacterials: Synthesis, in vitro, in Silico Pharmacokinetic, and Docking Studies

Authors: Wajid Rehman, Sirajul Haq, Bakhtiar Muhammad, Syed Fahad Hassan, Amin Badshah, Muhammad Waseem, Fazal Rahim, Obaid-Ur-Rahman Abid, Farzana Latif Ansari, Umer Rashid

Abstract:

Five novel triorganotin (IV) compounds have been synthesized and characterized. In each, the tin atom is penta-coordinated, assuming a trigonal-bipyramidal geometry. Using in silico derived parameters, the objective of our study was to design and synthesize promiscuous antibacterials potent enough to combat resistance. Among the various synthesized organotin (IV) complexes, compound 5 was found to be a potent antibacterial agent against various bacterial strains. Further lead optimization of drug-like properties was evaluated through in silico predictions. Data mining and computational analysis were utilized to characterize compound promiscuity and thereby reduce the drug attrition rate in designing antibacterials. Xanthine oxidase and human glucose-6-phosphatase were identified as the only true-positive off-target hits by the ChEMBL database and other tools utilizing the similarity ensemble approach. Propensity towards the a-3 receptor, human macrophage migration factor, and thiazolidinedione were found to be false-positive off-targets with E-value ≥ 10^-4 for compounds 1, 3, and 4. Further, the positive drug-drug interaction of compound 1 as a uricosuric was validated by all databases and by docked protein targets with sequence similarity and compositional matrix alignment via BLAST software. The promiscuity of compound 5 was further confirmed by in silico binding to different antibacterial targets.

Keywords: antibacterial activity, drug promiscuity, ADMET prediction, metallo-pharmaceutical, antimicrobial resistance

Procedia PDF Downloads 479
1120 Measurement of Innovation Performance

Authors: M. Chobotová, Ž. Rylková

Abstract:

Times of change associated with globalization, tougher competition, shifting market structures, and economic downturn all force companies to think about their competitive advantages. Handled well, these changes can bring a company a competitive advantage and help improve its competitive position in the market. The policy of the European Union is focused on fast-growing innovative companies which quickly respond to market demands and consequently increase their competitiveness. To meet those objectives, companies need the right conditions and the support of their state.

Keywords: innovation, performance, measurements metrics, indices

Procedia PDF Downloads 353
1119 Towards Law Data Labelling Using Topic Modelling

Authors: Daniel Pinheiro Da Silva Junior, Aline Paes, Daniel De Oliveira, Christiano Lacerda Ghuerren, Marcio Duran

Abstract:

The Courts of Accounts are institutions responsible for overseeing and pointing out irregularities in Public Administration expenses. They face a high demand for processes to be analyzed, whose decisions must be grounded in the relevant laws. Among the existing large number of processes, several cases report similar subjects, so previous decisions on already analyzed processes can serve as precedents for current processes that refer to similar topics. Identifying similar topics is thus an open, yet essential, task for identifying similarities between processes. Since the actual number of topics is considerably large, it is tedious and error-prone to identify topics using a purely manual approach. This paper presents a tool based on Machine Learning and Natural Language Processing to assist in building a labeled dataset. The tool relies on Topic Modelling with Latent Dirichlet Allocation to find the topics underlying a document, followed by the Jensen-Shannon distance metric to score the similarity between document pairs. Furthermore, in a case study with a corpus of decisions of the Rio de Janeiro State Court of Accounts, it was noted that data pre-processing plays an essential role in modeling relevant topics. Also, the combination of topic modeling with a distance metric calculated over the documents' topic distributions proved useful in helping to construct a labeled base of similar and non-similar document pairs.
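The LDA-plus-Jensen-Shannon pairing described above can be sketched as follows. This is a minimal illustration on a toy three-document corpus (not the Court of Accounts data), using scikit-learn's LDA and SciPy's Jensen-Shannon distance as stand-ins for the tool's actual implementation:

```python
from scipy.spatial.distance import jensenshannon
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: two documents on a similar subject, one unrelated.
docs = [
    "contract irregularity in public works expenses",
    "irregularity found in public works contract audit",
    "school budget allocation for teacher salaries",
]

# Bag-of-words counts, then LDA to infer per-document topic distributions.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # each row is a topic distribution summing to 1

# Jensen-Shannon distance between topic distributions of document pairs:
# 0 means identical topic mixtures; 1 (in base 2) means maximally different.
d_similar = jensenshannon(theta[0], theta[1], base=2)
d_dissimilar = jensenshannon(theta[0], theta[2], base=2)
```

The distances can then be thresholded (or converted to a similarity score) to propose candidate similar/non-similar pairs for human labelling.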

Keywords: courts of accounts, data labelling, document similarity, topic modeling

Procedia PDF Downloads 152
1118 The Impact of Riparian Alien Plant Removal on Aquatic Invertebrate Communities in the Upper Reaches of Luvuvhu River Catchment, Limpopo Province

Authors: Rifilwe Victor Modiba, Stefan Hendric Foord

Abstract:

Invasive alien plants (IAPs) have considerable negative impacts on freshwater habitats, and South Africa has implemented an innovative Working for Water (WfW) programme for the systematic removal of these plants, aimed at, amongst other objectives, restoring biodiversity and ecosystem services in these threatened habitats. These restoration processes are expensive and have to be evidence-based. In this study, in-stream macroinvertebrate and adult Odonata assemblages were used as indicators of restoration success by quantifying the response of biodiversity metrics for these two groups to the removal of IAPs in a strategic water resource of South Africa that is extensively invaded. The study consisted of a replicated design that included 45 sampling units, viz. 15 invaded, 15 uninvaded, and 15 cleared sites stratified across the upper reaches of six sub-catchments of the Luvuvhu River catchment, Limpopo Province. Cleared sites were only considered if they had received at least two WfW treatments in the last three years. The benthic macroinvertebrate and adult Odonata assemblages in each of these sampling units were surveyed between November and March of 2013/2014 and 2014/2015, respectively. Generalized Linear Models (GLMs) with a log link function and Poisson error distribution were fitted across the three invasion classes (invaded, cleared, and uninvaded) for abundance and for metrics whose residuals were not normally distributed or had unequal variance. Redundancy analysis (RDA) was done for EPTO genera (Ephemeroptera, Plecoptera, Trichoptera, and Odonata) and adult Odonata species abundance. GLMs were also fitted for the abundance of genera and Odonata species associated with the RDA environmental factors. Sixty-four benthic macroinvertebrate families, 57 EPTO genera, and 45 adult Odonata species were recorded across all 45 sampling units. There was no significant difference between the SASS5 total score, ASPT, and family richness of the three invasion classes.
Although clearing had only a weak positive effect on adult Odonata species richness, it had a positive impact on DBI scores. These differences were mainly the result of significantly larger DBI scores in the cleared sites as compared to the invaded sites. Results suggest that water quality is positively impacted by repeated clearing, pointing to the importance of follow-up procedures after initial clearing. Adult Odonata diversity, as measured by richness, endemicity, threat, and distribution, responded positively to all forms of clearing. Clearing had a significant impact on Odonata assemblage structure but did not affect EPTO structure. Variation partitioning showed that 21.8% of the variation in the EPTO assemblage and 16% of the variation in Odonata structure could be explained by spatial and environmental variables. The response of the diversity metrics to clearing increased in significance at finer taxonomic resolutions, particularly for adult Odonata, whose metrics significantly improved with clearing and whose structure responded to both invasion and clearing. The study recommends the use of the DBI for surveying river health when hydraulic biotopes are poor.

Keywords: DBI, evidence-based conservation, EPTO, macroinvertebrates

Procedia PDF Downloads 169
1117 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. Besides, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed scheme.

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 225
1116 Isolation and Identification of Probiotic Lactic Acid Bacteria with Cholesterol Lowering Potential and Their Use in Fermented Milk Product

Authors: Preeyarach Whisetkhan, Malai Taweechotipatr, Ulisa Pachekrepapol

Abstract:

An elevated level of blood cholesterol, or hypercholesterolemia, may lead to atherosclerosis and poses a major risk for cardiovascular diseases. Probiotics play a crucial role in human health, and probiotic bacteria that possess bile salt hydrolase (BSH) activity can be used to lower the cholesterol level of the host. The aim of this study was to investigate whether lactic acid bacteria (LAB) isolated from traditional Thai fermented foods were able to exhibit bile salt hydrolase activity, and to assess their use in fermented milk. A total of 28 isolates were tested for BSH activity by a plate method on MRS agar supplemented with 0.5% sodium salt of taurodeoxycholic acid and incubated at 37°C for 48 h under anaerobic conditions. The results showed that isolates FN1-1 and FN23-3 possessed strong BSH activity. The FN1-1 and FN23-3 isolates were then identified by phenotype, biochemical characteristics, and genotype (16S rRNA sequencing). Isolate FN1-1 showed 99.92% similarity to Lactobacillus pentosus DSM 20314(T), while isolate FN23-3 showed 99.94% similarity to Enterococcus faecium CGMCC 1.2136(T). Lactobacillus pentosus FN1-1 and Enterococcus faecium FN23-3 were tolerant of pH 3-4 and of 0.3% and 0.8% bile. The bacterial count and pH of milk fermented with Lactobacillus pentosus FN1-1 at 37°C and 43°C were investigated. The results revealed that Lactobacillus pentosus FN1-1 was able to grow in milk, leading to a decrease in pH. Fermentation at 37°C resulted in a faster growth rate than at 43°C. Lactobacillus pentosus FN1-1 is a candidate probiotic for use in fermented milk products to reduce the risk of high-cholesterol diseases.

Keywords: probiotics, lactic acid bacteria, bile salt hydrolase, cholesterol

Procedia PDF Downloads 131
1115 Effects of Microbial Biofertilization on Nodulation, Nitrogen Fixation, and Yield of Lablab purpureus

Authors: Benselama Amel, Ounane S. Mohamed, Bekki Abdelkader

Abstract:

A collection of 20 isolates was obtained from fresh nodules of the legume plant Lablab purpureus. These isolates were authenticated by inoculating seedlings grown in jars containing sand. The results obtained after two months of culture revealed that all 20 isolates (100%) are able to nodulate their host plant. The results were analyzed statistically by ANOVA using the Statistica software and showed that inoculation significantly improved all the growth parameters (plant height, dry weight of the aerial parts and roots, and number of nodules). We evaluated the tolerance of all strains of the collection to the major stress factors, such as salinity, pH, and extreme temperature. Osmotolerance reached a concentration of up to 1710 mM NaCl. The strains were also able to grow over a wide range of pH, from 4.5 to 9.5, and of temperature, between 4°C and 40°C. We also tested the effect of acidity, aluminum, and iron deficiency on the Lablab-rhizobia symbiosis. Lablab purpureus was not affected by the presence of high concentrations of aluminum. On the other hand, iron deficiency caused a clear decrease in the dry biomass of the aerial part. The results for all the phenotypic characters were treated with the Minitab statistical software; the numerical analysis showed that these bacterial strains divide into two distinct groups at a similarity level of 86%. SDS-PAGE was carried out to determine the total protein profiles of the strains. The similarity coefficients of the polypeptide bands between the isolates and the reference strains (Bradyrhizobium, Mesorhizobium sp.) confirm that our strains belong to the rhizobia.

Keywords: SDS-PAGE, rhizobia, symbiosis, phenotypic characterization, Lablab purpureus

Procedia PDF Downloads 278
1114 Performance Practices in Classic Piano Music

Authors: Mahdi Kazemi

Abstract:

Today's performances on the pianoforte or fortepiano are cheerful, musical, expressive, and at the same time informative. Early Music is an exciting and richly drawn magazine that is unmatched in its field; first published in 1973, it is a magazine for anyone interested in early music and its contemporary interpretation. Discussion of Alexander Scriabin's (1871-1915) work has traditionally focused on his music of the mid and late 1900s, and the discussion of his personal philosophy and its influence on his music also focuses on these two periods. Over the last few decades, the repertoire of British classical solo piano has received increasing interest from researchers. From the piano rolls of the early 20th century, much can be inferred about the practice of romantic piano playing. Haydn's most important piano works are the sonatas, which collectively represent his development as a composer from the early sonatas to the last three, dating from 1794.

Keywords: piano, classic piano, performance, music

Procedia PDF Downloads 161
1113 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score; others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features, and are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than well-known baseline approaches such as Random Forest (RF) and Support Vector Machine (SVM).
The best accuracy is obtained when LSTM layers are used, while in terms of balance between the classes, dense layers perform better, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to investigate why all the classifiers led to worse accuracy results in Italian as compared to Greek, given that the linguistic features employed are language-independent.
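The pairwise-classification setup above can be illustrated with a toy sketch. The random vectors stand in for the real embedding/string features (which are not reproduced here), and a Random Forest, one of the paper's baselines, stands in for the deep architectures:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins for sentence vectors: the reference translation,
# an SMT output (noisier), and an NMT output (closer to the reference).
n, dim = 200, 16
ref = rng.normal(size=(n, dim))
smt = ref + rng.normal(scale=1.0, size=(n, dim))
nmt = ref + rng.normal(scale=0.5, size=(n, dim))

def cosine(a, b):
    """Row-wise cosine similarity between two matrices of vectors."""
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

# Pairwise representation: similarity of each output to the reference,
# plus the similarity between the two outputs, as in the framework above.
X = np.column_stack([cosine(smt, ref), cosine(nmt, ref), cosine(smt, nmt)])
# Label 1 when the NMT output is closer to the reference.
y = (np.linalg.norm(nmt - ref, axis=1)
     < np.linalg.norm(smt - ref, axis=1)).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X[:150], y[:150])
acc = clf.score(X[150:], y[150:])
```

The real framework replaces the synthetic vectors with word embeddings and string features, and the Random Forest with the dense/CNN/LSTM networks compared in the paper.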

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 107
1112 A Structured Mechanism for Identifying Political Influencers on Social Media Platforms: Top 10 Saudi Political Twitter Users

Authors: Ahmad Alsolami, Darren Mundy, Manuel Hernandez-Perez

Abstract:

Social media networks, such as Twitter, offer the perfect opportunity to affect, positively or negatively, the political attitudes of large audiences. One of the most important factors contributing to this effect is the existence of influential users, who have developed a reputation for their awareness and experience on specific subjects. Therefore, knowledge of the mechanisms for identifying influential users on social media is vital for understanding their effect on their audience. The concept of the influential user is based on the pioneering work of Katz and Lazarsfeld (1959), who created the concept of 'opinion leaders' to indicate that ideas first flow from mass media to opinion leaders and then to the rest of the population. Hence, the objective of this research was to provide a reliable and accurate structured mechanism for identifying influential users, one that could be applied to different platforms, places, and subjects. Twitter was selected as the platform of interest and Saudi Arabia as the context for the investigation, because Saudi Arabia has a large number of Twitter users, some of whom are considerably active in setting agendas and disseminating ideas. The study considered the scientific methods that have previously been used to identify public opinion leaders, utilizing metrics software on Twitter. The key findings propose multiple novel metrics for comparing Twitter influencers, including the number of followers, social authority, and the use of political hashtags, together with four secondary filtering measures. Using ratio and percentage calculations to classify the most influential users, Twitter accounts were filtered, analyzed, and ranked. The structured approach is used as a mechanism to explore the top ten influencers on Twitter in the political domain in Saudi Arabia.
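A minimal sketch of the ratio/percentage-style composite scoring described above. All account records and field names here are hypothetical, and the equal-weight combination of normalized metrics is an assumption for illustration, not the authors' exact formula:

```python
# Hypothetical influencer records; field names are illustrative only.
accounts = [
    {"user": "A", "followers": 2_000_000, "social_authority": 80, "political_hashtags": 120},
    {"user": "B", "followers":   500_000, "social_authority": 95, "political_hashtags": 300},
    {"user": "C", "followers": 5_000_000, "social_authority": 60, "political_hashtags":  10},
]

def normalize(values):
    """Min-max scale a list of numbers to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Composite score: equal-weight mean of the normalized primary metrics.
metrics = ["followers", "social_authority", "political_hashtags"]
cols = {m: normalize([a[m] for a in accounts]) for m in metrics}
for i, account in enumerate(accounts):
    account["score"] = sum(cols[m][i] for m in metrics) / len(metrics)

# Rank candidates; secondary filters would then prune this list.
ranked = sorted(accounts, key=lambda a: a["score"], reverse=True)
```

Here account B ranks first despite the fewest followers, showing why a composite of several metrics can differ from a raw follower count.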

Keywords: twitter, influencers, structured mechanism, Saudi Arabia

Procedia PDF Downloads 113
1111 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than one-step prediction of the whole segment, especially when the latency prediction of the downstream sub-segments is trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
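A small numeric sketch of finding (1), using synthetic latency samples (the values are illustrative, not Taiwan Freeway data), shows why the median is the more robust prediction target:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 5-minute latency samples for one freeway segment: a stable
# typical travel time plus rare, heavy congestion outliers.
typical = rng.normal(loc=600.0, scale=20.0, size=950)     # seconds
outliers = rng.normal(loc=2400.0, scale=300.0, size=50)   # rare jams
latencies = np.concatenate([typical, outliers])

mean_lat = latencies.mean()
median_lat = np.median(latencies)

# The median stays near the typical 600 s, while the mean is dragged
# upward by the 5% of outliers: predicting the mean effectively forces
# the model to chase these outliers.
```

Under these assumed numbers, the mean sits far above the value a driver typically experiences, which mirrors the paper's observation that average-latency prediction is "plagued by outliers".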

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 150
1110 Model-Viewer for Setting Interactive 3D Objects of Electronic Devices and Systems

Authors: Julio Brégains, Ángel Carro, José-Manuel Andión

Abstract:

Virtual 3D objects constitute invaluable tools for teaching practical engineering subjects at all educational levels, from basic to advanced. For instance, they can be equipped with animation or informative labels, manipulated by mouse movements, and even immersed in a real environment through augmented reality. In this paper, we present the investigation and description of a set of applications for creating, editing, and making use of interactive 3D models that represent electric and electronic devices and systems. Several examples designed with the described tools are exhibited, mainly to show their capabilities as educational technological aids, applicable not only to the field of electricity and electronics but also to a much wider range of technical areas.

Keywords: educational technology, Google model viewer, ICT educational tools, interactive teaching, new tools for teaching

Procedia PDF Downloads 46
1109 High-pressure Crystallographic Characterization of f-block Element Complexes

Authors: Nicholas B. Beck, Thomas E. Albrecht-Schönzart

Abstract:

High pressure decreases the lengths of metal-ligand bonds, which has proven to be incredibly informative in uncovering differences in bonding between lanthanide and actinide complexes. Spectroscopic studies have revealed that the degree of f-electron contribution to the metal-ligand bonds increases under pressure to a far greater degree in the actinides than in the lanthanides. However, the actual changes in bond lengths have yet to be quantified experimentally, although they have been predicted computationally. Using high-pressure crystallographic techniques, crystal structures of lanthanide complexes have been obtained at pressures up to 5 GPa for both hard- and soft-donor ligands. These studies have revealed some unpredicted changes in the coordination environment, as well as provided experimental support for computational results.

Keywords: crystallography, high-pressure, lanthanide, materials

Procedia PDF Downloads 74
1108 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis, accelerating cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of three Python scripts, all easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using grayscale image conversion and binary thresholding, allowing computer vision to better distinguish between cells and non-cells.
Its results were comparable to manually analyzed results, but with result acquisition times reduced to 2-5 minutes per recording from 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection in order to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performs automated computer-vision analysis of calcium imaging recordings from neuronal cell bodies in cell cultures. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
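The threshold-then-label step can be sketched on a synthetic frame as follows. The pipeline itself uses OpenCV; scipy.ndimage is used here only as a stand-in for the same grayscale-threshold-label idea:

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale frame: two bright "cell bodies" on a dark background.
frame = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
frame[(yy - 16) ** 2 + (xx - 16) ** 2 < 25] = 200.0
frame[(yy - 48) ** 2 + (xx - 40) ** 2 < 36] = 180.0

# Binary thresholding to separate cells from background.
binary = frame > 100.0

# Connected-component labeling finds each contiguous bright region.
labels, n_cells = ndimage.label(binary)

# Mean "fluorescence" within each detected region; tracking this value
# over frames gives the activity trace for each cell.
mean_fluorescence = ndimage.mean(frame, labels=labels,
                                 index=range(1, n_cells + 1))
```

In the real pipeline the same per-region means, computed frame by frame, become the fluorescence time series compared against the expert's manual traces.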

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 64
1107 The Executive Functioning Profile of Children and Adolescents with a Diagnosis of OCD: A Systematic Review and Meta-Analysis

Authors: Parker Townes, Aisouda Savadlou, Shoshana Weiss, Marina Jarenova, Suzzane Ferris, Dan Devoe, Russel Schachar, Scott Patten, Tomas Lange, Marlena Colasanto, Holly McGinn, Paul Arnold

Abstract:

Some research suggests obsessive-compulsive disorder (OCD) is associated with impaired executive functioning: higher-level mental processes involved in carrying out tasks and solving problems. Relevant literature was identified systematically through online databases. Meta-analyses were conducted for task performance metrics reported by at least two articles. Results were synthesized by the executive functioning domain measured through each performance metric. Heterogeneous literature was identified, typically involving few studies using consistent measures. From 29 included studies, analyses were conducted on 33 performance metrics from 12 tasks. Results suggest moderate associations of working memory (two out of five tasks presented significant findings), planning (one out of two tasks presented significant findings), and visuospatial abilities (one out of two tasks presented significant findings) with OCD in youth. There was inadequate literature or contradictory findings for other executive functioning domains. These findings suggest working memory, planning, and visuospatial abilities are impaired in pediatric OCD, with mixed results. More work is needed to identify the effect of age and sex on these results. Acknowledgment: This work was supported by the Alberta Innovates Translational Health Chair in Child and Youth Mental Health. The funders had no role in the design, conducting, writing, or decision to submit this article for publication.

Keywords: obsessive-compulsive disorder, neurocognition, executive functioning, adolescents, children

Procedia PDF Downloads 72
1106 Developing the Methods for the Study of Static and Dynamic Balance

Authors: K. Abuzayan, H. Alabed, J. Ezarrugh, M. Agila

Abstract:

Static and dynamic balance are essential in daily and sports life, and many factors have been identified as influencing static balance control. The aim of this study was therefore to apply the extrapolated centre of mass (XCoM) method and other relevant variables (CoP, CoM, Fh, KE, P, Q, and AI) to investigate sport-related activities such as hopping and jumping. Many studies have presented CoP data without mentioning its accuracy, so several experiments were done to establish the agreement between the CoP and the projected CoM in a static condition. Five healthy males (mean ± SD: age 24.6 ± 4.5 years, height 177 ± 6.3 cm, body mass 72.8 ± 6.6 kg) participated in this study. The implementation of the XCoM method was found to be practical for evaluating both static and dynamic balance. The general finding was that the CoP, the CoM, the XCoM, Fh, and Q were more informative than the other variables (e.g., KE, P, and AI) during static and dynamic balance. The XCoM method was found to be applicable to dynamic balance as well as static balance.
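Assuming the standard inverted-pendulum formulation of the extrapolated centre of mass (XCoM = CoM + v/omega0, with omega0 = sqrt(g/l)), a minimal numeric sketch with illustrative values looks like:

```python
import math

def extrapolated_com(com_pos, com_vel, leg_length, g=9.81):
    """XCoM = CoM + v / omega0, where omega0 = sqrt(g / l)
    (inverted-pendulum model of upright balance)."""
    omega0 = math.sqrt(g / leg_length)
    return com_pos + com_vel / omega0

# Illustrative values: CoM 0.02 m ahead of the ankle, moving forward
# at 0.3 m/s, with an effective pendulum length of 0.9 m.
xcom = extrapolated_com(com_pos=0.02, com_vel=0.3, leg_length=0.9)

# If the XCoM stays within the base of support, balance can be
# maintained without stepping; outside it, corrective action is needed.
```

This captures why the XCoM is informative for dynamic balance: unlike the CoM position alone, it folds the CoM velocity into a single position-like quantity that can be compared directly with the base of support.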

Keywords: centre of mass, static balance, dynamic balance, extrapolated centre of mass

Procedia PDF Downloads 400
1105 Analyzing the Efficiency of Initiatives Taken against Disinformation during Election Campaigns: Case Study of Young Voters

Authors: Fatima-Zohra Ghedir

Abstract:

Social media platforms have been actively working on solutions and have combined their efforts with media, policy makers, educators, and researchers to protect citizens and prevent interference in information, political discourse, and elections. Facebook, for instance, deleted fake accounts, implemented fake-account and fake-content detection algorithms, partnered with news agencies to manually fact-check content, and changed its newsfeed display. Twitter and Instagram regularly communicate on their efforts and notify their users of improvements and safety guidelines. More funds have been allocated to media literacy programs to empower citizens ahead of coming elections. This paper investigates the efficiency of these initiatives and analyzes the metrics used to measure their success or failure. The objective is also to determine which segments of the population remain prone to falling into disinformation traps during elections despite the measures taken over the last four years, and to examine the groups who were positively impacted by these measures. This paper relies on both desk and field methodologies. For this study, a survey was administered to French students aged between 17 and 29 years old, and semi-guided interviews were conducted with a similar audience. The analysis of the survey and the interviews shows that respondents were exposed to the initiatives described above and are aware of the existence of disinformation issues. However, they do not understand what disinformation really entails. For instance, for most of them, disinformation is simply synonymous with an opposing point of view, without regard to the truthfulness of the content. Besides, they still consume and believe the information shared by their friends and family, with little questioning of the ways their close ones get informed.

Keywords: democratic elections, disinformation, foreign interference, social media, success metrics

Procedia PDF Downloads 87
1104 Development of a Regression Based Model to Predict Subjective Perception of Squeak and Rattle Noise

Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke

Abstract:

Advancements in electric vehicles have significantly reduced powertrain noise and the number of moving components in vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices are followed to identify the severity of noises based on subjective ratings, but it can be a tedious process to rate each development sample and make changes to reduce the severity. Additionally, the severity rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was identified to evaluate a squeak and rattle noise issue. Physical tests were carried out for random and sine excitation profiles. The aim was to assess the noise subjectively using the jury rating method and to evaluate it objectively by measuring the noise. A suitable jury evaluation method was selected for the activity, and recorded sounds were replayed for jury rating. Objective sound quality metrics, viz. loudness, sharpness, roughness, fluctuation strength and overall Sound Pressure Level (SPL), were measured. Based on this, correlation coefficients were established to identify the sound quality metrics that contribute most to the identified noise issue. Regression analysis was then performed to establish the correlation between subjective and objective data, and a mathematical model was prepared using artificial intelligence and machine learning algorithms. The developed model was able to predict the subjective rating with good accuracy.
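The correlation-then-regression pipeline described above can be sketched in a few lines. The loudness values and jury ratings below are purely illustrative stand-ins for the measured metrics and jury data, and a single-predictor least-squares fit stands in for the full multi-metric regression model:

```python
import math

# Hypothetical jury ratings (1-10 scale) and measured loudness (sone) for
# recorded squeak/rattle samples; all values are illustrative only.
loudness = [4.2, 5.1, 6.3, 7.0, 8.4, 9.1]
rating   = [8.5, 7.9, 6.8, 6.1, 4.9, 4.2]   # louder noise -> worse rating

def pearson(x, y):
    """Correlation coefficient used to screen candidate metrics."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fit_line(x, y):
    """Ordinary least squares for rating = a + b * metric."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return my - b * mx, b

r = pearson(loudness, rating)   # strong negative correlation expected
a, b = fit_line(loudness, rating)
predicted = a + b * 6.0         # predicted jury rating at 6 sone
```

In practice each candidate metric (sharpness, roughness, fluctuation strength, SPL) would be screened for correlation this way before entering the multivariate model.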

Keywords: BSR, noise, correlation, regression

Procedia PDF Downloads 53
1103 Communication Development for Development Communication: Prospects and Challenges of New Media Technologies in South East Zone, Nigeria

Authors: O. I. Ekwueme

Abstract:

New media technologies are noted for their immense contributions to various sectors of the economy, which are believed to have driven the development of European countries. There is an assumption that we cannot have development communication without communication development, but it is unclear whether new media technologies contribute to development in the South-East zone of Nigeria. The study employed a mixed-method approach and found that new media technologies have a very minimal relationship to development in the South-East zone. It was discovered that media reporting on development news is basically informative rather than interactive, and that the South-East zone is scarcely covered compared with other zones. The study argues that the communication technologies introduced in Nigeria arose out of the struggle for independence. It is recommended that media organisations in the South-East zone give adequate coverage to the zone and become more interactive.

Keywords: communication, development, new media, technologies

Procedia PDF Downloads 304
1102 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model

Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li

Abstract:

Compared with terrestrial networks, the traffic of a spatial information network exhibits both self-similarity and short-correlation characteristics. By studying its traffic prediction methods, the resource utilization of a spatial information network can be improved, and such methods can provide an important basis for traffic planning. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long correlation and detail components with short correlation, and a time series hybrid prediction model based on wavelet decomposition is proposed to predict the spatial network traffic. Firstly, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing-off and cut-off characteristics of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component can be modeled with an ARIMA model after smoothing. Finally, the prediction results of the multiple models are combined to obtain the prediction for the original data. The method not only considers the self-similarity of a spatial information network but also takes into account the short correlation caused by bursty network traffic, and it is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with typical time series models, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, making it more suitable for a spatial information network.
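The decompose-model-recombine idea can be sketched with a one-level Haar transform and AR(1) components. The traffic values are synthetic, and a real implementation would use a wavelet library and full AR/MA/ARMA/ARIMA model identification for each component:

```python
# Synthetic traffic series (arbitrary units); illustrative only.
traffic = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.5, 15.0]

# One-level Haar transform: pairwise means form the approximation
# (long-range trend), pairwise half-differences the detail (short bursts).
approx = [(traffic[i] + traffic[i + 1]) / 2 for i in range(0, len(traffic), 2)]
detail = [(traffic[i] - traffic[i + 1]) / 2 for i in range(0, len(traffic), 2)]

def ar1_forecast(series):
    """Fit x[t] = phi * x[t-1] by least squares and forecast one step."""
    num = sum(series[t - 1] * series[t] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    phi = num / den
    return phi * series[-1]

# Forecast each component separately, then recombine: under the inverse
# Haar transform the next pair is (approx + detail, approx - detail).
a_next = ar1_forecast(approx)
d_next = ar1_forecast(detail)
forecast_pair = (a_next + d_next, a_next - d_next)
```

The component-wise modeling is the key point: the smooth approximation and the bursty detail each get a model matched to their correlation structure.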

Keywords: spatial information network, traffic prediction, wavelet decomposition, time series model

Procedia PDF Downloads 119
1101 Exploring the Impact of Additive Manufacturing on Supply Chains: A Game-Theoretic Analysis of Manufacturer-Retailer Dynamics

Authors: Mohammad Ebrahim Arbabian

Abstract:

This paper investigates the impact of 3D printing, also known as additive manufacturing, on a multi-item supply chain comprising a manufacturer and a retailer. Operating under a wholesale-price contract and catering to stochastic customer demand, this study delves into the largely unexplored question of how 3D printing technology reshapes supply chain dynamics. A distinguishing aspect of 3D printing is its versatility in producing various product types, yet its slower production pace compared to traditional methods poses a challenge. We analyze the trade-off between 3D printing's limited capacity and its enhancement of production flexibility. By delineating the economic circumstances favoring 3D printing adoption by the manufacturer, we establish the Stackelberg equilibrium in the retailer-manufacturer game. Additionally, we determine optimal order quantities for the retailer considering 3D printing as an option for the manufacturer, ascertain optimal wholesale prices in the presence of 3D printing, and compute optimal profits for both parties in the supply chain.
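The leader-follower logic can be illustrated numerically. The sketch below assumes uniformly distributed demand and illustrative price and cost figures (not the paper's model or data): the retailer's newsvendor best response is computed first, and the manufacturer then grid-searches the wholesale price while anticipating that response:

```python
# Hypothetical parameters: retail price, unit production cost, and the
# upper bound of uniform demand U[0, D]. Illustrative assumptions only.
p, c, D = 10.0, 2.0, 100.0

def retailer_best_response(w):
    """Newsvendor with U[0, D] demand: order up to the critical fractile."""
    return D * max(0.0, 1.0 - w / p)

def manufacturer_profit(w):
    return (w - c) * retailer_best_response(w)

# Stackelberg leader: the manufacturer anticipates the follower's reaction
# and picks the wholesale price; here the analytic optimum is (p + c) / 2.
w_star = max((c + i * (p - c) / 1000 for i in range(1001)),
             key=manufacturer_profit)
q_star = retailer_best_response(w_star)
```

Backward induction is the essential structure: the follower's best response is solved first and substituted into the leader's objective.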

Keywords: additive manufacturing, supply chain management, contract theory, Stackelberg game, optimization

Procedia PDF Downloads 28
1100 Path Integrals and Effective Field Theory of Large Scale Structure

Authors: Revant Nayar

Abstract:

In this work, we recast the equations describing large scale structure, and by extension all nonlinear fluids, in the path integral formalism. We first calculate the well known two and three point functions using the Schwinger-Keldysh formalism commonly used to perturbatively solve path integrals in non-equilibrium systems. Then we include EFT corrections due to pressure, viscosity, and noise as effects on the time-dependent propagator. We are able to express results for arbitrary two and three point correlation functions in LSS in terms of differential operators acting on a triple-K master integral. We also, for the first time, obtain analytical results for more general initial conditions deviating from the usual power law P∝kⁿ by introducing a mass scale in the initial conditions. This robust field theoretic formalism empowers us with tools from strongly coupled QFT, such as the OPE and holographic duals, to study the strongly non-linear regime of LSS and turbulent fluid dynamics. These could be used to capture fully the strongly non-linear dynamics of fluids and move towards solving the open problem of classical turbulence.

Keywords: quantum field theory, cosmology, effective field theory, renormalisation

Procedia PDF Downloads 110
1099 Effect of Thermal Radiation and Chemical Reaction on MHD Flow of Blood in Stretching Permeable Vessel

Authors: Binyam Teferi

Abstract:

In this paper, a theoretical analysis of blood flow in the presence of thermal radiation and chemical reaction under the influence of a time-dependent magnetic field intensity has been carried out. The unsteady nonlinear partial differential equations of blood flow consider a time-dependent stretching velocity, the energy equation accounts for a time-dependent temperature of the vessel wall, and the concentration equation includes a time-dependent blood concentration. The governing nonlinear partial differential equations of motion, energy, and concentration are converted into ordinary differential equations using similarity transformations and solved numerically with MATLAB's ode45 solver. The effects of the physical parameters, viz. the permeability parameter, unsteadiness parameter, Prandtl number, Hartmann number, thermal radiation parameter, chemical reaction parameter, and Schmidt number, on the flow variables, viz. the velocity, temperature, and concentration of blood in the vessel, are analyzed and discussed graphically. From the simulation study, the following important results are obtained: the velocity of blood flow increases with increments in both the permeability and unsteadiness parameters; the temperature of the blood at the vessel wall increases as the Prandtl and Hartmann numbers increase; and the concentration of the blood decreases as the time-dependent chemical reaction parameter and the Schmidt number increase.
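The numerical pipeline (similarity-transformed ODEs integrated with ode45) can be illustrated with a toy stand-in. The single ODE below, u' = -M²u, merely mimics magnetic damping of a velocity profile and is not the paper's governing system; a fixed-step Runge-Kutta 4 integrator plays the role of ode45:

```python
def rk4(f, y0, eta_max, n):
    """Classical fixed-step RK4 over [0, eta_max] with n steps."""
    h, y, t = eta_max / n, y0, 0.0
    ys = [y0]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        ys.append(y)
    return ys

M = 2.0   # Hartmann number (assumed value, illustrative only)
# Toy similarity ODE in the similarity variable eta: u' = -(M^2) * u.
# Larger M -> stronger Lorentz-type damping -> faster velocity decay.
profile = rk4(lambda t, u: -(M ** 2) * u, 1.0, 1.0, 100)
```

The actual similarity equations form a coupled boundary value problem; a full treatment would combine an integrator like this with a shooting or collocation scheme.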

Keywords: stretching velocity, similarity transformations, time dependent magnetic field intensity, thermal radiation, chemical reaction

Procedia PDF Downloads 62
1098 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers

Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin

Abstract:

Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). They are mathematically verified and illustrated in this paper by the arrangement of all integers onto 3 columns, where each number exists as a difference in relation to another number as another difference, with the two differences arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified either as 3n, or as '3n – 1' or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted to the mathematic zero, 0m), which corresponds to the constant c, and the nature of which separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it with factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent all the primes, and 3) families of composite numbers, each represented by a formula.
A composite number family is described as 3n + f₁‧f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime, we have to divide it by several or many factors f₁ from a range of composite number formulas, a procedure that is as laborious as it is the surest way to verify a great number's primality. (In this way, planned division can substitute for trial division.)
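The three-column arrangement, and the fact that every prime above 3 must lie in the 3n − 1 or 3n + 1 columns, can be sketched as follows. The trial-division loop over 6k ± 1 candidates is a conventional stand-in for the authors' planned division over composite number formulas:

```python
def column(n):
    """Horizontal specification: which of the 3 columns holds n."""
    return {0: "3n", 1: "3n + 1", 2: "3n - 1"}[n % 3]

def is_prime(n):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False          # even, or lies in the 3n column
    f = 5                     # only candidates of form 6k - 1 and 6k + 1
    while f * f <= n:
        if n % f == 0 or n % (f + 2) == 0:
            return False
        f += 6
    return True

primes_up_to_30 = [n for n in range(2, 31) if is_prime(n)]
```

Restricting trial divisors to the 6k ± 1 residues already skips two thirds of all candidates, which is the same spirit as dividing only by factors the composite formulas can supply.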

Keywords: trinary affinity, difference, similarity, realistic zero

Procedia PDF Downloads 183
1097 A Recommender System Fusing Collaborative Filtering and User’s Review Mining

Authors: Seulbi Choi, Hyunchul Ahn

Abstract:

The collaborative filtering (CF) algorithm has been widely used in recommender systems in both academic and practical applications. It basically generates recommendation results using users' numeric ratings. However, the additional use of information other than user ratings may lead to better accuracy of CF. Considering that, since the advent of Web 2.0, many people are likely to share honest opinions on the items they have recently purchased, users' reviews can be regarded as a new informative source for accurately identifying user preferences. Against this background, this study presents a hybrid recommender system that fuses CF and user review mining. Our system adopts conventional memory-based CF, but it is designed to use both a user's numeric ratings and his or her text reviews on the items when calculating similarities between users.
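The fused similarity can be sketched as a weighted blend of a rating-based cosine similarity and a simple review-text similarity. Word overlap (Jaccard) stands in for the paper's review-mining step, and the users, items, and weight alpha are illustrative assumptions:

```python
import math

# Toy data: numeric ratings and free-text reviews for two users.
ratings = {"alice": {"it1": 5, "it2": 3, "it3": 4},
           "bob":   {"it1": 4, "it2": 2, "it3": 5}}
reviews = {"alice": "great battery sleek design",
           "bob":   "great design but heavy"}

def cosine(u, v):
    """Memory-based CF similarity on co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    nu = math.sqrt(sum(r * r for r in ratings[u].values()))
    nv = math.sqrt(sum(r * r for r in ratings[v].values()))
    return dot / (nu * nv)

def jaccard(u, v):
    """Crude review similarity: word-set overlap."""
    a, b = set(reviews[u].split()), set(reviews[v].split())
    return len(a & b) / len(a | b)

def fused_similarity(u, v, alpha=0.7):
    """Blend of rating-based and review-based similarity."""
    return alpha * cosine(u, v) + (1 - alpha) * jaccard(u, v)

sim = fused_similarity("alice", "bob")
```

With alpha = 1.0 the system degenerates to plain memory-based CF, which makes the contribution of the review channel easy to isolate in experiments.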

Keywords: recommender system, collaborative filtering, text mining, review mining

Procedia PDF Downloads 321
1096 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis

Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei

Abstract:

Single cell RNA sequencing (scRNA-seq) is one of the effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, the process of scRNA-seq follows a multivariate Bernoulli event model, so we hypothesize that it is more efficient to value the divergence between cells with relative entropy than with Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance and relative entropy using scRNA-seq data from the early, medial and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm modifying t-SNE by replacing its measure of divergence between cells from Euclidean distance to Kullback–Leibler divergence. Results showed that KL-SNE was more effective at dissecting cell heterogeneity than t-SNE, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together by KL-SNE but not by t-SNE. Surprisingly, cells in the early stage were surrounded by cells in the medial stage in the KL-SNE embedding, while medial-stage cells neighbored late-stage cells in the t-SNE embedding. These results parallel the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which could also be verified with the analysis of scRNA-seq data from another study on human embryo development. It is therefore also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and facilitate subsequent statistical processing. Thus, relative entropy is potentially a better way to determine the divergence of cells in scRNA-seq data analysis.
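The core comparison can be sketched on toy count vectors: Euclidean distance versus Kullback–Leibler divergence on normalized, probability-like profiles (with a small pseudocount to avoid log(0)). The counts are illustrative, not real scRNA-seq data:

```python
import math

cell_a = [90, 5, 5, 0]    # hypothetical read counts per gene
cell_b = [80, 10, 5, 5]

def normalize(counts, eps=1e-9):
    """Turn counts into a probability-like profile; eps avoids log(0)."""
    total = sum(counts) + eps * len(counts)
    return [(c + eps) / total for c in counts]

def kl_divergence(p, q):
    """Relative entropy D(p || q); note it is asymmetric."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

p, q = normalize(cell_a), normalize(cell_b)
d_kl = kl_divergence(p, q)
d_eu = euclidean(p, q)
```

The asymmetry of the KL divergence is a practical consideration: D(p||q) penalizes mass that q places where p has almost none far more than the reverse direction does.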

Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE

Procedia PDF Downloads 319
1095 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and only use risk information. They include the Minimum Variance Strategy (MV strategy), the traditional (volatility-based) Risk Parity Strategy (SRP strategy), the Most Diversified Portfolio Strategy (MDP strategy) and, for many, the Equally Weighted Strategy (EW strategy). All the mentioned approaches are based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they use the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function, based on a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis, is used to serve either a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists of extending the achievement of the typical goals of risk-based approaches to a combined risk measure. To give the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, to be read as dispersion of returns around the mean; both preserve adherence to a symmetric framework and consideration of the entire returns distribution, but they differ in that the former captures the "normal", ordinary dispersion of returns, while the latter is able to capture extreme dispersion.
Therefore, a combined risk metric built from two individual metrics that focus on the same phenomenon but are differently sensitive to its intensity allows the asset manager to express, by varying the "relevance coefficient" associated with the individual metrics in the objective function, a wide set of plausible investment goals for the portfolio construction process, while serving investors differently concerned with tail risk and traditional risk. Since this is the first study to implement risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing ordering in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof of the "equalization effect" concerning marginal risks for the MV strategy and risk contributions for the SRP strategy. Regarding the validity of similar conclusions for the MK strategy and the KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
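The combined objective can be sketched on toy return series. The returns and the coefficient lam are illustrative assumptions, and a production implementation would work with covariance and cokurtosis matrices rather than raw return paths:

```python
import math

# Hypothetical return histories for two assets; illustrative only.
asset_returns = [[0.01, -0.02, 0.015, -0.01, 0.03],   # volatile asset
                 [0.002, 0.001, -0.001, 0.003, 0.0]]  # quiet asset

def portfolio_returns(weights):
    return [sum(w * r[t] for w, r in zip(weights, asset_returns))
            for t in range(len(asset_returns[0]))]

def combined_risk(weights, lam=0.5):
    """lam * volatility + (1 - lam) * fourth root of the fourth moment.

    Both terms are homogeneous of degree one in the weights, so the
    combination is as well, which is what the strategies above require.
    """
    rets = portfolio_returns(weights)
    mean = sum(rets) / len(rets)
    dev = [r - mean for r in rets]
    vol = math.sqrt(sum(d * d for d in dev) / len(dev))
    kurt_proxy = (sum(d ** 4 for d in dev) / len(dev)) ** 0.25
    return lam * vol + (1 - lam) * kurt_proxy

risk_concentrated = combined_risk([0.0, 1.0])   # all in the quiet asset
risk_balanced = combined_risk([0.5, 0.5])
```

Varying lam between 0 and 1 traces out the whole family of objectives, from pure kurtosis-based to pure volatility-based construction.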

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 43
1094 Generating Product Description with Generative Pre-Trained Transformer 2

Authors: Minh-Thuan Nguyen, Phuong-Thai Nguyen, Van-Vinh Nguyen, Quang-Minh Nguyen

Abstract:

Research on automatically generating descriptions for e-commerce products has gained increasing attention in recent years. However, the descriptions generated by existing systems are often less informative and attractive because of a lack of training data or the limitations of these approaches, which often rely on templates or statistical methods. In this paper, we explore a method to generate product descriptions using the GPT-2 model. In addition, we apply text paraphrasing and task-adaptive pretraining techniques to improve the quality of the descriptions generated by the GPT-2 model. Experimental results show that our models outperform the baseline model in both automatic and human evaluation. Notably, our methods achieve promising results not only on the seen test set but also on the unseen test set.

Keywords: GPT-2, product description, transformer, task-adaptive, language model, pretraining

Procedia PDF Downloads 174
1093 Formative Assessment of Creative Thinking Skills Embedded in Learning Through Play

Authors: Yigal Rosen, Garrett Jaeger, Michelle Newstadt, Ilia Rushkin, Sara Bakken

Abstract:

All children are capable of advancing their creative thinking skills and engaging in creative play. Creative play puts children in charge of exploring ideas, relationships, spaces and problems. Supported by the LEGO Foundation, the creative thinking formative assessment is designed to provide valid, reliable and informative measurement to support the development of creative skills while children are engaged in Learning through Play. In this paper, we provide an overview of the assessment framework underpinning the assessment of creative thinking and report results from a 2022 pilot study, which demonstrate promising evidence of the ability to measure creative skills in a conceptually and ecologically valid way that informs the development of creative skills.

Keywords: creativity, creative thinking, assessment, learning through play, creative play, learning progressions

Procedia PDF Downloads 100
1092 Deep Learning for Image Correction in Sparse-View Computed Tomography

Authors: Shubham Gogri, Lucia Florescu

Abstract:

Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth.
Furthermore, learning curves and qualitative comparisons provided further evidence of the enhanced image quality and the network's increased stability, while pixel value intensity was preserved. The experiments underscored the potential of deep learning frameworks for enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
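Two of the ingredients above, the Charbonnier loss used in training and the PSNR metric used in evaluation, can be sketched on toy "images" given as flat pixel lists in [0, 1]. The epsilon and data values are illustrative, and SSIM is omitted for brevity:

```python
import math

def charbonnier_loss(pred, target, eps=1e-3):
    """Smooth L1-like loss: sqrt(diff^2 + eps^2), averaged over pixels.

    Near zero it behaves like L2 (differentiable), for large errors like
    L1 (robust), which is why it is favored for image restoration.
    """
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)

def psnr(pred, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to target."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

ground_truth = [0.2, 0.4, 0.6, 0.8]
corrected    = [0.21, 0.39, 0.61, 0.79]   # hypothetical network output

loss = charbonnier_loss(corrected, ground_truth)
quality = psnr(corrected, ground_truth)
```

A uniform pixel error of 0.01 on a [0, 1] range gives a PSNR of about 40 dB, which puts the abstract's reported values up to 76 dB in perspective.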

Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net

Procedia PDF Downloads 124