Search results for: efficient features selection
1150 Predicting Daily Patient Hospital Visits Using Machine Learning
Authors: Shreya Goyal
Abstract:
The study aims to build user-friendly software to understand patient arrival patterns and compute the number of potential patients who will visit a particular health facility in a given period by using a machine learning algorithm. The underlying machine learning algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources. If there is a projected surge in patients, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times and ensure timely access to care for patients in need. Another big advantage of using this software is adherence to strict data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States: the hospital does not have to share the data with any third party or upload it to the cloud, because the software reads data locally from the machine. The data needs to be arranged in a particular format, and the software will be able to read the data and provide meaningful output. Using software that operates locally can facilitate compliance with these regulations by minimizing data exposure. Keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it in external servers. This can help maintain the confidentiality and integrity of sensitive patient information. Historical patient data is used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events.
The algorithm uses a supervised learning method to optimize the objective function and find the global minimum. The algorithm stores the value of the local minimum after each iteration and, at the end, compares all the local minima to find the global minimum. A strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better management planning of personnel and medical resources. Keywords: machine learning, SVM, HIPAA, data
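The abstract does not publish its implementation, but the core idea, a supervised regression model mapping arrival features to patient counts, can be sketched in pure Python. The sketch below trains a linear support vector regressor with an epsilon-insensitive loss by stochastic sub-gradient descent; the feature encoding (`encode`) and all hyperparameters are hypothetical illustrations, not taken from the study:

```python
import random

def encode(age, hour, weekday, season, local_event):
    """Encode the abstract's input variables as a numeric feature vector.
    (Hypothetical encoding; the study does not specify one.)"""
    return [age / 100.0, hour / 23.0, weekday / 6.0, season / 3.0, float(local_event)]

def train_svr(X, y, epochs=200, lr=0.01, C=1.0, eps=0.5):
    """Linear epsilon-insensitive SVR fitted by stochastic sub-gradient descent."""
    random.seed(0)
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            pred = sum(wj * xj for wj, xj in zip(w, X[i])) + b
            err = pred - y[i]
            # sub-gradient of the epsilon-insensitive loss: zero inside the tube
            g = 0.0 if abs(err) <= eps else (1.0 if err > 0 else -1.0)
            w = [wj - lr * (wj / C + g * xj) for wj, xj in zip(w, X[i])]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Predicted number of arrivals for one encoded feature vector."""
    return sum(wj * xj for wj, xj in zip(w, x)) + b
```

A real deployment would use a mature SVM library with a kernel and tune `C`, `eps`, and the learning rate on held-out data; the point here is only the shape of the pipeline (encode features, fit a regressor, predict counts).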
Procedia PDF Downloads 65
1149 Testing the Impact of the Nature of Services Offered on Travel Sites and Links on Traffic Generated: A Longitudinal Survey
Authors: Rania S. Hussein
Abstract:
Background: This study aims to determine the evolution of service provision by Egyptian travel sites and how these services change in terms of their level of sophistication over the period of the study, which is ten years. To the author’s best knowledge, this is the first longitudinal study that focuses on an extended time frame of ten years. Additionally, the study attempts to determine the popularity of these websites through the number of links to these sites. Links may be viewed as the online equivalent of a referral or word of mouth. Both popularity and the nature of the services provided by these websites are used to determine the traffic on these sites. In examining the nature of the services provided, the website itself is viewed as an overall service offering that is composed of different travel products and services. Method: This study uses content analysis in the form of a small-scale survey of 30 Egyptian travel agents’ websites to examine whether Egyptian travel websites are static or dynamic in terms of the services that they provide and whether they provide simple or sophisticated travel services. To determine the level of sophistication of these travel sites, the nature and composition of the products and services offered by these sites were first examined. A framework adapted from Kotler (1997), 'Five levels of a product', was used. The target group for this study consists of companies that do inbound tourism. Four rounds of data collection were conducted over a period of 10 years: two rounds in 2004 and two rounds in 2014. Data from the travel agents’ sites were collected over a two-week period in each of the four rounds. Besides collecting data on website features, data was also collected on the popularity of these websites through Alexa, a service that provides the traffic rank and number of links of each site.
Regression analysis was used to test the effect of links and services, as independent variables, on traffic as the dependent variable of this study. Findings: Results indicate that as companies moved from having simple websites with basic travel information to being more interactive, the number of visitors, illustrated by traffic, and the popularity of those sites, shown by the number of links, increased. Results also show that travel companies use the web much more for promotion than for distribution, since most travel agents use it basically for information provision. The results of this content analysis study tap an unexplored area and provide useful insights for marketers on how they can generate more traffic to their websites: by developing distinctive content on these sites and by focusing on the visibility of their sites, thus enhancing their popularity, i.e. the links to their sites. Keywords: levels of a product, popularity, travel, website evolution
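The regression described above (links and services as independent variables, traffic as the dependent variable) can be illustrated with a minimal ordinary-least-squares fit solved via the normal equations; the data and coefficients below are invented for illustration and are not the study's:

```python
def fit_ols(X, y):
    """Ordinary least squares for a small number of predictors, solved via
    the normal equations (X'X)beta = X'y with Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]          # prepend an intercept column
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting on the augmented matrix
    A = [row[:] + [v] for row, v in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (A[i][k] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta  # [intercept, coefficient per predictor]
```

With `X` holding (links, services-sophistication) pairs per site and `y` the observed traffic, `fit_ols(X, y)` returns the intercept and the two slope estimates that the study's regression would test for significance.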
Procedia PDF Downloads 320
1148 The Research of Hand-Grip Strength for Adults with Intellectual Disability
Authors: Haiu-Lan Chin, Yu-Fen Hsiao, Hua-Ying Chuang, Wei Lee
Abstract:
An adult with intellectual disability generally has insufficient physical activity, which is an important factor leading to premature weakness. Studies in recent years on frailty syndrome have accumulated substantial data about indicators of human aging, including unintentional weight loss, self-reported exhaustion, weakness, slow walking speed, and low physical activity. Of these indicators, hand-grip strength can be seen as a predictor of mortality, disability, complications, and increased length of hospital stay. Hand-grip strength in fact provides a comprehensive overview of one’s vitality. This research investigates the hand-grip strength of adults with intellectual disabilities in facilities, institutions and workshops. The participants are 197 male adults (M=39.09±12.85 years old) and 114 female adults (M=35.80±8.2 years old) so far. The aim of the study is to characterize their hand-grip strength and to initiate hand-grip training in their daily life, which will slow the weakening of their physical condition. Test items include weight, bone density, basal metabolic rate (BMR), and static body balance, in addition to hand-grip strength. Hand-grip strength was measured by a hand dynamometer and classified into a normal group (≥ 30 kg for males and ≥ 20 kg for females) and a weak group (< 30 kg for males, < 20 kg for females). The analysis includes descriptive statistics and indicators of grip strength for adults with intellectual disability. Though the research is still ongoing and the number of participants is increasing, the data indicate: (1) Hand-grip strength correlates with degree of intellectual disability (p ≤ .001), basal metabolic rate (p ≤ .001), and static body balance (p ≤ .01). Nevertheless, there is no longer a significant correlation between grip strength and basal metabolic rate, which had previously shown a significant correlation with hand-grip strength.
(2) The difference between male and female subjects in hand-grip strength is significant: the hand-grip strength of male subjects (25.70±12.81 kg) is much higher than that of female subjects (16.30±8.89 kg). Compared to their female counterparts, male participants show greater individual differences, and the proportion of weakness also differs between males and females. (3) The regression indicates that the main factors related to grip strength performance are, in order, degree of intellectual disability, height, static body balance, training, and weight. (4) There is a significant difference in both hand-grip strength and static body balance between participants in facilities and those in workshops. The study supports findings on sex and gender differences in health. Nevertheless, the average hand-grip strength of the left hand is higher than that of the right hand in both male and female subjects. Moreover, 71.3% of male subjects and 64.2% of female subjects perform better with their left hand, a distinctive feature especially among those with a low degree of intellectual disability. Keywords: adult with intellectual disability, frailty syndrome, grip strength, physical condition
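The grip-strength cut-offs reported above (normal ≥ 30 kg for males, ≥ 20 kg for females) translate directly into a small classification and summary routine; a sketch with made-up records, not the study's data:

```python
def classify_grip(sex, grip_kg):
    """Classify hand-grip strength using the study's cut-offs:
    'normal' is >= 30 kg for males and >= 20 kg for females."""
    cutoff = 30.0 if sex == "male" else 20.0
    return "normal" if grip_kg >= cutoff else "weak"

def summarize(records):
    """Per-sex mean grip strength and proportion classified as weak.
    `records` is a list of (sex, grip_kg) pairs."""
    out = {}
    for sex in ("male", "female"):
        grips = [g for s, g in records if s == sex]
        weak = sum(1 for g in grips if classify_grip(sex, g) == "weak")
        out[sex] = {"mean": sum(grips) / len(grips), "weak_ratio": weak / len(grips)}
    return out
```

Such a summary reproduces the kind of descriptive statistics the abstract reports (per-sex means and proportions of weakness).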
Procedia PDF Downloads 177
1147 Flipped Learning in Interpreter Training: Technologies, Activities and Student Perceptions
Authors: Dohun Kim
Abstract:
Technological innovations have stimulated flipped learning in many disciplines, including language teaching. It is a specific type of blended learning, which combines onsite (i.e. face-to-face) with online experiences to produce effective, efficient and flexible learning. Flipped learning literally ‘flips’ conventional teaching and learning activities upside down: it leverages technologies to deliver a lecture and direct instruction, as well as other asynchronous activities, outside the classroom so as to reserve onsite time for interaction and activities in the upper cognitive realms: applying, analysing, evaluating and creating. Unlike conventional flipped approaches, which focused on a video lecture followed by a face-to-face or onsite session, new innovative methods incorporate various means and structures to serve the needs of different academic disciplines and classrooms. In the light of such innovations, this study adopted ‘student-engaged’ approaches to interpreter training and contrasted them with traditional classrooms. To this end, students were encouraged to engage in asynchronous activities online, and innovative technologies, such as Telepresence, were employed. Based on the class implementation, a thorough examination was conducted of how we can structure and implement flipped classrooms for language and interpreting training while actively engaging learners. This study adopted a quantitative research method, complemented by a qualitative one. The key findings suggest, first, that the significance of the instructor’s role does not dwindle, but the role changes to that of a moderator and facilitator. Second, we can apply flipped learning to both theory- and practice-oriented modules. Third, students’ integration into the community of inquiry is of significant importance in fostering active and higher-order learning. Fourth, cognitive presence and competence can be enhanced through strengthened and integrated teaching and social presences.
Well-orchestrated teaching presence stimulates students to identify problems and voice convergences and divergences, while fluid social presence facilitates the exchange of knowledge and the adjustment of solutions, which eventually contributes to consolidating cognitive presence, a key ingredient that enables the application and testing of the solutions and reflection thereon. Keywords: blended learning, Community of Inquiry, flipped learning, interpreter training, student-centred learning
Procedia PDF Downloads 195
1146 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis. Traditionally, when we think about a composer’s sketches, the chances are that we are thinking in terms of the working out of detail, rather than the evolution of an overall concept. Since music is a 'time art', questions of form cannot be entirely detached from considerations of time. One could say that composers tend either to regard time as a space filled gradually and partly intuitively, or to look for a specific strategy to occupy it. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of 'form schemas', that is, often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser (SV) is a program used to study musical recordings. It is an open-source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools, which are designed with useful default parameters for musical analysis. Additionally, SV's Vamp plugin format supports analyses such as structural segmentation. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that 'traditional' music-analytic methods do not allow one to indicate interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main Contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. A characteristic that never ceased to be at the center of his thought and works was the quest for a new balance founded upon an acute connection between speculation and intuition.
In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the "connection scheme", which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a "form scheme", which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen was compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed both in terms of melodic/voice and timbre evolution. 5. Implications. The current study shows how musical structures have determined the musical surface. My general assumption is that, while listening to music, we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical-structure analysis can offer a very fruitful way of looking directly into certain structural features of music. Keywords: automated analysis, composer's strategy, Mikrophonie I, musical surface, Stockhausen
Procedia PDF Downloads 295
1145 Role of Community Youths in Conservation of Forests and Protected Areas of Bangladesh
Authors: Obaidul Fattah Tanvir, Zinat Ara Afroze
Abstract:
Communities living adjacent to forests and Protected Areas, especially in South Asian countries, commonly extract resources for their living and livelihoods. This extraction of resources, because of the way it is done, destroys the biophysical features of the area. Deforestation, wildlife poaching, illegal logging, unauthorized hill cutting etc. are some of the serious issues of concern for the sustainability of natural resources that have a direct impact on environment and climate as a whole. To ensure community involvement in conservation initiatives of the state, community-based forest management, commonly known as Co-management, has been in practice in six South Asian countries: India, Nepal, Sri Lanka, Pakistan, Bhutan and Bangladesh. Involving the community in forest management was first initiated in Bangladesh in 1979 and developed into an effective co-management approach through several paradigm shifts. This idea of Co-management has been institutionalized through a Government Order (GO) by the Ministry of Environment and Forests, Government of Bangladesh on November 23, 2009. This GO clearly defines the structure and functions of Co-management and its different bodies. The Bangladesh Forest Department has been working in association with communities to conserve and manage the forests and Protected Areas of Bangladesh following this legal document. Demographically, young people constitute the largest segment of the population in Bangladesh. This group, if properly sensitized, can produce valuable impacts on conservation initiatives, both by the community and the government. This study traced the major factors that motivate community youths to work effectively with different tiers of co-management organizations in the conservation of forests and Protected Areas of Bangladesh.
For the purpose of this study, 3 FGDs were conducted with 30 youths from the community living around the Protected Areas of Cox’s Bazar, in the south-east corner of Bangladesh, who are actively involved in Co-management organizations. KIIs were conducted with 5 key officials of the Forest Department stationed at Cox’s Bazar. 2 FGDs were conducted with the representatives of 7 Co-management organizations working in the Cox’s Bazar region, and the community outreach approaches used for forest conservation by 3 private organizations and projects were reviewed. Secondary literature was also reviewed on the history and evolution of Co-management in Bangladesh and the six South Asian countries. This study found that innovative community outreach activities that are financed by the public and private sectors and involve youths and the community as a whole have played a pivotal role in the conservation of the forests and Protected Areas of the region. This approach can be replicated in other regions of Bangladesh as well as in other countries of South Asia where Co-management exists in practice. Keywords: community, co-management, conservation, forests, protected areas, youth
Procedia PDF Downloads 280
1144 The Current Importance of the Rules of Civil Procedure in the Portuguese Legal Order: Between Legalism and Adequation
Authors: Guilherme Gomes, Jose Lebre de Freitas
Abstract:
The rules of Civil Procedure that are defined in the Portuguese Civil Procedure Code of 2013, particularly its articles 552 to 626, represent the model that the legislator thought would be most suitable for national civil litigation, from the moment the action is brought by the plaintiff to the moment the judgment is issued. However, procedural legalism is no longer a reality in Portuguese Civil Procedural Law. According to article 547 of the code of 2013, the civil judge has a duty to adopt the procedure that best suits the circumstances of the case, whether or not it is the one defined by law. The main goal of our paper is to answer the question of whether the formal adequation imposed by this article diminishes the importance of the Portuguese rules of Civil Procedure and their daily application by national civil judges. We will start by explaining the appearance of the abovementioned rules in the Civil Procedure Code of 2013. Then we will analyse, using specific examples drawn from the literature, how the legal procedure defined in the abovementioned code does not suit the circumstances of some specific cases and is totally inefficient in some situations. After that, using the data obtained in the practical research we are conducting in the Portuguese civil courts within the scope of our Ph.D. thesis (until now, we have been able to consult 150 civil lawsuits), we will verify whether and how judges and parties make the procedure more efficient and effective in the case sub judice.
In the scope of our research, we have already reached some preliminary findings: 1) despite the fact that the legal procedure does not suit the circumstances of some civil lawsuits, there are only two situations of frequent use of formal adequation (the judge allowing the plaintiff to respond to the procedural exceptions raised in the written defense, and the exemption from the prior hearing by judges who never summon it); 2) the other aspects of procedural adequation (anticipation of the production of expert evidence, waiving of oral argument at the final hearing, written allegations, dismissal of the order on the controversial facts, and the examination of witnesses at the domicile of one of the lawyers) are still little used; and 3) formal adequation tends to happen on the initiative of the judge, as plaintiffs and defendants are reluctant to enter into procedural agreements in most situations. In short, we can say that, in the Portuguese legal order of the 21st century, the flexibility of the legal procedure, as defined in the law and applied by procedural subjects, does not affect the importance of the rules of Civil Procedure of the code of 2013. Keywords: casuistic adequation, civil procedure code of 2013, procedural subjects, rules of civil procedure
Procedia PDF Downloads 128
1143 Attitude in Academic Writing (CAAW): Corpus Compilation and Annotation
Authors: Hortènsia Curell, Ana Fernández-Montraveta
Abstract:
This paper presents the creation, development, and analysis of a corpus designed to study the presence of attitude markers and the author’s stance in research articles in two different areas of linguistics (theoretical linguistics and sociolinguistics). These two disciplines are expected to behave differently in this respect, given the disparity in their discursive conventions. Attitude markers in this work are understood as the linguistic elements (adjectives, nouns and verbs) used to convey the writer's stance towards the content presented in the article, and they are crucial in understanding writer-reader interaction and the writer's position. These attitude markers are divided into three broad classes: assessment, significance, and emotion. In addition to them, we also consider first-person singular and plural pronouns and possessives, modal verbs, and passive constructions, which are other linguistic elements expressing the author’s stance. The corpus, the Corpus of Attitude in Academic Writing (CAAW), comprises 21 articles collected from six journals indexed in JCR. These articles were originally written in English by a single native-speaker author from the UK or USA and were published between 2022 and 2023. The total number of words in the corpus is approximately 222,400, with 106,422 from theoretical linguistics journals (Lingua, Linguistic Inquiry and Journal of Linguistics) and 116,022 from sociolinguistics journals (International Journal of the Sociology of Language, Language in Society and Journal of Sociolinguistics). Together with the corpus, we present the tool developed for its compilation and storage, along with a tool for automatic annotation. The steps followed in the compilation of the corpus are as follows. First, the articles were selected according to the parameters explained above. Second, they were downloaded and converted to txt format.
Finally, examples, direct quotes, section titles and references were eliminated, since they do not involve the author’s stance. The resulting texts were the input for the annotation of the linguistic features related to stance. As for the annotation, two articles (one from each subdiscipline) were annotated manually by the two researchers. An existing list was used as a baseline, and other attitude markers were identified, together with the other elements mentioned above. Once a consensus was reached, the rest of the articles were annotated automatically using the tool created for this purpose. The annotated corpus will serve as a resource for scholars working in discourse analysis (both in linguistics and communication) and related fields, since it offers new insights into the expression of attitude. The tools created for the compilation and annotation of the corpus will be useful for studying the author’s attitude and stance in articles from any academic discipline: new data can be uploaded and the list of markers can be enlarged. Finally, the tool can be expanded to other languages, which will allow cross-linguistic studies of the author’s stance. Keywords: academic writing, attitude, corpus, English
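A minimal version of the lexicon-based annotation step described above might look as follows; the marker lists here are a tiny hypothetical stand-in for the CAAW lexicon, which the abstract does not publish:

```python
import re

# Hypothetical miniature lexicon: three attitude classes plus the two other
# stance devices the abstract mentions (first-person forms and modal verbs).
MARKERS = {
    "assessment": {"important", "remarkable", "adequate", "surprising"},
    "significance": {"crucial", "significant", "essential", "key"},
    "emotion": {"unfortunately", "hope", "striking", "interesting"},
}
FIRST_PERSON = {"i", "we", "my", "our", "us"}
MODALS = {"may", "might", "must", "should", "could", "can"}

def annotate(text):
    """Tag each token with its stance category, or None if it carries none."""
    tagged = []
    for tok in re.findall(r"[a-z']+", text.lower()):
        cat = None
        for name, lex in MARKERS.items():
            if tok in lex:
                cat = name
                break
        if cat is None and tok in FIRST_PERSON:
            cat = "first_person"
        elif cat is None and tok in MODALS:
            cat = "modal"
        tagged.append((tok, cat))
    return tagged

def counts(tagged):
    """Frequency of each stance category in an annotated text."""
    out = {}
    for _, cat in tagged:
        if cat:
            out[cat] = out.get(cat, 0) + 1
    return out
```

The real tool would work from the full consensus marker list and also detect passive constructions, which need syntactic analysis rather than token lookup; this sketch shows only the token-level lookup step.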
Procedia PDF Downloads 72
1142 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy, as well as facilitate the archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring becomes a dominant method for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page. Keywords: classification, data mining, Instagram, remote monitoring, sharks
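The genus-then-species prediction described above can be illustrated by chaining the two classifier stages probabilistically; this is a sketch of one plausible way to combine the stages (P(species) = P(genus) × P(species | genus)), not necessarily the Shark Detector's internal logic, and the probabilities below are invented:

```python
def species_posterior(genus_probs, species_given_genus):
    """Combine a genus-level distribution with per-genus species
    distributions into one flat species distribution."""
    post = {}
    for genus, p_g in genus_probs.items():
        for species, p_s in species_given_genus.get(genus, {}).items():
            post[species] = p_g * p_s
    return post

def top_prediction(post):
    """Return the (species, probability) pair with the highest posterior."""
    return max(post.items(), key=lambda kv: kv[1])
```

In a full pipeline, `genus_probs` would come from the genus classifier's softmax output on a detected shark crop, and each `species_given_genus` entry from the corresponding genus-specific species classifier.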
Procedia PDF Downloads 120
1141 Energy Storage Modelling for Power System Reliability and Environmental Compliance
Authors: Rajesh Karki, Safal Bhattarai, Saket Adhikari
Abstract:
Reliable and economic operation of power systems is becoming extremely challenging with the large-scale integration of renewable energy sources, due to the intermittency and uncertainty associated with renewable power generation. It is, therefore, important to make a quantitative risk assessment and explore the potential resources to mitigate such risks. Probabilistic models for different energy storage systems (ESS), such as the flywheel energy storage system (FESS) and compressed air energy storage (CAES), incorporating specific charge/discharge performance and failure characteristics suitable for probabilistic risk assessment in power system operation and planning, are presented in this paper. The proposed methodology used in FESS modelling offers flexibility to accommodate different configurations of plant topology. It is perceived that CAES has a high potential for grid-scale application, and a hybrid approach is proposed, which embeds a Monte Carlo simulation (MCS) method in an analytical technique to develop a suitable reliability model of the CAES. The proposed ESS models are applied to a test system to investigate the economic and reliability benefits of the energy storage technologies in system operation and planning, as well as to assess their contributions in facilitating wind integration during different operating scenarios and system configurations. A comparative study considering various storage system topologies is also presented. The impacts of the failure rates of the critical components of ESS on the expected state of charge (SOC) and the performance of the different types of ESS during operation are illustrated with selected studies on the test system.
The conclusions drawn from the study results provide valuable information to help policymakers, system planners, and operators in arriving at effective and efficient policies, investment decisions, and operating strategies for planning and operation of power systems with large penetrations of renewable energy sources. Keywords: flywheel energy storage, compressed air energy storage, power system reliability, renewable energy, system planning, system operation
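The hybrid approach described above embeds Monte Carlo simulation in an analytical technique; the contrast between the two can be illustrated on the simplest possible case, a single two-state repairable component with constant failure rate `lam` and repair rate `mu` (the rates and horizon below are illustrative, not values from the paper):

```python
import random

def analytical_availability(lam, mu):
    """Steady-state availability of a two-state repairable component:
    A = mu / (lam + mu)."""
    return mu / (lam + mu)

def mcs_availability(lam, mu, horizon, seed=0):
    """Sequential Monte Carlo: sample exponential up/down durations and
    measure the fraction of the horizon spent in the up state."""
    rng = random.Random(seed)
    t, up_time, state_up = 0.0, 0.0, True
    while t < horizon:
        rate = lam if state_up else mu   # time to next failure or repair
        dur = min(rng.expovariate(rate), horizon - t)
        if state_up:
            up_time += dur
        t += dur
        state_up = not state_up
    return up_time / horizon
```

For a full ESS model, the simulated state history would also drive a state-of-charge calculation (charge when up and surplus energy is available, discharge on demand), which is what makes the sequential MCS approach attractive for CAES despite its computational cost.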
Procedia PDF Downloads 130
1140 Strategic Metals and Rare Earth Elements Exploration of Lithium Cesium Tantalum Type Pegmatites: A Case Study from Northwest Himalayas
Authors: Auzair Mehmood, Mohammad Arif
Abstract:
The LCT (Li, Cs and Ta rich)-type pegmatites, genetically related to peraluminous S-type granites, are being mined for strategic metals (SMs) and rare earth elements (REEs) around the world. This study investigates the SM and REE potential of pegmatites that are spatially associated with an S-type granitic suite of the Himalayan sequence, specifically the Mansehra Granitic Complex (MGC), northwest Pakistan. Geochemical signatures of the pegmatites and some of their mineral extracts were analyzed using the Inductively Coupled Plasma Mass Spectrometry (ICP-MS) technique to explore and generate potential prospects (if any) for SMs and REEs. In general, the REE patterns of the studied whole-rock pegmatite samples show a tetrad effect and possess low total REE abundances, strong positive europium (Eu) anomalies, weak negative cesium (Cs) anomalies and relative enrichment in heavy REE. Similar features are observed in the REE patterns of the feldspar extracts. However, the REE patterns of the muscovite extracts reflect preferential enrichment and possess negative Eu anomalies. The trace element evaluation further suggests that the MGC pegmatites have undergone low levels of fractionation. Various trace element concentrations (and their ratios), including Ta versus Cs, K/Rb (potassium/rubidium) versus Rb, and Th/U (thorium/uranium) versus K/Cs, were used to analyze the economically viable mineral potential of the studied rocks. On most of the plots, concentrations fall below the dividing line, indicating either barren or low-level mineralization potential of the studied rocks for both SMs and REEs. The results demonstrate the paucity of the MGC pegmatites with respect to Ta-Nb (tantalum-niobium) mineralization, which is in sharp contrast to many Pan-African S-type granites around the world. The MGC pegmatites are classified as muscovite pegmatites based on their K/Rb versus Cs relationship.
This classification is consistent with the occurrence of rare accessory minerals like garnet, biotite, tourmaline, and beryl. Furthermore, the classification corroborates an earlier sorting of the MGC pegmatites into muscovite-bearing, biotite-bearing, and subordinate muscovite-biotite types. These types of pegmatites lack any significant SM and REE mineralization potential. Field relations, such as the close spatial association with the parent granitic rocks and the absence of internal zonation structure, also reflect the barren character and hence the lack of any potential prospects of the MGC pegmatites. Keywords: exploration, fractionation, Himalayas, pegmatites, rare earth elements
Procedia PDF Downloads 203
1139 AI Predictive Modeling of Excited State Dynamics in OPV Materials
Authors: Pranav Gunhal, Krish Jhurani
Abstract:
This study tackles the significant computational challenge of predicting excited state dynamics in organic photovoltaic (OPV) materials—a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from TDDFT-calculated values. The model also correctly identified key electronic transitions contributing to the excited state dynamics in 92% of the test cases, signifying a substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. 
The implementation of this AI model is estimated to reduce the computational cost of predicting excited state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions.
Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling
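The evaluation protocol described above can be sketched as follows; this is a minimal illustration in Python with toy numbers (not data from the study), computing the two reported metrics: the mean absolute error of predicted lifetimes and the fraction of correctly identified key transitions.

```python
import numpy as np

def evaluate_predictions(pred_lifetimes, tddft_lifetimes,
                         pred_transitions, tddft_transitions):
    """Compare model predictions against reference TDDFT results.

    Returns the mean absolute error of the excited-state lifetimes (ps)
    and the fraction of molecules whose key electronic transition was
    correctly identified.
    """
    mae = np.mean(np.abs(np.asarray(pred_lifetimes)
                         - np.asarray(tddft_lifetimes)))
    hit_rate = np.mean([p == t for p, t in
                        zip(pred_transitions, tddft_transitions)])
    return mae, hit_rate

# Toy data, purely illustrative
mae, hits = evaluate_predictions([1.2, 0.8, 2.1], [1.0, 0.9, 2.0],
                                 ["S1->S0", "S2->S1", "S1->S0"],
                                 ["S1->S0", "S1->S0", "S1->S0"])
```

The study's 0.15 ps figure is exactly this MAE computed over its 1,000-molecule test set.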
Procedia PDF Downloads 116
1138 Using Corpora in Semantic Studies of English Adjectives
Authors: Oxana Lukoshus
Abstract:
The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research project: the author usually starts with the analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, three polysemantic synonyms, true, loyal and faithful, have been analyzed in terms of differences and similarities in their semantic structure. A corpus-based approach to the study of these adjectives involves the following. After the analysis of the dictionary data, the distributional patterns of the words under study were examined in two corpora, the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study there were no special requirements regarding genre, mode or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas, e.g. true, true to, and grouping the results by lemmas have proved to be the most efficient corpus features for the adjectives under study.
Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately were left out, as in the first example the faithful is a substantivized adjective and in the second example true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable and convenient tool for obtaining data for further semantic study.
Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies
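The co-occurrence extraction step described above can be sketched as follows; this is a minimal illustration with a naive tokenizer and a hypothetical context window, not the corpus software actually used.

```python
import re
from collections import Counter

def cooccurrences(sentences, lemma, window=3):
    """Count the words appearing within `window` tokens of `lemma`."""
    counts = Counter()
    for sent in sentences:
        tokens = re.findall(r"[a-z']+", sent.lower())
        for i, tok in enumerate(tokens):
            if tok == lemma:
                lo, hi = max(0, i - window), i + window + 1
                # enumerate from `lo` so j stays an absolute index
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo)
                              if j != i)
    return counts

counts = cooccurrences(["He stayed true to his word.",
                        "A true friend stayed loyal."], "true")
```

A real study would lemmatize the context words and filter out irrelevant hits, as described above, before classifying the remainder.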
Procedia PDF Downloads 313
1137 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. 
In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach and motivate a machine learning approach based on reinforcement learning to relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
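The core Bayesian update described above, a posterior over the pre-determined targets refined by each camera detection, can be sketched as follows; the likelihood values are hypothetical placeholders for whatever detection model the system actually uses.

```python
import numpy as np

def update_posterior(prior, likelihoods):
    """One Bayesian update: posterior ∝ prior × P(detection | target)."""
    post = np.asarray(prior, float) * np.asarray(likelihoods, float)
    return post / post.sum()

# Three candidate targets with a uniform prior; each camera detection
# contributes one likelihood per target (illustrative numbers only).
posterior = np.array([1 / 3, 1 / 3, 1 / 3])
for lik in ([0.7, 0.2, 0.1], [0.8, 0.15, 0.05]):
    posterior = update_posterior(posterior, lik)
```

A soft intervention changes the road graph, which in turn changes the likelihood each target assigns to subsequent detections, sharpening this posterior faster.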
Procedia PDF Downloads 114
1136 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities for reaching them. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, age greater than 55, and other factors are significant for business trips. Non-business travelers, on the other hand, prioritize using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified. The district generating the most trips in Tehran was District 2, with 23 trips, and the most popular mode of transportation from that location was the online taxi, with 12 trips. Then, the variables significant in separating and explaining the choice of travel mode to the airport were investigated for all systems. In this scenario, the most crucial factor is the time it takes to get to the airport, followed by the method's user-friendliness as a component of passenger preference.
It has also been demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, the willingness of passengers to reach the airport via public transportation systems was explored in order to enhance present techniques and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
Keywords: multimodal transportation, demand modeling, travel behavior, statistical models
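The binary model mentioned above is presumably a binary logit; a minimal sketch with hypothetical coefficients (not the fitted values from the study) is:

```python
import math

def binary_logit(beta, x):
    """P(choose public transport) under a binary logit model:
    P = 1 / (1 + exp(-(beta · x)))."""
    v = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-v))

# Hypothetical coefficients: intercept, travel-time difference (min),
# cost difference, business-trip dummy.
beta = [0.5, -0.05, -0.01, -0.8]
p = binary_logit(beta, [1.0, 10.0, -20.0, 1.0])
```

With these illustrative signs, longer relative travel time and being a business traveler both lower the probability of switching to public transport, consistent with the findings above.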
Procedia PDF Downloads 173
1135 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting
Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade
Abstract:
The recent and fast development of the internet, wireless and telecommunication technologies and low-power electronic devices has led to a significant amount of electromagnetic energy available in the environment and to the expansion of smart-application technologies. These applications have been used in Internet of Things devices and in 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio frequency based energy harvesting technology is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the structure of the distributed host sensors, reducing their cost, maintenance and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into direct current voltage. In this work, a set of rectennas, mounted on a paper substrate, which can be used for the inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized by using CST simulation software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated the Electromagnetic Wall (EW).
In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very low consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.
Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit
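As a quick consistency check on the figures above, the DC voltage implied by 0.6 mW dissipated in the 27 kΩ load follows from P = V²/R:

```python
import math

def dc_voltage(power_w, load_ohm):
    """DC voltage across a resistive load for a given harvested power,
    from P = V^2 / R, i.e. V = sqrt(P * R)."""
    return math.sqrt(power_w * load_ohm)

# Parameters taken from the abstract: 0.6 mW into 27 kOhm.
v = dc_voltage(0.6e-3, 27e3)
```

So the reported 0.6 mW corresponds to roughly 4 V across the load, a plausible output for a voltage-doubler rectifier chain.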
Procedia PDF Downloads 165
1134 Oxidovanadium(IV) and Dioxidovanadium(V) Complexes: Efficient Catalyst for Peroxidase Mimetic Activity and Oxidation
Authors: Mannar R. Maurya, Bithika Sarkar, Fernando Avecilla
Abstract:
Peroxidase activity is successfully used in different industrial processes in medicine, the chemical industry, food processing and agriculture. However, natural peroxidases bear some intrinsic drawbacks associated with denaturation by proteases, their special storage requirements and their cost. Nowadays, artificial enzyme mimics are becoming a research interest because of their significant advantages over conventional organic enzymes: ease of preparation, low price and good stability in activity. They also overcome the drawbacks of natural enzymes, e.g. serine proteases. At present, a large number of artificial enzymes have been synthesized by assimilating a catalytic center into a variety of Schiff base complexes, ligand-anchored and supramolecular complexes, hematin, porphyrins and nanoparticles to mimic natural enzymes. In recent years, a number of vanadium complexes have been reported, reflecting a continuing increase in interest in bioinorganic chemistry. To the best of our knowledge, however, artificial enzyme mimics based on vanadium complexes remain little explored. Recently, our group has reported synthetic vanadium Schiff base complexes capable of mimicking peroxidases. Herein, we have synthesized oxidovanadium(IV) and dioxidovanadium(V) complexes of pyrazolone derivatives (extensively studied on account of their broad range of pharmacological applications). All these complexes are characterized by various spectroscopic techniques, including FT-IR, UV-visible, NMR (1H, 13C and 51V), elemental analysis, thermal studies and single-crystal analysis. The peroxidase mimic activity has been studied towards the oxidation of pyrogallol to purpurogallin with hydrogen peroxide at pH 7, followed by measurement of the kinetic parameters. The Michaelis-Menten behavior shows an excellent catalytic activity relative to the natural counterparts, e.g. V-HPO and HRP.
The obtained kinetic parameters (Vmax, kcat) were also compared with those of peroxidase and haloperoxidase enzymes, making these complexes promising peroxidase mimics. In addition, the catalytic activity has been studied towards the oxidation of 1-phenylethanol in the presence of H2O2 as an oxidant. Various parameters, such as the amounts of catalyst and oxidant, reaction time, reaction temperature and solvent, have been taken into consideration to maximize the oxidation products of 1-phenylethanol.
Keywords: oxidovanadium(IV)/dioxidovanadium(V) complexes, NMR spectroscopy, crystal structure, peroxidase mimic activity towards oxidation of pyrogallol, oxidation of 1-phenylethanol
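The Michaelis-Menten kinetics underlying the reported parameters can be sketched as follows; the numerical values are hypothetical, chosen only to illustrate that the rate equals Vmax/2 at [S] = Km.

```python
def michaelis_menten_rate(vmax, km, s):
    """Initial rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def kcat(vmax, enzyme_conc):
    """Turnover number kcat = Vmax / [E] (total catalyst concentration)."""
    return vmax / enzyme_conc

# Hypothetical values: at [S] = Km the rate is half of Vmax.
v = michaelis_menten_rate(vmax=2.0, km=0.5, s=0.5)
```

In practice Vmax and Km are fitted from initial-rate measurements at varying pyrogallol concentrations, and kcat is then compared against the natural enzymes mentioned above.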
Procedia PDF Downloads 339
1133 Exhaled Breath Condensate in Lung Cancer: A Non-Invasive Sample for Easier Mutations Detection by Next Generation Sequencing
Authors: Omar Youssef, Aija Knuuttila, Paivi Piirilä, Virinder Sarhadi, Sakari Knuutila
Abstract:
Exhaled breath condensate (EBC) is a unique sample that allows studying different genetic changes in lung carcinoma in a non-invasive way. With the aid of next generation sequencing (NGS) technology, analysis of genetic mutations has become more efficient, with increased sensitivity for the detection of genetic variants. In order to investigate the possibility of applying this method for cancer diagnostics, mutations in EBC DNA from lung cancer patients and healthy individuals were studied by using NGS. The key aim is to assess the feasibility of using this approach to detect clinically important mutations in EBC. EBC was collected from 20 healthy individuals and 9 lung cancer patients (four lung adenocarcinomas, four squamous cell carcinomas, and one case of mesothelioma). Mutations in hotspot regions of 22 genes were studied by using the AmpliSeq Colon and Lung Cancer panel and sequenced on the Ion PGM. Results demonstrated that all nine patients showed a total of 19 COSMIC mutations in APC, BRAF, EGFR, ERBB4, FBXW7, FGFR1, KRAS, MAP2K1, NRAS, PIK3CA, PTEN, RET, SMAD4, and TP53. In controls, 15 individuals showed 35 COSMIC mutations in BRAF, CTNNB1, DDR2, EGFR, ERBB2, FBXW7, FGFR3, KRAS, MET, NOTCH1, NRAS, PIK3CA, PTEN, SMAD4, and TP53. Additionally, 45 novel mutations not reported previously were seen in patients’ samples, and 106 novel mutations were seen in controls’ specimens. The KRAS exon 2 mutation G12D was identified in one control specimen with a mutant allele fraction of 6.8%, while the KRAS G13D mutation seen in one patient sample showed a mutant allele fraction of 17%. These findings illustrate that hotspot mutations are present in DNA from the EBC of both cancer patients and healthy controls. As some of the COSMIC mutations were seen in controls too, no firm conclusion can be drawn on the clinical importance of COSMIC mutations in patients.
Mutations reported in controls could represent early neoplastic changes or the normal homeostatic process of apoptosis occurring in lung tissue to eliminate mutant cells. At the same time, mutations detected in patients might represent a non-invasive, easily accessible way for early cancer detection. Follow-up of individuals with important cancer mutations is necessary to clarify the significance of these mutations in both healthy individuals and cancer patients.
Keywords: exhaled breath condensate, lung cancer, mutations, next generation sequencing
Procedia PDF Downloads 175
1132 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the subsequent digital footprints that exist, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tool support. It is also increasingly essential to compare and correlate evidence across data sources, and to do so efficiently and effectively, enabling an investigator to answer high-level questions of the data in a timely manner without having to trawl through it and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used.
For example, in a child abduction, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool
Procedia PDF Downloads 194
1131 Mathematical Modeling for Continuous Reactive Extrusion of Poly Lactic Acid Formation by Ring Opening Polymerization Considering Metal/Organic Catalyst and Alternative Energies
Authors: Satya P. Dubey, Hrushikesh A Abhyankar, Veronica Marchante, James L. Brighton, Björn Bergmann
Abstract:
Aims: To develop a mathematical model that simulates the ROP of PLA, taking into account the effect of alternative energies, to be implemented in a continuous reactive extrusion production process of PLA. Introduction: The production of large amounts of waste is one of the major challenges at the present time, and polymers represent 70% of global waste. PLA has emerged as a promising polymer, as it is a compostable, biodegradable thermoplastic made from renewable sources. However, the main limitation for the application of PLA is the traces of toxic metal catalyst in the final product. Thus, a safe and efficient production process needs to be developed to avoid the potential hazards and toxicity. It has been found that alternative energy sources (laser, ultrasound, microwaves) could be a prominent option to facilitate the ROP of PLA via continuous reactive extrusion. This process may result in complete removal of the metal catalysts and facilitate the use of less active organic catalysts. Methodology: Initial investigations were performed using the data available in the literature for the reaction mechanism of the ROP of PLA based on the conventional metal catalyst stannous octoate. A mathematical model has been developed by considering significant parameters such as different initial concentration ratios of catalyst, co-catalyst and impurity. The effects of temperature variation and alternative energies have been implemented in the model. Results: The mathematical model has been validated using data from the literature as well as actual experiments. Validation of the model including alternative energies is in progress, based on experimental data from partners of the InnoREX project consortium. Conclusion: The model developed accurately reproduces the polymerization reaction when applying alternative energy. Alternative energies have a strong positive effect, increasing the conversion and molecular weight of the PLA.
This model could be a very useful tool to complement the Ludovic® software in predicting the large-scale production process when using reactive extrusion.
Keywords: polymer, poly-lactic acid (PLA), ring opening polymerization (ROP), metal catalyst, biodegradable, renewable source, alternative energy (AE)
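A drastically simplified sketch of the kind of kinetic model involved is shown below: a single monomer-consumption rate equation integrated numerically. The real model includes initiation, reversibility, transesterification and alternative-energy terms, and the rate constant and concentrations here are purely illustrative.

```python
def simulate_rop(k_p, cat_conc, m0, dt, steps):
    """Minimal monomer-consumption model for coordination-insertion ROP:
    d[M]/dt = -k_p * [C] * [M], integrated with explicit Euler.
    Returns the monomer conversion after `steps` time steps."""
    m = m0
    for _ in range(steps):
        m += -k_p * cat_conc * m * dt  # Euler step
    return 1.0 - m / m0

# Hypothetical rate constant and concentrations (illustrative only).
conv = simulate_rop(k_p=1.0, cat_conc=0.01, m0=1.0, dt=1.0, steps=100)
```

In the full model, temperature and the alternative-energy input would modulate k_p (e.g. through an Arrhenius term), which is how the reported conversion gains would enter.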
Procedia PDF Downloads 361
1130 Isolation, Identification and Measurement of Cottonseed Oil Gossypol in the Treatment of Drug-Resistant Cutaneous Leishmaniasis
Authors: Sara Taghdisi, Mehrosadat Mirmohammadi, Mostafa Mokhtarian, Mohammad Hossein Pazandeh
Abstract:
Leishmaniasis is among the 10 most important diseases identified by the World Health Organization, posing health problems in more than 90 countries. Over one billion people are at risk of these diseases on almost every continent. The present human study was performed to evaluate the therapeutic effect of the cotton plant on cutaneous leishmaniasis lesions. Firstly, the cotton seeds were cleaned and ground into smaller particles. In the second step, oil was extracted from the seeds by the cold press method. In order to separate the bioactive compound, the oil was saponified, and its gossypol was hydrolyzed and crystallized. Finally, the therapeutic effect of cottonseed oil on cutaneous leishmaniasis was investigated. In the current project, gossypol was extracted by a liquid-liquid extraction method in 120 minutes in the presence of phosphoric acid from the cottonseed oil of Golestan beach varieties, then crystallized in darkness using acetic acid and isolated as gossypol acetic acid. The yield of the extracted crystals was 1.28 ± 0.12. The cotton plant could thus be efficient in the treatment of cutaneous leishmaniasis. This double-blind randomized controlled clinical trial was performed on 88 cases of leishmaniasis wounds. Patients were randomly divided into two groups of 44 cases, and both groups received conventional treatment (glucantime). In addition to the usual treatment, the first group received cottonseed oil and the control group received a placebo. The results of the present study showed that the lesion surface before the intervention and in the first to fourth weeks after the intervention was not significantly different between the two groups (P-value > 0.05), but the lesion surface in the intervention group in the eighth and twelfth weeks was lower than in the control group (P-value < 0.05).
This study showed that the improvement of the leishmaniasis lesion using the topical cotton plant preparation in the eighth and twelfth weeks after the intervention was significantly greater than in the control group. The most common chemical drugs for cutaneous leishmaniasis treatment are sodium stibogluconate and meglumine antimonate, which not only have relatively many side effects but also face resistance from some species of the Leishmania genus. Therefore, a plant-based bioactive compound such as cottonseed oil can be useful, with fewer side effects.
Keywords: cottonseed oil, crystallization, gossypol, leishmaniasis
Procedia PDF Downloads 59
1129 Phytoremediation of Heavy Metals by the Perennial Tussock Chrysopogon Zizanioides Grown on Zn and Cd Contaminated Soil Amended with Biochar
Authors: Dhritilekha Deka, Deepak Patwa, Ravi K., Archana M. Nair
Abstract:
Bioaccumulation of heavy metal contaminants due to intense anthropogenic interference degrades the environment and ecosystem functions. Conventional physicochemical remediation methods are energy-intensive and costly. Phytoremediation, on the other hand, provides an efficient nature-based strategy for the reclamation of heavy metal-contaminated sites. However, the slow process and the need to adapt to high-concentration contaminant sequestration often limit the efficiency of the method. This necessitates natural amendments such as biochar to improve phytoextraction and stabilize the green cover. Biochar is a highly porous material with high carbon sequestration potential, containing negatively charged functional groups that provide binding sites for positively charged metals. This study aims to determine the synergy between sugarcane bagasse biochar content and phytoremediation. A 60-day pot experiment using the perennial tussock vetiver grass (Chrysopogon zizanioides) was conducted at biochar contents of 1%, 2%, and 4% for the removal of cadmium and zinc. A concentration of 500 ppm was maintained for the amended and unamended control (CK) samples. The survival rates of the plants, biomass production, and leaf area index were measured as plant growth characteristics. Results indicate a visible change in plant growth and heavy metal concentration with biochar content. The bioconcentration factor (BCF) in the plant improved significantly, by 57%, for the 4% biochar content in comparison with the control CK treatment in Cd-treated soils. The Zn-treated soils showed the highest reduction in metal concentration, 50%, in the 2% amended samples, and an increase in the BCF in all the amended samples. The translocation from the rhizosphere to the shoots was low, not dependent on the amendment content, and varied with each contaminant type. The root-to-shoot ratio was higher than in the control samples.
The enhanced tolerance capacities can be attributed to the nutrients released by the biochar into the soil. The study reveals the high potential of biochar as a phytoremediation amendment, but its effect depends on the soil, the heavy metal, and the accumulator species.
Keywords: phytoextraction, biochar, heavy metals, Chrysopogon zizanioides, bioaccumulation factor
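The bioconcentration and translocation factors discussed above are simple concentration ratios; a minimal sketch with hypothetical concentrations is:

```python
def bioconcentration_factor(metal_in_plant, metal_in_soil):
    """BCF = metal concentration in plant tissue / concentration in soil."""
    return metal_in_plant / metal_in_soil

def translocation_factor(shoot_conc, root_conc):
    """TF = shoot concentration / root concentration; TF < 1 means the
    metal is mostly retained in the roots rather than the shoots."""
    return shoot_conc / root_conc

bcf = bioconcentration_factor(250.0, 500.0)  # ppm, hypothetical values
tf = translocation_factor(20.0, 230.0)       # ppm, hypothetical values
```

A low TF alongside an improved BCF, as reported above, indicates that the amended plants accumulate more metal overall but still store most of it below ground.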
Procedia PDF Downloads 63
1128 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested for various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, and the comparison showed that classification on MNIST was more accurate than on colored CIFAR-10. This research will evaluate the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. However, quantum deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed and evaluated on colored images to determine how much better they are than classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale, resized to 28 × 28 pixels, and split into 50,000 training and 10,000 test images.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more accurate than traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can outperform classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
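The hybrid encoding step described in this abstract, greyscale conversion followed by mapping pixel intensities to quantum gate rotation angles, can be sketched in plain Python. The luminance weights and the [0, π] angle range are our assumptions for illustration; the abstract does not state which greyscale conversion or encoding convention was used:

```python
import math

def rgb_to_grey(r, g, b):
    # Standard luminance weights (an assumption; the abstract does not say
    # which greyscale conversion was applied to CIFAR-10).
    return 0.299 * r + 0.587 * g + 0.114 * b

def pixel_to_angle(p, max_val=255.0):
    # Map a greyscale intensity in [0, max_val] to a rotation angle in
    # [0, pi], a common angle-encoding convention for quantum feature maps.
    return math.pi * (p / max_val)

# Example: encode one pure-white pixel into a rotation angle.
grey = rgb_to_grey(255, 255, 255)
theta = pixel_to_angle(grey)
print(round(theta, 4))
```

In a full pipeline each such angle would parameterize one rotation gate in the simulator, with measurement results fed back to the classical part of the model.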
Procedia PDF Downloads 128
1127 Sustainability from Ecocity to Ecocampus: An Exploratory Study on Spanish Universities' Water Management
Authors: Leyla A. Sandoval Hamón, Fernando Casani
Abstract:
Sustainability has been integrated into cities' agendas due to the impact that cities generate. The most widely used dimensions of sustainability, taken as a reference here, are economic, social, and environmental. Thus, management decisions in sustainable cities seek a balance between these dimensions in order to provide environment-friendly alternatives. In this context, urban models that manage water consumption, energy consumption, and waste production, among others, in harmony with the environment are known as Ecocities. A similar model, but on a smaller scale, is the 'Ecocampus', developed in universities (considered 'small cities' due to their complex structure). Sustainable practices are thus being implemented in the management of university campus activities, following different relevant lines of work. Universities have a strategic role in society, and their activities can strengthen sustainability policies, strategies, and measures, both internal and external to the organization. Because of their mission in knowledge creation and transfer, these institutions can promote and disseminate more advanced activities in sustainability. Replicating this model also implies challenges in the sustainable management of water, energy, waste, and transportation, among others, inside the campus. The challenge this paper focuses on is water management, taking into account that universities consume large amounts of this resource. The purpose of this paper is to analyze the sustainability experience, with emphasis on water management, of two campuses belonging to two different Spanish universities: one urban campus in a historic city and one suburban campus on the outskirts of a large city. Both universities are in the top hundred of international rankings of sustainable universities.
The study adopts a qualitative methodology based on in-depth interviews and focus-group discussions with administrative and academic staff of the 'Ecocampus' offices, the organizational units for sustainability management, of the two Spanish universities. The hypotheses indicate that sustainable water-management policies work best on campuses without large green spaces and with buildings constructed or renovated in a modern style. The sustainability efforts of a university are independent of the kind of campus (urban or suburban), but an important aspect to improve is the degree of awareness of the university community about water scarcity. In general, the paper suggests that higher education institutions adapt their sustainability policies to the location and features of the campus and their engagement with water conservation. Many Spanish universities have proposed sustainability policies, good practices, and measures; in fact, some Ecocampus offices or centers have been founded. The originality of this study is to learn from the different experiences of universities' sustainability policies.
Keywords: ecocampus, ecocity, sustainability, water management
Procedia PDF Downloads 221
1126 Incorporation of Noncanonical Amino Acids into Hard-to-Express Antibody Fragments: Expression and Characterization
Authors: Hana Hanaee-Ahvaz, Monika Cserjan-Puschmann, Christopher Tauer, Gerald Striedner
Abstract:
Incorporation of noncanonical amino acids (ncAAs) into proteins has become an interesting topic, as proteins featuring ncAAs offer a wide range of different applications. Technologies and systems now exist that allow for the site-specific introduction of ncAAs in vivo, but the efficient production of proteins modified this way remains a major challenge. This is especially true for 'hard-to-express' proteins, where low yields are encountered even with the native sequence. In this study, site-specific incorporation of azido-ethoxy-carbonyl-lysine (azk) into an anti-tumor-necrosis-factor-α Fab (FTN2) was investigated. Possible positions for ncAA incorporation were determined according to well-established parameters, and the corresponding FTN2 genes were constructed. Each of the modified FTN2 variants has one amber codon for azk incorporation, either in its heavy or its light chain. The expression level of all variants produced was determined by ELISA, and all azk variants could be produced at a satisfactory yield in the range of 50-70% of the original FTN2 variant. In terms of expression yield, neither the azk incorporation position nor the subunit modified (heavy or light chain) had a significant effect. We confirmed correct protein processing and azk incorporation by mass spectrometry, and antigen-antibody interaction was determined by surface plasmon resonance analysis. The next step is to characterize the effect of azk incorporation on protein stability and aggregation tendency via differential scanning calorimetry and light scattering, respectively. In summary, the incorporation of ncAAs into our Fab candidate FTN2 worked better than expected. The quantities produced allowed a detailed characterization of the variants in terms of their properties, and we can now turn our attention to potential applications. Using click chemistry, we can equip the Fabs with additional functionalities and make them suitable for a wide range of applications.
As a first application of this option, we will develop an assay that allows us to follow the degradation of the recombinant target protein in vivo. Special focus will be laid on the proteolytic activity in the periplasm and how it is influenced by cultivation/induction conditions.
Keywords: degradation, FTN2, hard-to-express protein, non-canonical amino acids
Procedia PDF Downloads 230
1125 A Decision-Support Tool for Humanitarian Distribution Planners in the Face of Congestion at Security Checkpoints: A Real-World Case Study
Authors: Mohanad Rezeq, Tarik Aouam, Frederik Gailly
Abstract:
In times of armed conflict, authorities place security checkpoints to control the flow of merchandise into and within areas of conflict. The flow of humanitarian trucks added to the regular flow of commercial trucks, together with complex security procedures, creates congestion and long waiting times at these checkpoints. This increases distribution costs and causes shortages of relief aid for the affected population. Our research proposes a decision-support tool to assist planners and policymakers in building efficient plans for the distribution of relief aid, taking into account congestion at security checkpoints. The proposed tool is built around a multi-item humanitarian distribution planning model developed using a multi-phase design science methodology; its objective is to minimize distribution and backordering costs subject to capacity constraints that capture congestion effects through nonlinear clearing functions. Using the 2014 Gaza War as a case study, we illustrate the application of the proposed tool, model the underlying relief-aid humanitarian supply chain, estimate clearing functions at different security checkpoints, and conduct computational experiments. The decision-support tool generated a shipment plan that was compared to two benchmarks in terms of total distribution cost, average lead time and work in progress (WIP) at security checkpoints, and average inventory and backorders at distribution centers. The first benchmark is the shipment plan generated by a fixed-capacity model, and the second is the actual shipment plan implemented by the planners during the armed conflict. According to our findings, modeling and optimizing supply chain flows reduces total distribution costs, average truck waiting times at security checkpoints, and average backorders compared with both the executed plan and the fixed-capacity model.
Finally, scenario analysis concludes that increasing capacity at security checkpoints can lower total operating costs by reducing the average lead time.
Keywords: humanitarian distribution planning, relief-aid distribution, congestion, clearing functions
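The role of a nonlinear clearing function in the abstract above, letting checkpoint throughput saturate as work-in-progress grows instead of assuming a fixed capacity, can be illustrated with a minimal sketch. The saturating form C·W/(k+W) and the numbers below are illustrative assumptions, not the functions estimated in the study:

```python
def clearing_throughput(wip, capacity, k):
    # Saturating (nonlinear) clearing function: expected checkpoint output
    # rises with work-in-progress (WIP) but never exceeds nominal capacity.
    # C*W/(k+W) is a common functional form in the clearing-function
    # literature; the paper's estimated forms may differ.
    return capacity * wip / (k + wip)

def avg_lead_time(wip, capacity, k):
    # Little's law: average lead time = WIP / throughput.
    return wip / clearing_throughput(wip, capacity, k)

# Illustration: a checkpoint with nominal capacity 40 trucks/day and
# curvature parameter k = 10 (both hypothetical values).
for wip in (10, 40, 160):
    thr = clearing_throughput(wip, 40, 10)
    print(wip, round(thr, 1), round(avg_lead_time(wip, 40, 10), 2))
```

Unlike a fixed-capacity model, throughput here degrades gracefully: near saturation, adding more WIP barely raises output while lead time keeps climbing, which is exactly the congestion effect the planning model is meant to capture.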
Procedia PDF Downloads 81
1124 Laboratory Diagnostic Testing of Peste des Petits Ruminants in Georgia
Authors: Nino G. Vepkhvadze, Tea Enukidze
Abstract:
Every year, countries around the world face the risk of the spread of infectious diseases that cause significant ecological and socio-economic damage; hence the emphasis on food product safety, an issue of interest for many countries. Addressing these risks requires preventive measures against the diseases, accurate diagnostic results, leadership, and management. Peste des petits ruminants (PPR) is caused by a morbillivirus closely related to the rinderpest virus. PPR is a transboundary disease as it emerges and evolves, and it is considered one of the most damaging animal diseases. The disease poses a serious threat to sheep breeding at a time when sheep and goat farms are growing significantly within the country. In January 2016, PPR was detected in Georgia. To date, the origin of the virus, the age relationship of affected ruminants, and the distribution of PPRV in Georgia remain unclear. Given the nature of PPR and breeding practices in the country, re-emergence of the disease in Georgia is highly likely. The purpose of these studies is to provide laboratories with efficient tools allowing the early detection of PPR emergence and re-emergence. This study is being accomplished under the Biological Threat Reduction Program with the support of the Defense Threat Reduction Agency (DTRA); its aim is to investigate samples and identify areas at high risk of the disease. Georgia has a high density of free-ranging small ruminant herds close to international borders. The Kakheti region, in Eastern Georgia, is considered a high-priority area for PPR surveillance. For this reason, in 2019, n=484 sheep and goat serum and blood samples (taken from the same animals) from the Kakheti region were investigated using serological and molecular biology methods. All samples were negative by RT-PCR, and n=6 sheep samples were seropositive by ELISA-Ab.
Future efforts will be concentrated in areas where the risk of PPR might be high, such as the international bordering regions of Georgia. For diagnostics, it is important to integrate PPRV knowledge with epidemiological data. Based on these diagnostics, the relevant agencies will be able to conduct disease surveillance and control.
Keywords: animal disease, especially dangerous pathogen, laboratory diagnostics, virus
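The serology result above (6 ELISA-Ab seropositives out of 484 sampled animals) implies an apparent seroprevalence that is easy to compute, together with a 95% confidence interval. The Wilson interval used here is our addition for illustration, not a method reported in the study:

```python
import math

def wilson_ci(x, n, z=1.96):
    # Wilson score interval for a binomial proportion: more reliable than
    # the normal (Wald) interval when the proportion is small, as here.
    phat = x / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return phat, center - half, center + half

# 6 seropositive sheep out of 484 sampled animals (from the abstract).
phat, lo, hi = wilson_ci(6, 484)
print(f"seroprevalence {phat:.2%}, 95% CI ({lo:.2%}, {hi:.2%})")
```

For this sample the point estimate is about 1.2%, with a Wilson interval of roughly 0.6% to 2.7%, a reminder that a handful of positives in a survey of this size still leaves wide uncertainty about the true prevalence.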
Procedia PDF Downloads 114
1123 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems
Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos
Abstract:
As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for the efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications, such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, the road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imagery, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facades, and building floorplans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different hazard scenarios are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation was conducted through client surveys scoring the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score equivalent to 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users at the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry.
This work may be extended to automated mapping and the creation of online spatial databases for better storage and access of real property listings, as well as an interactive platform using web-based GIS.
Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model
Procedia PDF Downloads 157
1122 The Effects of Total Resistance Exercises Suspension Exercises Program on Physical Performance in Healthy Individuals
Authors: P. Cavlan, B. Kırmızıgil
Abstract:
Introduction: Suspension exercises use gravity and body weight, and are thought to develop the balance, flexibility, and body stability necessary for daily activities and sports, in addition to building correct functional strength. Suspension exercises based on body weight treat the human body as an integrated system. Total Resistance Exercises (TRX) is a suspension training system that physiotherapists, athletic health clinics, hospital exercise centers, and chiropractic clinics now use for rehabilitation purposes. The purpose of this study is to investigate and compare the effects of TRX suspension exercises on physical performance in healthy individuals. Method: Healthy subjects were divided into two groups, a study group and a control group, of 40 individuals each, aged 20 to 45, with similar gender distributions. The study group had 2 sessions of suspension exercises per week for 8 weeks, and the control group had no exercises during this period. All participants were given explosive strength, flexibility, strength, and endurance tests before and after the 8-week period. The tests used for evaluation were, respectively, the standing long jump test and single-leg (left and right) long jump tests, the sit-and-reach test, and the sit-up and back extension tests. Results: In the study group, a statistically significant difference was found between pre- and post-tests in all evaluations, including explosive strength, flexibility, core strength, and endurance. These values were higher than the control group's values, and the post-test results differed statistically between the study and control groups. The study group showed improvement in all values. Conclusions: In this study, conducted to investigate and compare the effects of TRX suspension exercises on physical performance, the pre-test results of both groups were similar.
There was no significant difference between the pre- and post-test values in the control group. In the study group, improvements in explosive strength, flexibility, strength, and endurance were achieved after 8 weeks. According to these results, the TRX suspension exercise program improved explosive strength, flexibility, and especially core strength and endurance, and therefore physical performance. Based on the results of our study, physical performance, an indispensable requirement of daily life, was developed by the TRX suspension system. We conclude that TRX suspension exercises can be used to improve explosive strength and flexibility in healthy individuals, as well as to develop the muscle strength and endurance of the core region. Specific investigations could be done in this area so that programs emphasizing TRX's physical performance features can be created.
Keywords: core strength, endurance, explosive strength, flexibility, physical performance, suspension exercises
Procedia PDF Downloads 167
1121 Investigation of Several New Ionic Liquids' Behaviour during ²¹⁰Pb/²¹⁰Bi Cherenkov Counting in Waters
Authors: Nataša Todorović, Jovana Nikolov, Ivana Stojković, Milan Vraneš, Jovana Panić, Slobodan Gadžurić
Abstract:
The detection of ²¹⁰Pb in aquatic environments evokes interest in various scientific studies. Its precise determination is important not only for the radiological assessment of drinking water; the distributions of ²¹⁰Pb and ²¹⁰Po in the marine environment are also significant for assessing particle removal rates from the ocean, particle fluxes during transport along the coast, and particulate organic carbon export in the upper ocean. Measurement techniques for ²¹⁰Pb determination (gamma spectrometry, alpha spectrometry, or liquid scintillation counting (LSC)) are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. One other possibility, however, is to measure ²¹⁰Pb on an LS counter, provided it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. This method is unaffected by chemical quenching and allows easy sample preparation, but has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%. The aim of the research presented in this paper is to investigate a possible increase in the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on a Quantulus 1220 LS counter. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, adding ionic liquids to the counting vials with the analysed samples has the benefit of lowering the detection limit during ²¹⁰Pb quantification. Our results demonstrated that the ionic liquid 1-butyl-3-methylimidazolium salicylate increases the Cherenkov counting efficiency more than the previously explored 2-hydroxypropan-1-amminium salicylate. Consequently, the impact of a few other ionic liquids synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored in order to test their potential influence on Cherenkov counting efficiency.
It was confirmed that, among the ionic liquids explored, only those in the form of salicylates exhibit a wavelength-shifting effect. Namely, the addition of small amounts (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to >70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.
Keywords: liquid scintillation counting, ionic liquids, Cherenkov counting, ²¹⁰Pb/²¹⁰Bi in water
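The more-than-fourfold reduction of the detection threshold reported above follows directly from the efficiency gain if one assumes, as a simplification, that the background count rate is unchanged, so that the minimum detectable activity scales inversely with counting efficiency. A back-of-the-envelope check (the scaling assumption is ours, not stated in the abstract):

```python
def detection_limit_improvement(eff_before, eff_after):
    # If background is unchanged, minimum detectable activity ~ 1/efficiency,
    # so the detection threshold shrinks by the ratio of the efficiencies.
    return eff_after / eff_before

# Efficiencies reported in the abstract: 16% without and >70% with
# 1-butyl-3-methylimidazolium salicylate added to the counting vial.
factor = detection_limit_improvement(0.16, 0.70)
print(round(factor, 2))
```

With the reported efficiencies this gives a factor of about 4.4, consistent with the "more than four times" stated in the abstract.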
Procedia PDF Downloads 100