Search results for: search algorithms
166 Raman Spectroscopy of Fossil-like Feature in Sooke #1 from Vancouver Island
Authors: J. A. Sawicki, C. Ebrahimi
Abstract:
The first geochemical, petrological, X-ray diffraction, Raman, Mössbauer, and oxygen isotopic analyses of the very intriguing 13-kg Sooke #1 stone, covered over 70% of its surface with black fusion crust and recovered from Sooke Basin, near Juan de Fuca Strait, in British Columbia, were reported as poster #2775 at LPSC52 in March. Our further analyses, reported in poster #6305 at 84AMMS in August, and comparisons with the Mössbauer spectra of Martian meteorite MIL03346 and Martian rocks in Gusev Crater reported by Morris et al. suggest that the Sooke #1 find could be a stony achondrite of Martian polymict breccia type ejected from early watery Mars. Here, the Raman spectra of a carbon-rich ~1-mm² fossil-like white area identified on a polished cut surface of this rock have been examined in more detail. The low-intensity 532 nm and 633 nm beams of the Renishaw inVia microscope were used to avoid any destructive effects. The beam was focused through the microscope objective to a 2 µm spot on the sample, and backscattered light collected through this objective was recorded with a CCD detector. Raman spectra of dark areas outside the fossil showed bands of clinopyroxene at 320, 660, and 1020 cm⁻¹ and small peaks of forsteritic olivine at 820-840 cm⁻¹, in agreement with the results of X-ray diffraction and Mössbauer analyses. Raman spectra of the white area showed the broad D band at ~1310 cm⁻¹, consisting of the main A1g mode at 1305 cm⁻¹, the E2g mode at 1245 cm⁻¹, and the E1g mode at 1355 cm⁻¹, due to stretching of diamond-like sp3 bonds in the diamond polytype lonsdaleite, as in the study by Ovsyuk et al. The band near 1600 cm⁻¹ consists mostly of the D2 band at 1620 cm⁻¹ and not of the narrower G band at 1583 cm⁻¹ due to E2g stretching in planar sp2 bonds, which are the fundamental building blocks of the carbon allotropes graphite and graphene. In addition, broad second-order Raman bands were observed with the 532 nm beam at 2150, ~2340, ~2500, 2650, 2800, 2970, 3140, and ~3300 cm⁻¹ shifts. Second-order bands in diamond and other carbon structures are ascribed to combinations of bands observed in the first-order region: here 2650 cm⁻¹ as 2D, 2970 cm⁻¹ as D+G, and 3140 cm⁻¹ as 2G. Nanodiamonds are abundant in the Universe, found in meteorites, interplanetary dust particles, comets, and carbon-rich stars. The diamonds in meteorites are presently being intensely investigated using Raman spectroscopy. Such particles can be formed by the CVD process and during major impact shocks at ~1000-2300 K and ~30-40 GPa. It cannot be excluded that the fossil discovered in Sooke #1 could be a remnant of an alien carbon organism that transformed under shock impact to nanodiamonds. We trust that, for the benefit of research in the astro-bio-geology of meteorites, asteroids, Martian rocks, and soil, this find deserves further, more thorough investigation. If possible, the SHERLOC Raman spectrometer operating on the Perseverance rover should also search for such objects in Martian rocks.
Keywords: achondrite, nanodiamonds, lonsdaleite, Raman spectra
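As a hedged illustration of how a broad D band like the one reported above can be decomposed into its three components (1245, 1305, and 1355 cm⁻¹), the following minimal Python sketch fits three Lorentzian peaks to a synthetic spectrum; it is not the study's actual fitting procedure, and all numerical values other than the peak positions are assumptions.

```python
# Sketch of decomposing a broad Raman D band (~1310 cm-1) into three
# Lorentzian components near 1245, 1305, and 1355 cm-1. Synthetic data only;
# widths, amplitudes, and noise level are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def three_peaks(x, *p):  # p = (center, width, amplitude) for each of 3 peaks
    return sum(lorentzian(x, *p[i:i + 3]) for i in range(0, 9, 3))

shift = np.linspace(1150, 1450, 600)                       # Raman shift, cm-1
true = three_peaks(shift, 1245, 25, 0.4, 1305, 30, 1.0, 1355, 25, 0.5)
spectrum = true + np.random.default_rng(1).normal(0, 0.02, shift.size)

guess = [1245, 20, 0.5, 1305, 20, 1.0, 1355, 20, 0.5]
params, _ = curve_fit(three_peaks, shift, spectrum, p0=guess)
print(np.round(params.reshape(3, 3), 1))  # fitted (center, width, amplitude) per mode
```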
Procedia PDF Downloads 151
165 Mobile Learning in Developing Countries: A Synthesis of the Past to Define the Future
Authors: Harriet Koshie Lamptey, Richard Boateng
Abstract:
Mobile learning (m-learning) is a novel approach to knowledge acquisition and dissemination and is gaining global attention. Steady progress in wireless technologies and the portability of communication devices continues to broaden the scope and use of mobiles. With the convergence of Web functionality onto mobile platforms and the affordability and availability of mobile technology, m-learning has the potential to become the next prevalent channel of education in both formal and informal settings. There is substantive literature on developed countries, but the state of m-learning in developing countries (DCs) appears vague. This paper is a synthesis of extant literature on mobile learning in DCs. The research interest is based on the fact that in DCs, mobile communication and internet connectivity are popular; however, their use in education is underexplored. There are some reviews on the state, conceptualizations, trends, and teacher education, but to the authors' knowledge, no study has focused on mobile learning adoption and integration issues. This study examines issues and gaps associated with its adoption and integration in higher education institutions in DCs. A qualitative build-up of literature was conducted using articles pooled from electronic databases (Google Scholar and ERIC). To establish criteria for inclusion and incorporate diverse study perspectives, the search terms used were m-learning, DCs, higher education institutions, challenges, benefits, impact, gaps, and issues. The synthesis revealed that though mobile technology has diffused globally, its pedagogical pursuit in DCs remains quite low. The absence of a mobile Web and the difficulty of converting resources into mobile format due to lack of funding and technical competence are stumbling blocks. Again, the lack of established design and implementation rules to guide the development of m-learning platforms in DCs is a hindrance. The absence of access restrictions on devices poses security threats to institutional systems. Negative perceptions that devices are taking over faculty roles lead to resistance in some situations, and resistance to change can be a hindrance to the acceptance and success of new systems. Lack of interest in m-learning is also attributed to lower technological literacy levels among the underprivileged masses. Scholarly work on m-learning in DCs is yet to mature. Most technological innovations are handed down from developed countries, and this constantly creates a lag for DCs. A lack of theoretical grounding was also identified, which reduces the objectivity of study reports. The socio-cultural terrain of DCs results in societies with different views and needs, which has been identified as a hindrance to research. Institutional commitment decisions, adequate funding for the necessary infrastructural development, as well as multiple stakeholder participation are important for project success. Evidence suggests that while adoption decisions are readily made, successful integration of the concept, so that its full benefits can be realized, is often neglected. Recommendations based on the findings were made to provide possible remedies to the identified issues.
Keywords: developing countries, higher education institutions, mobile learning, literature review
Procedia PDF Downloads 225
164 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases
Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar
Abstract:
The farming community in India, as well as in other parts of the world, is one of the most highly stressed communities due to reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, which in extreme cases has led to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of the crop's lifecycle to prevent damage and loss in revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNN) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers, and dropouts (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to take the weights learnt on the ImageNet dataset and adapt them to crop diseases, which reduces the number of training epochs. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift, and blurring are used to improve accuracy with images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases. The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can be easily extended to other crops. The advancement of technology in computing and the availability of large data have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementing these models is high, resulting in timely advice to farmers and thus increasing farmers' income and reducing input costs.
Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning
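As a hedged sketch of the transfer-learning and augmentation pipeline described above, the minimal Keras example below freezes an ImageNet-pretrained backbone and adds rotation, zoom, and shift augmentation; the specific backbone (MobileNetV2), input size, layer sizes, and 10-class output are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of transfer learning with data augmentation for leaf-disease
# classification. Backbone, input size, and the 10-class output are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # e.g., tomato diseases; adjust to the dataset at hand

# Augmentation: rotation, zoom, and shift, as mentioned in the abstract
augment = tf.keras.Sequential([
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
])

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse ImageNet weights; fine-tune later if needed

model = models.Sequential([
    layers.Input((224, 224, 3)),
    augment,
    layers.Rescaling(1.0 / 127.5, offset=-1),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # dropout to limit overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```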
Procedia PDF Downloads 119
163 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend movement of stock closing prices and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting, and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focused on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data span 2002-06-18 to 2016-12-30, validation data 2017-01-02 to 2019-12-31, and testing data 2020-01-02 to 2022-03-17. We define the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate as benchmarks that we should outperform. We compared the return of our machine-learning-based portfolio on test data with the returns of the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate. We assessed model performance with the help of the ROC-AUC score and lift charts. We used Logistic Regression, Gradient Boosting, and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the uptrends and downtrends of the five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions in order to generate a model-based portfolio, the model-based portfolio failed on the test dataset. Model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform non-model portfolio strategies on the test dataset. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results are consistent with the Random Walk Theory, which claims that stock prices or price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, we failed on the test dataset. We implemented Random Forest, Gradient Boosting, and Logistic Regression and discovered that the complex models did not provide an advantage or additional performance compared with Logistic Regression. More complexity did not lead to better performance; using a complex model is not the answer to the stock-related prediction problem. Our approach was to predict the trend instead of the price, which converted our problem into classification. However, this labeling approach does not solve the stock prediction problem, nor does it refute the Random Walk Theory for stock prices.
Keywords: stock prediction, portfolio optimization, data science, machine learning
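A minimal sketch of the grid-searched trend classifiers and ROC-AUC evaluation described above might look like the following; the feature columns, file name, target column, and hyperparameter grids are illustrative assumptions, and a production version would also respect the chronological train/validation/test split when tuning.

```python
# Sketch of 20-day trend classification with grid-searched models and ROC-AUC.
# Feature columns, target name, and hyperparameter grids are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score

df = pd.read_csv("stock_features.csv", parse_dates=["date"])  # hypothetical file
train = df[(df.date >= "2002-06-18") & (df.date <= "2016-12-30")]
test = df[(df.date >= "2020-01-02") & (df.date <= "2022-03-17")]

features = [c for c in df.columns if c not in ("date", "uptrend_20d")]
X_tr, y_tr = train[features], train["uptrend_20d"]  # 1 if price trends up over next 20 days

models = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1, 10]}),
    "gb": (GradientBoostingClassifier(), {"n_estimators": [100, 300], "max_depth": [2, 3]}),
    "rf": (RandomForestClassifier(), {"n_estimators": [200, 500], "max_depth": [3, 6]}),
}
for name, (est, grid) in models.items():
    search = GridSearchCV(est, grid, scoring="roc_auc", cv=5).fit(X_tr, y_tr)
    auc = roc_auc_score(test["uptrend_20d"], search.predict_proba(test[features])[:, 1])
    print(name, search.best_params_, f"test ROC-AUC={auc:.3f}")
```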
Procedia PDF Downloads 80
162 Use of End-Of-Life Footwear Polymer EVA (Ethylene Vinyl Acetate) and PU (Polyurethane) for Bitumen Modification
Authors: Lucas Nascimento, Ana Rita, Margarida Soares, André Ribeiro, Zlatina Genisheva, Hugo Silva, Joana Carvalho
Abstract:
The footwear industry is an essential fashion industry, focused on producing various types of footwear, such as shoes, boots, sandals, sneakers, and slippers. Global footwear consumption has doubled every 20 years since the 1950s. It is estimated that in 1950, each person consumed one new pair of shoes yearly; by 2005, over 20 billion pairs of shoes were consumed. To meet global footwear demand, production reached $24.2 billion, equivalent to about $74 per person in the United States. This means three new pairs of shoes per person worldwide. The issue of footwear waste is related to the fact that shoe production can generate a large amount of waste, much of which is difficult to recycle or reuse. This waste includes scraps of leather, fabric, rubber, plastics, toxic chemicals, and other materials. The search for alternative solutions for waste treatment and valorization is increasingly relevant in the current context, mainly when focused on utilizing waste as a source of substitute materials. From the perspective of the new circular economy paradigm, this approach is of utmost importance, as it aims to preserve natural resources and minimize the environmental impact associated with sending waste to landfills. In this sense, the incorporation of waste into industrial sectors that can recover large volumes, such as road construction, becomes an urgent and necessary solution from an environmental standpoint. This study explores the use of plastic waste from the footwear industry as a substitute for virgin polymers in bitumen modification, a solution that points towards a more sustainable future. Replacing conventional polymers with plastic waste in asphalt composition reduces the amount of waste sent to landfills and offers an opportunity to extend the lifespan of road infrastructures. By incorporating waste into construction materials, it is possible to reduce the consumption of natural resources and the emission of pollutants, promoting a more circular and efficient economy. In the initial phase of this study, waste materials from end-of-life footwear were selected, and the plastic waste with the highest potential for application was separated. Based on a literature review, EVA (ethylene vinyl acetate) and PU (polyurethane) were identified as polymers suitable for modifying 50/70 penetration-grade bitumen. Each polymer was analysed at concentrations of 3% and 5%. The production process involved fragmenting the polymer to a size of 4 millimetres, heating the materials to 180 ºC, and mixing for 10 minutes at low speed; the blend was then mixed for 30 minutes in a high-speed mixer. The tests included penetration, softening point, viscosity, and rheological assessments. Based on the test results, the mixtures with EVA performed better than those with PU, as EVA showed greater resistance to temperature, a better viscosity curve, and greater elastic recovery in the rheological tests.
Keywords: footwear waste, hot asphalt pavement, modified bitumen, polymers
Procedia PDF Downloads 15
161 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to establish union relationships between a parent node and a secondary node, also called a "child node". This logical relation (L) can be formally defined as a triple ontological relation (LO) equivalent to ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes to form a rooted tree of ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implemented in SQL code and allows data export to XML schemas, achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also shows that co-associations between (t) and their corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and appears to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of online catalogues for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models after a prior adaptation of the base ontology to structured meta-languages, such as OWL and RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a facet-based semantic indexing approach. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCF, L, BKF⟩, where LN is a set of entities that connect with other nodes to form a rooted tree in ⟨LN, LE⟩, LT specifies a set of terms, and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc only resolves partial problems of linguistic ambiguity (in the case of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target oriented meta-languages with structured documents in XML.
Keywords: hypernymy, information retrieval, lightweight ontology, resonance
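As a hedged illustration of the keyword-in-context (KWIC) extraction and inverse-frequency weighting of co-occurrent terms mentioned above, the short Python sketch below shows one common way such measures are computed; the window size, weighting formula, and sample documents are assumptions, not the Mondoc specification.

```python
# Sketch of keyword-in-context (KWIC) extraction and an IDF-style inverse
# weighting of a term's occurrence across documents. Window size and weighting
# formula are illustrative assumptions.
import math
import re

def kwic(text, keyword, window=5):
    """Return keyword-in-context snippets: `window` tokens on each side."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"{left} [{keyword}] {right}")
    return hits

def inverse_relevance(term, documents):
    """IDF-style weight: terms occurring in many documents weigh less."""
    df = sum(1 for doc in documents if term in re.findall(r"\w+", doc.lower()))
    return math.log(len(documents) / df) if df else 0.0

docs = ["Migrants crossed the Atlantic after the war.",
        "Atlantic migrations rose during the civil war period."]
print(kwic(docs[0], "atlantic"))
print(inverse_relevance("atlantic", docs))
```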
Procedia PDF Downloads 125
160 Neuroanatomical Specificity in Reporting & Diagnosing Neurolinguistic Disorders: A Functional & Ethical Primer
Authors: Ruairi J. McMillan
Abstract:
Introduction: This critical analysis aims to ascertain how well neuroanatomical aetiologies are communicated within 20 case reports of aphasia. Neuroanatomical visualisations based on dissected brain specimens were produced and combined with white matter tract and vascular taxonomies of function in order to address the most consistently underreported features found within the aphasic case study reports. Together, these approaches are intended to integrate aphasiological knowledge from the past 20 years with aphasiological diagnostics and to act as prototypal resources for both researchers and clinical professionals. The medico-legal precedent for aphasia diagnostics under Canadian, US, and UK case law and the neuroimaging/neurological diagnostics relative to the functional capacity of aphasic patients are discussed in relation to the major findings of the literature analysis, neuroimaging protocols in clinical use today, and the neuroanatomical aetiologies of different aphasias. Basic Methodology: Literature searches of relevant scientific databases (e.g., Ovid MEDLINE) were carried out using search terms such as aphasia case study (year) and stroke-induced aphasia case study. A series of 7 diagnostic reporting criteria was formulated, and the resulting case studies were scored out of 7 alongside clinical stroke criteria. In order to focus on the diagnostic assessment of the patient's condition, only the case report proper (not the discussion) was used to quantify results. Statistical testing established whether specific reporting criteria were associated with higher overall scores and potentially inferable increases in quality of reporting. Whether criteria scores were associated with an unclear/adjusted diagnosis was also tested, as well as the probability of a given criterion deviating from an expected estimate. Major Findings: The quantitative analysis of neuroanatomically driven diagnostics in case studies of aphasia revealed particularly low scores in the connection of neuroanatomical functions to aphasiological assessment (10%) and in the inclusion of white matter tracts within neuroimaging or assessment diagnostics (30%). Case studies which included clinical mention of white matter tracts within the report itself were distributed among the higher-scoring cases, as were case studies which (as clinically indicated) related the affected vascular region to the brain parenchyma of the language network. Concluding Statement: These findings indicate that certain neuroanatomical functions are integrated within the patient report less often than others, despite a precedent for well-integrated neuroanatomical aphasiology also being found among the case studies sampled, and despite these functions being clinically essential in diagnostic neuroimaging and aphasiological assessment. Therefore, ultimately, the integration and specificity of aetiological neuroanatomy may contribute positively to the capacity and autonomy of aphasic patients as well as their clinicians. The integration of a full aetiological neuroanatomy within the reporting of aphasias may improve patient outcomes and sustain autonomy in the event of a medico-ethical investigation.
Keywords: aphasia, language network, functional neuroanatomy, aphasiological diagnostics, medico-legal ethics
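A minimal sketch of how case reports might be scored against the 7 reporting criteria and tested for association with overall scores is shown below; the synthetic data, the specific criterion highlighted, and the choice of Mann-Whitney and binomial tests are illustrative assumptions rather than the study's exact statistical procedure.

```python
# Sketch of scoring 20 case reports against 7 reporting criteria and testing
# whether meeting one criterion is associated with higher overall scores.
# Data and test choices are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reports, n_criteria = 20, 7
scores = rng.integers(0, 2, size=(n_reports, n_criteria))  # 1 = criterion met
totals = scores.sum(axis=1)

# Do reports meeting criterion 3 (e.g., white matter tracts) score higher overall?
met = totals[scores[:, 3] == 1]
unmet = totals[scores[:, 3] == 0]
u_stat, p_val = stats.mannwhitneyu(met, unmet, alternative="greater")
print(f"Mann-Whitney U={u_stat}, p={p_val:.3f}")

# Probability of a criterion deviating from an expected 50% reporting rate
k = int(scores[:, 3].sum())
print(f"Binomial test p={stats.binomtest(k, n_reports, p=0.5).pvalue:.3f}")
```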
Procedia PDF Downloads 67
159 Systematic Review of Dietary Fiber Characteristics Relevant to Appetite and Energy Intake Outcomes in Clinical Intervention Trials of Healthy Humans
Authors: K. S. Poutanen, P. Dussort, A. Erkner, S. Fiszman, K. Karnik, M. Kristensen, C. F. M. Marsaux, S. Miquel-Kergoat, S. Pentikäinen, P. Putz, R. E. Steinert, J. Slavin, D. J. Mela
Abstract:
Dietary fiber (DF) intake has been associated with lower body weight or less weight gain. These effects are generally attributed to putative effects of DF on appetite. Many intervention studies have tested the effect of DFs on appetite-related measures, with inconsistent results. However, DF comprises a wide category of different compounds with diverse chemical and physical characteristics, and correspondingly diverse effects in human digestion. Thus, inconsistent results between DF consumption and appetite are not surprising. The specific contribution of different compounds with varying physico-chemical properties to appetite control, and the mediating mechanisms, are not well characterized. This systematic review aimed to assess the influence of specific DF characteristics, including viscosity, gel-forming capacity, fermentability, and molecular weight, on appetite-related outcomes in healthy humans. The Medline and FSTA databases were searched for controlled human intervention trials testing the effects of well-characterized DFs on subjective satiety/appetite or energy intake outcomes. Studies were included only if they reported: 1) fiber name and origin, and 2) data on viscosity, gelling properties, fermentability, or molecular weight of the DF materials tested. The search generated 3001 unique records, 322 of which were selected for further consideration from title and abstract screening. Of these, 149 were excluded due to insufficient fiber characterization and 124 for other reasons (not an original article, not a randomized controlled trial, or no appetite-related outcome), leaving 49 papers meeting all the inclusion criteria, most of which reported results from acute testing (<1 day). The eligible 49 papers described 90 comparisons of DFs in foods, beverages, or supplements. The DF-containing material of interest was efficacious for at least one appetite-related outcome in 51/90 comparisons. Gel-forming DF sources were most consistently efficacious, but there were no clear associations between viscosity, molecular weight, or fermentability and appetite-related outcomes. A considerable number of papers had to be excluded from the review due to shortcomings in fiber characterization. To build understanding of the impact of DF on satiety/appetite specifically, there should be clear hypotheses about the mechanisms behind the proposed beneficial effect of the DF material on appetite, and sufficient data about the DF properties relevant to the hypothesized mechanisms to justify clinical testing. The hypothesized mechanisms should also guide the decision about the relevant duration of exposure in studies, i.e., whether the effects are expected to occur over an acute time frame (related to stomach emptying, digestion rate, etc.) or to develop from sustained exposure (gut-fermentation-mediated mechanisms). More consistent measurement methods and reporting of fiber specifications and characterization are needed to establish reliable structure-function relationships for DF and health outcomes.
Keywords: appetite, dietary fiber, physico-chemical properties, satiety
Procedia PDF Downloads 235
158 Sustainable Urbanism: Model for Social Equity through Sustainable Development
Authors: Ruchira Das
Abstract:
The major metropolises of India are the result of colonial patterns of production, consumption, and sustenance. These cities grew, survived, and sustained themselves on the whims of colonial power and administrative agendas. They were symbols of power, authority, and administration. Among them, some colonial towns remained small towns in the close vicinity of the major metropolises and functioned as self-sufficient units until peripheral development occurred in the metropolises under tremendous pressure. After independence, a huge expansion of the judiciary and the administrative system resulted in city-oriented employment. A large number of people started residing within the city or within commutable distance of the city, which accelerated the expansion of the cities. Since then, budgetary and planning expenditure has brought a new pace to economic activities. Investment in the industrial and agricultural sectors generated employment opportunities, which further led to urbanization. After two decades of budgetary and planning economic activities in India, a new era of metropolitan expansion started: four major metropolises began expanding rapidly towards their suburbs, and the concept of the large metropolitan area developed. Cities became the nuclei of suburbs and rural areas. In most cases, such expansion was not favorable to the relationship between the city and its hinterland due to the absence of a vision of compact sustainable development. The search for solutions needs to weigh the choices between rural- and urban-based development initiatives. Policymakers need to focus on the areas that will give the greatest impact, so that the benefits of development initiatives spread significantly to all. There is an assumption that development integrates economic, social, and environmental considerations with equal weighting. The traditional, narrower, and almost exclusive focus on economic criteria as the determinant of the level of development is thus re-described and expanded: the social and environmental aspects are as important as the economic aspect in achieving sustainable development. The arrangement of opportunities for public and semi-public facilities for citizens is highly relevant to development, and it is the responsibility of the administration to provide opportunities for the basic requirements of its inhabitants. Development should cover both industry and agriculture to maintain a balance between the city and its hinterland. Thus, the policy is to shift the emphasis away from economic growth towards sustainable human development. Policymakers should aim at creating environments in which people's capabilities can be enhanced by effective, dynamic, and adaptable policy. Poverty cannot be eradicated simply by increasing income; improving people's conditions requires an expansion of basic human capabilities. In this scenario, the suburbs and rural areas are considered an environmental burden on the metropolises. A new way of living has to be encouraged in suburban and rural areas. We tend to segregate agriculture from the city and city life, which leads to over-consumption; this urbanism model attempts to let the two coexist and hence creates an interesting overlap of production and consumption networks, moving towards sustainable rurbanism.
Keywords: socio-economic progress, sustainability, social equity, urbanism
Procedia PDF Downloads 306
157 The Relationship between 21st Century Digital Skills and the Intention to Start a Digital Entrepreneurship
Authors: Kathrin F. Schneider, Luis Xavier Unda Galarza
Abstract:
In our modern world, few areas are not permeated by digitalization: we use digital tools for work, study, entertainment, and daily life. Since technology changes rapidly, skills must adapt to the new reality, which gives a dynamic dimension to the set of skills necessary for people's academic, professional, and personal success. The concept of 21st-century digital skills, which includes skills such as collaboration, communication, digital literacy, citizenship, problem-solving, critical thinking, interpersonal skills, creativity, and productivity, has been widely discussed in the literature. Digital transformation has opened many economic opportunities for entrepreneurs in the development of their products, financing possibilities, and product distribution. One of the biggest advantages is the reduction in cost for the entrepreneur, which has opened doors not only for the entrepreneur or the entrepreneurial team but also for corporations through intrapreneurship. The development of students' general literacy and digital competencies is crucial for improving the effectiveness and efficiency of the learning process, as well as for students' adaptation to the constantly changing labor market. The digital economy allows a substantial increase in the supply of conventional and innovative products; this is mainly achieved through five cost reductions characteristic of the digital economy: search, replication, transport, tracking, and verification costs. Digital entrepreneurship worldwide benefits from such achievements, and there is an expansion and democratization of entrepreneurship thanks to the use of digital technologies. The digital transformation that has been taking place in recent years is more challenging for developing countries, as they have fewer resources available to carry out this transformation while offering all the necessary support in terms of cybersecurity and educating their people. The degree of digitization (use of digital technology) in a country and the levels of digital literacy of its people often depend on the economic level and situation of the country. Telefónica's Digital Life Index (TIDL) scores are strongly correlated with country wealth, reflecting the greater resources that richer countries can contribute to promoting "Digital Life". According to the Digitization Index, Ecuador is in the group of "emerging countries", while Chile, Colombia, Brazil, Argentina, and Uruguay are in the group of "countries in transition". According to Herrera Espinoza et al. (2022), there are startups or digital ventures in Ecuador, especially in certain niches, but many of the ventures do not survive beyond six months because they arise out of necessity and not out of opportunity. However, there is a lack of relevant research, especially empirical research, to provide a clearer picture. Through a self-report questionnaire, the digital skills of students at an Ecuadorian private university will be measured according to the six identified 21st-century skills. The results will be tested against the intention to start a digital venture, measured using the theory of planned behavior (TPB). The main hypothesis is that high digital competence is positively correlated with the intention to start a digital entrepreneurship.
Keywords: new literacies, digital transformation, 21st century skills, theory of planned behavior, digital entrepreneurship
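A hedged sketch of testing the main hypothesis (that digital-skill scores correlate positively with entrepreneurial intention) is shown below; the file name, column names, and use of a simple Pearson correlation are illustrative assumptions, not the study's actual instrument or analysis plan.

```python
# Sketch of testing the hypothesis: digital-skill scores correlate positively
# with intention to start a digital venture (TPB-based scale).
# Column names and the data file are illustrative assumptions.
import pandas as pd
from scipy import stats

survey = pd.read_csv("digital_skills_survey.csv")  # hypothetical questionnaire export
skill_cols = ["collaboration", "communication", "digital_literacy",
              "critical_thinking", "creativity", "problem_solving"]
survey["skill_score"] = survey[skill_cols].mean(axis=1)

r, p = stats.pearsonr(survey["skill_score"], survey["entrepreneurial_intention"])
print(f"Pearson r={r:.2f}, p={p:.4f}")  # a positive r would support the hypothesis
```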
Procedia PDF Downloads 105
156 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through counting algorithms, positioning, and recognition of objects that can be measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them for moving pieces from one location to another in their production. These devices must be programmed in advance for good performance and must have a programmed logic routine. Nowadays, production is the main target of every industry, together with quality and the fast execution of the different stages and processes in the production chain of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that organize the figures into cubicles, which also differ from one another by color. This project is based on artificial vision; therefore, the methodology needed to develop it must be strict. It is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with OpenCV libraries; together, these tools are used to implement the program that identifies colors and shapes directly from the camera on the computer. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's webcam or by a specialized camera. 1.3 The recognition of RGB colors is implemented in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes, it is necessary to segment the images: the first step is converting the image from RGB to grayscale to work with the dark tones of the image; then the image is binarized, which leaves the figure in white on a black background; finally, the contours of the figure are found, and the number of edges is used to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for any subsequent process. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
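The short OpenCV sketch below illustrates the grayscale-binarize-contour pipeline and a simple dominant-color check described in the methodology; the threshold method, color rule, and input file are illustrative assumptions rather than the project's exact implementation (which was written with Qt Creator and OpenCV).

```python
# Sketch of the color + shape detection pipeline (grayscale, binarization,
# contour edge counting). Thresholds and color rules are illustrative assumptions.
import cv2

frame = cv2.imread("figure.png")  # hypothetical file; or a frame from cv2.VideoCapture(0)

# Shape: grayscale -> binarize -> contours -> count polygon edges
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.04 * cv2.arcLength(cnt, True), True)
    if len(approx) == 3:
        shape = "triangle"
    elif len(approx) == 4:
        shape = "square"
    else:
        shape = "circle"
    # Color: mean BGR value inside the contour's bounding box
    x, y, w, h = cv2.boundingRect(cnt)
    b, g, r = cv2.mean(frame[y:y + h, x:x + w])[:3]
    color = "red" if r > max(g, b) else "green" if g > b else "blue"
    print(shape, color)  # e.g., send (shape, color) to the conveyor controller
```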
Procedia PDF Downloads 378
155 A Scoping Study and Stakeholder Consultation on Mental Health Determinants among Arab Immigrants and Refugees in North America
Authors: Sarah Elshahat, Tina Moffat
Abstract:
Suboptimal mental health is a considerable global public health challenge that leads to considerable inequalities worldwide. Newcomers are at elevated risk of developing mental health issues as a result of social exclusion, stigmatization, racism, unequal employment opportunities, and discrimination. The problem can be especially serious amongst Arabic-speaking immigrants and refugees (ASIR), whose mental wellness may have already been affected by exposure to political violence, persecution, hunger, or war in their countries of origin. A scoping review was conducted to investigate pre- and post-migration mental health determinants amongst ASIR in North America (the U.S. and Canada), a rapidly growing population in both countries. Pertinent peer-reviewed papers and grey literature were located through a systematic search of five electronic databases (Medline, Embase, PsycINFO, Anthropology Plus, and Sociology Database). A stakeholder consultation was implemented to validate the analyzed findings of the 44 included studies. About 80% of the studies were carried out in the US, underscoring a lack of Canadian research on ASIR mental health. A gap in qualitative, mixed-methods, and longitudinal research was detected, as approximately two-thirds of the studies adopted a cross-sectional method. Pre-migration determinants of mental health were related to the political unrest, violence, and armed conflict in the Arab world, increasing post-traumatic stress disorder and psychological distress levels among ASIR. English language illiteracy and generational variations in acculturation patterns were major post-migration mental health triggering factors. Exposure to domestic violence, stigmatization, poverty, racialization, and harassment were significant post-migration mental health determinants that stem from social inequalities, triggering depression and distress amongst ASIR. Family conflicts linked to child-rearing and gendered norms were considered both pre- and post-migration mental health triggering factors. Most post-migration mental health protective factors were socio-culturally related and included the maintenance of a positive ethnic identity, faith, family support, and community cohesion. Individual resilience, articulated as self-esteem and hope, was a significant negative predictor of depression and psychological distress among ASIR. Community-engaged, mixed-methods, and longitudinal studies are required to address the current gap in mental health research among ASIR in North America. A more thorough determination of potential mental health triggers and protective factors would help inform the development of mental wellness and resilience-promoting programs that are culturally sensitive to ASIR. On the policy level, the Health in All Policies framework of the World Health Organization can be potentially useful for addressing social and health inequalities among ASIR, reducing mental health challenges.
Keywords: depression, post-traumatic stress disorder, psychological distress, resilience
Procedia PDF Downloads 136
154 Physico-Mechanical Behavior of Indian Oil Shales
Authors: K. S. Rao, Ankesh Kumar
Abstract:
The search for alternative energy sources to petroleum has intensified because of growing demand and the depletion of petroleum reserves. Therefore, the importance of oil shales as an economically viable substitute has increased manyfold in the last 20 years. Technologies like hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for the formation of oil shales include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms. These rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach for oil extraction is drastically breaking the bonds of the organics, which involves a retorting process. The two approaches for retorting are surface retorting and in-situ processing; the most environmentally friendly approach is in-situ processing, whose three steps are fracturing, injection to achieve communication, and fluid migration at the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400 °C, the kerogen decomposes into oil, gas, and residual carbon in a process referred to as pyrolysis. Therefore, it is very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observations that these rocks behave anisotropically, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources. Knowing the engineering behavior under the above conditions will allow us to simulate deep-ground retorting conditions numerically and experimentally. Many researchers have investigated the effect of organic content on the engineering behavior of oil shale, but the coupled effect of the organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time. Studies have indicated that these coals and carbonaceous shales constitute the principal source rocks that have generated the hydrocarbons produced from the region. Rock cores of the representative samples are collected by performing on-site drilling, as coring in the laboratory is very difficult due to the rock's highly anisotropic nature. Different tests are performed to understand the petrology of these samples; furthermore, chemical analyses are done to quantify the organic content in these rocks precisely. The mechanical properties of these rocks are investigated by considering different anisotropy angles. The results obtained from petrology and chemical analysis are then correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that the organic content is negatively correlated with tensile strength, compressive strength, and modulus of elasticity.
Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior
Procedia PDF Downloads 347
153 Phenolic Acids of Plant Origin as Promising Compounds for Elaboration of Antiviral Drugs against Influenza
Authors: Vladimir Berezin, Aizhan Turmagambetova, Andrey Bogoyavlenskiy, Pavel Alexyuk, Madina Alexyuk, Irina Zaitceva, Nadezhda Sokolova
Abstract:
Introduction: Influenza viruses infect approximately 5% to 10% of the global human population annually, resulting in serious social and economic damage. Vaccination and etiotropic antiviral drugs are used for the prevention and treatment of influenza. Vaccination is important; however, antiviral drugs represent the second line of defense against newly emerging influenza virus strains for which vaccines may be unsuccessful. A significant drawback of commercial synthetic anti-flu drugs is the appearance of drug-resistant influenza virus strains. Therefore, the search for and development of new anti-flu drugs efficient against drug-resistant strains is an important medical problem today. The aim of this work was to study four phenolic acids of plant origin (gallic, syringic, vanillic, and protocatechuic acids) as possible tools for treatment against the influenza virus. Methods: The phenolic acids (gallic, syringic, vanillic, and protocatechuic) were prepared by extraction from plant tissues and purified using high-performance liquid chromatography fractionation. The avian influenza virus strain A/Tern/South Africa/1/1961 (H5N3) and the human epidemic influenza virus strain A/Almaty/8/98 (H3N2), resistant to the commercial anti-flu drugs Rimantadine and Oseltamivir, were used for testing antiviral activity. Viruses were grown in the allantoic cavity of 10-day-old chicken embryos. The chemotherapeutic index (CTI), determined as the ratio of the average toxic concentration of the tested compound (TC₅₀) to the average effective virus-inhibition concentration (EC₅₀), was used as a criterion of specific antiviral action. Results: The results of the study showed that the structure of the phenolic acids significantly affected their ability to suppress the reproduction of the tested influenza virus strains. The highest antiviral activity among the tested phenolic acids was detected for gallic acid, which contains three hydroxyl groups in the molecule at the C3, C4, and C5 positions. The antiviral activity of gallic acid against the A/H5N3 and A/H3N2 influenza virus strains was higher than that of Oseltamivir and Rimantadine; gallic acid inhibited almost 100% of the infection activity of both tested viruses. Protocatechuic acid, which possesses two hydroxyl groups (C3 and C4), showed weaker antiviral activity in comparison with gallic acid and inhibited less than 10% of virus infection activity. Syringic acid, which contains two hydroxyl groups (C3 and C5), was able to suppress up to 12% of infection activity. Substitution of two hydroxyl groups by methoxy groups resulted in the complete loss of antiviral activity. Vanillic acid, which differs from protocatechuic acid by the replacement of the C3 hydroxyl group with a methoxy group, was able to suppress about 30% of the infection activity of the tested influenza viruses. Conclusion: For pronounced antiviral activity, the molecule of a phenolic acid must have at least two hydroxyl groups. Replacement of hydroxyl groups with methoxy groups leads to a reduction of antiviral properties. Gallic acid demonstrated high antiviral activity against influenza viruses, including Rimantadine- and Oseltamivir-resistant strains, and could be used as a potential candidate for the development of an antiviral drug against the influenza virus.
Keywords: antiviral activity, influenza virus, drug resistance, phenolic acids
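As a small illustration of the chemotherapeutic index used above (CTI = TC₅₀/EC₅₀), the snippet below computes it for hypothetical concentration values; the numbers are made up for demonstration and are not data from the study.

```python
# Sketch of the chemotherapeutic index (CTI = TC50 / EC50) used to rank
# antiviral candidates. The concentration values are made-up examples.
def chemotherapeutic_index(tc50: float, ec50: float) -> float:
    """Ratio of average toxic concentration to average effective concentration."""
    return tc50 / ec50

candidates = {            # hypothetical (TC50, EC50) values in µg/mL
    "gallic acid": (500.0, 12.0),
    "vanillic acid": (400.0, 150.0),
}
for name, (tc50, ec50) in candidates.items():
    print(f"{name}: CTI = {chemotherapeutic_index(tc50, ec50):.1f}")
```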
Procedia PDF Downloads 141
152 A Systematic Review of Antimicrobial Resistance in Fish and Poultry – Health and Environmental Implications for Animal Source Food Production in Egypt, Nigeria, and South Africa
Authors: Ekemini M. Okon, Reuben C. Okocha, Babatunde T. Adesina, Judith O. Ehigie, Babatunde M. Falana, Boluwape T. Okikiola
Abstract:
Antimicrobial resistance (AMR) has evolved to become a significant threat to global public health and food safety. The development of AMR in animals has been associated with antimicrobial overuse. In recent years, the number of antimicrobials used in food animals such as fish and poultry has escalated. It therefore becomes imperative to understand the patterns of AMR in fish and poultry and to map out future directions for better surveillance efforts. This study used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) to assess the trend, patterns, and spatial distribution of AMR research in Egypt, Nigeria, and South Africa. A literature search was conducted through the Scopus and Web of Science databases, in which published studies on AMR between 1989 and 2021 were assessed. A total of 172 articles were relevant for this study. The results showed progressive attention to AMR studies in fish and poultry from 2018 to 2021 across the selected countries. The period between 2018 (23 studies) and 2021 (25 studies) showed a significant increase in AMR publications, with a peak in 2019 (28 studies). Egypt was the leading exponent of AMR research (43%, n=74), followed by Nigeria (40%, n=69), then South Africa (17%, n=29). AMR studies in fish received relatively little attention across countries. The majority of the AMR studies were on poultry in Egypt (82%, n=61), Nigeria (87%, n=60), and South Africa (83%, n=24). Further, most of the studies were on Escherichia and Salmonella species. The antimicrobials most frequently researched were the ampicillin, erythromycin, tetracycline, trimethoprim, chloramphenicol, and sulfamethoxazole groups. Multiple drug resistance was prevalent, as demonstrated by the antimicrobial resistance patterns. In poultry, Escherichia coli isolates were resistant to cefotaxime, streptomycin, chloramphenicol, enrofloxacin, gentamycin, ciprofloxacin, oxytetracycline, kanamycin, nalidixic acid, tetracycline, trimethoprim/sulphamethoxazole, erythromycin, and ampicillin. Salmonella enterica serovars were resistant to tetracycline, trimethoprim/sulphamethoxazole, cefotaxime, and ampicillin. Staphylococcus aureus showed high-level resistance to streptomycin, kanamycin, erythromycin, cefoxitin, trimethoprim, vancomycin, ampicillin, and tetracycline. Campylobacter isolates were resistant to ceftriaxone, erythromycin, ciprofloxacin, tetracycline, and nalidixic acid to varying degrees. In fish, Enterococcus isolates showed resistance to penicillin, ampicillin, chloramphenicol, vancomycin, and tetracycline but were sensitive to ciprofloxacin, erythromycin, and rifampicin. Isolated strains of Vibrio species showed sensitivity to florfenicol and ciprofloxacin but resistance to trimethoprim/sulphamethoxazole and erythromycin. Isolates of Aeromonas and Pseudomonas species exhibited resistance to ampicillin and amoxicillin. Specifically, Aeromonas hydrophila isolates showed sensitivity to cephradine, doxycycline, erythromycin, and florfenicol; however, resistance was also exhibited against augmentin and tetracycline. The findings constitute public and environmental health threats and suggest the need to promote and advance AMR research in other countries, particularly those in global hotspots for antimicrobial use.
Keywords: antibiotics, antimicrobial resistance, bacteria, environment, public health
Procedia PDF Downloads 199
151 Machine Learning and Internet of Things for Smart-Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the challenge of studying these variables can only be overcome by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project for the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that best determine the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m.a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km. To minimize the energy consumption of the devices and avoid collisions between packets, distances should be kept between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the network devices (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM microcontrollers (e.g., the Cortex-M series), and high-performance direct current (DC) to direct current (DC) converters. The machine learning system has initiated the learning of the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning and the data entered into the big data store converge with every second of measurement, providing services to each of the applications of the complex system and returning the best estimates of the determined flows.
Keywords: hydrology, internet of things, machine learning, river basin
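A hedged sketch of letting the software choose among candidate rainfall-runoff models, as described above, is given below; the telemetry file, feature columns, and candidate models are illustrative assumptions, with time-series cross-validation used to respect the temporal ordering of the data.

```python
# Sketch of selecting among candidate models for a rainfall-runoff relationship
# from telemetered basin data. File name, features, and candidates are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

data = pd.read_csv("shullcas_telemetry.csv")   # hypothetical sensor export
X = data[["rainfall_mm", "temperature_c", "soil_moisture"]]
y = data["runoff_m3s"]

candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
cv = TimeSeriesSplit(n_splits=5)  # respect the temporal ordering of the series
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {-scores.mean():.2f} m3/s")
```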
Procedia PDF Downloads 160
150 Assessing Sexual and Reproductive Health Literacy and Engagement Among Refugee and Immigrant Women in Massachusetts: A Qualitative Community-Based Study
Authors: Leen Al Kassab, Sarah Johns, Helen Noble, Nawal Nour, Elizabeth Janiak, Sarrah Shahawy
Abstract:
Introduction: Immigrant and refugee women experience disparities in sexual and reproductive health (SRH) outcomes, partially as a result of barriers to SRH literacy and to regular healthcare access and engagement. Despite existing data highlighting growing needs for culturally relevant and structurally competent care, interventions are scarce and not well documented. Methods: In this IRB-approved study, we used a community-based participatory research approach, with the assistance of a community advisory board, to conduct a qualitative needs assessment of SRH knowledge and service engagement with immigrant and refugee women from Africa or the Middle East currently residing in Boston. We conducted a total of nine focus group discussions (FGDs) in partnership with medical, community, and religious centers, in six languages: Arabic, English, French, Somali, Pashto, and Dari. A total of 44 individuals participated. We explored migrant and refugee women's current and evolving SRH care needs and gaps, specifically related to the development of interventions and clinical best practices targeting SRH literacy, healthcare engagement, and informed decision-making. Recordings of the FGDs were transcribed verbatim and translated by interpreter services. We used open coding with multiple coders who resolved discrepancies through consensus and iteratively refined our codebook while coding data in batches using Dedoose software. Results: Participants reported immigrant adaptation experiences, discrimination, and feelings of trust, autonomy, privacy, and connectedness to family, community, and the healthcare system as factors surrounding SRH knowledge and needs. SRH knowledge was commonly noted to have been learned in school, at menstruation, before marriage, from family members, partners, and friends, and from online search engines. Common themes included the empowering strength drawn from religious and cultural communities, difficulties bridging educational gaps with their US-born daughters, and a desire for more SRH education from multiple sources, including family, healthcare providers, and religious experts and communities. Regarding further SRH education, participants' preferences varied with respect to the ideal platform (virtual vs. in-person), location (in religious and community centers or not), smaller group sizes, and the involvement of men. Conclusions: Based on these results, empowering SRH initiatives should include both community- and religious-center-based, as well as clinic-based, interventions. Interventions should be composed of frequent educational workshops in small groups involving age-grouped women, daughters, and (sometimes) men, tailored SRH messaging, and the promotion of culturally, religiously, and linguistically competent care.
Keywords: community, immigrant, religion, sexual & reproductive health, women's health
Procedia PDF Downloads 127149 Nigerian Football System: Examining Micro-Level Practices against a Global Model for Integrated Development of Mass and Elite Sport
Authors: Iorwase Derek Kaka’an, Peter Smolianov, Steven Dion, Christopher Schoen, Jaclyn Norberg, Charles Gabriel Iortimah
Abstract:
This study examines the current state of football in Nigeria to identify the country's practices, which could be useful internationally, and to determine areas for improvement. Over 200 sources of literature on sport delivery systems in successful sports nations were analyzed to construct a globally applicable model of elite football integrated with mass participation, comprising the following three levels: the macro level (socio-economic, cultural, legislative, and organizational), the meso level (infrastructures, personnel, and services enabling sports programs), and the micro level (operations, processes, and methodologies for the development of individual athletes). The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. It has recently been utilized for further understanding such sports systems as US rugby, tennis, soccer, swimming, and volleyball, as well as Dutch and Russian swimming. A questionnaire was developed using the above-mentioned model. Survey questions were validated by 12 experts including academicians, executives from sports governing bodies, football coaches, and administrators. To identify best practices and determine areas for improvement of football in Nigeria, 116 coaches completed the questionnaire. Useful exemplars and possible improvements were further identified through semi-structured discussions with 10 Nigerian football administrators and experts. Finally, a content analysis of the Nigeria Football Federation's website and organizational documentation was conducted. This paper focuses on the micro level of Nigerian football delivery, particularly talent search and development as well as advanced athlete preparation and support. Results suggested that Nigeria could share such progressive practices as the provision of football programs in all schools and full-time coaches paid by governments based on the level of coach education. Nigerian football administrators and coaches could provide better football services, affordable for all, where success in mass and elite sports is guided by science focused on athletes' needs. International best practices could also be better implemented, such as lifelong guidelines for the health and excellence of everyone and the integration of fitness tests into player development and ranking, as done in the best Dutch, English, French, Russian, Spanish, and other European clubs; the integration of educational and competitive events for elite and developing athletes as well as fans, as done at the 2018 World Cup in Russia; and academies with multi-stage athlete nurturing, as done by Ajax in Africa as well as FC Barcelona and other top clubs expanding across the world. The methodical integration of these practices into the balanced development of mass and elite football will help contribute to international sports success as well as national health, education, crime control, and social harmony in Nigeria.Keywords: football, high performance, mass participation, Nigeria, sport development
Procedia PDF Downloads 70148 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death in the world. Therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative ones are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using joint ST-T derived features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize SVM hyperparameters by using the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. As a result of applying the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detections of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy state, the detection of acute myocardial ischemia based on ECG recordings of the patients obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint probability density function (pdf) of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine
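
The two detection schemes described above can be sketched in a few lines. The feature values here are synthetic stand-ins rather than the study's ST/T-derived features, and the hyperparameter grid and the 5% false-alarm quantile are illustrative assumptions; which state the GMM models follows the abstract's wording but is simplified.

# Illustrative sketch only: an SVM tuned by 10-fold grid search, and a GMM whose
# log-likelihood is thresholded in a Neyman-Pearson fashion to flag outliers.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X_normal = rng.normal(0.0, 1.0, size=(300, 4))    # stand-in for pre-inflation ST/T features
X_ischemic = rng.normal(1.5, 1.2, size=(300, 4))  # stand-in for occlusion ST/T features
X = np.vstack([X_normal, X_ischemic])
y = np.r_[np.zeros(300), np.ones(300)]

# SVM with linear and RBF kernels, hyperparameters chosen by 10-fold cross-validation.
grid = GridSearchCV(
    SVC(),
    {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]},
    cv=10,
)
grid.fit(X, y)
print("best SVM params:", grid.best_params_)

# GMM fitted on one state's features; a sample whose log-likelihood falls below a
# threshold chosen from a target false-alarm quantile is treated as an outlier.
gmm = GaussianMixture(n_components=2, random_state=1).fit(X_ischemic)
threshold = np.quantile(gmm.score_samples(X_ischemic), 0.05)  # ~5% false-alarm rate
flags = gmm.score_samples(X_normal) < threshold
print("fraction of the other state's samples flagged as outliers:", flags.mean())
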
Procedia PDF Downloads 162147 Improving a Stagnant River Reach Water Quality by Combining Jet Water Flow and Ultrasonic Irradiation
Authors: A. K. Tekile, I. L. Kim, J. Y. Lee
Abstract:
Human activities put freshwater quality at risk, mainly due to the expansion of agriculture and industries, damming, diversion, and the discharge of inadequately treated wastewaters. Rapid human population growth and climate change have escalated the problem. External controlling actions on point and non-point pollution sources are a long-term solution for managing water quality. For a holistic approach, these mechanisms should be coupled with in-water control strategies. The available in-lake or in-river methods are either costly or have adverse effects on the ecological system, so the search for an alternative, effective, and reasonably balanced solution is still ongoing. This study aimed at the physical and chemical water quality improvement of a stagnant Yeo-cheon River reach (Korea), which has recently shown signs of water quality problems such as scum formation and fish death. The river water quality was monitored for three months: only the water flow generator was operated in the first two weeks, and then the ultrasonic irradiation device was coupled to the flow unit for the remaining duration of the experiment. In addition to assessing the water quality improvement, the correlation among the parameters was analyzed to explain the contribution of the ultra-sonication. Generally, the combined strategy showed localized improvement of water quality in terms of dissolved oxygen, chlorophyll-a, and dissolved reactive phosphate. At locations under limited influence of the system operation, chlorophyll-a increased markedly, but within 25 m of the operation the low initial value was maintained. The inverse correlation coefficient between dissolved oxygen and chlorophyll-a decreased from 0.51 to 0.37 when the ultrasonic irradiation unit was used with the flow, showing that ultrasonic treatment reduced the chlorophyll-a concentration and inhibited photosynthesis. The relationship between dissolved oxygen and reactive phosphate also indicated that the influence of ultra-sonication on the reactive phosphate concentration was higher than that of flow. Even though flow increased turbidity by suspending sediments, ultrasonic waves canceled out the effect due to the agglomeration of suspended particles and their subsequent settling. Interactions within the water column also varied, as the decrease of pH and dissolved oxygen from the surface to the bottom played a role in phosphorus release into the water column. The variation of nitrogen and dissolved organic carbon concentrations showed mixed trends, probably due to the complex chemical reactions following the operation. In addition, the intensive rainfall and strong wind around the end of the field trial had an apparent impact on the results. The combined effect of water flow and ultrasonic irradiation was a cumulative water quality improvement, and it maintained the dissolved oxygen and chlorophyll-a levels required for healthy ecological interaction in the river. However, the overall improvement of water quality is not guaranteed, as the effectiveness of ultrasonic technology requires long-term monitoring of water quality before, during, and after treatment. Even though the short duration of the study limited the characterization of nutrient patterns, the use of ultrasound at field scale to improve water quality is promising.Keywords: stagnant, ultrasonic irradiation, water flow, water quality
Procedia PDF Downloads 193146 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that often incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk for suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has been focused on aerospace systems and conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) as well as popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
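
A toy sketch of the non-deterministic tradespace idea described above: candidate designs are enumerated from component options, each design's multi-attribute utility is estimated by Monte Carlo sampling, and designs are ranked against cost. All component names, attribute weights, utility values, and combination rules are invented for illustration and are not taken from the study.

# Toy non-deterministic multi-attribute tradespace exploration (invented values).
import itertools
import numpy as np

rng = np.random.default_rng(2)

sensors = {"lidar": {"cost": 9, "accuracy": 0.9, "rate": 0.6},
           "radar": {"cost": 5, "accuracy": 0.7, "rate": 0.8},
           "camera": {"cost": 3, "accuracy": 0.6, "rate": 0.9}}
cpus = {"edge": {"cost": 2, "rate": 0.5}, "gpu": {"cost": 6, "rate": 0.9}}
fusion = {"kalman": {"accuracy": 0.7}, "bayes_net": {"accuracy": 0.85}}

weights = {"accuracy": 0.6, "rate": 0.4}  # assumed stakeholder attribute weights

def expected_utility(s, c, f, n_samples=1000):
    """Monte Carlo mean of the weighted utility, with +/-10% noise on each attribute."""
    acc = 0.5 * (sensors[s]["accuracy"] + fusion[f]["accuracy"])
    rate = 0.5 * (sensors[s]["rate"] + cpus[c]["rate"])
    noise = rng.normal(1.0, 0.1, size=(n_samples, 2))
    samples = np.clip(np.column_stack([acc * noise[:, 0], rate * noise[:, 1]]), 0, 1)
    return (samples @ np.array([weights["accuracy"], weights["rate"]])).mean()

designs = []
for s, c, f in itertools.product(sensors, cpus, fusion):
    cost = sensors[s]["cost"] + cpus[c]["cost"]
    designs.append((expected_utility(s, c, f), cost, (s, c, f)))

# Rank by utility per unit cost; a fuller tradespace study would plot utility
# against cost and examine the Pareto frontier instead.
for u, cost, d in sorted(designs, key=lambda t: t[0] / t[1], reverse=True)[:3]:
    print(f"{d}: expected utility={u:.2f}, cost={cost}")
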
Procedia PDF Downloads 183145 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review
Authors: Zihan Geng, L. Quentin Dixon
Abstract:
It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies have also reported null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children’s executive function and attention control. The publication dates of the studies included in the review range from 2010 to 2017. The key search terms were bilingual, bilingualism, children, executive control, executive function, and attention. The key terms were combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the child groups. The initial search yielded 137 distinct articles. Twenty-eight studies from 27 articles with a total of 3367 participants were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data was collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in Western countries, mainly in the U.S., Canada, and the UK. A variety of languages such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese were involved. In relation to cognitive outcomes, the studies examined children’s overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention. The results indicate that though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences are observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating that there is no bilingual benefit in the cooperation of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition measured in conflict tasks, but not better response inhibition measured by delay tasks. Further, bilingual children showed better inhibitory control in response to bivalent displays, which resembles the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children’s attention system associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend strongly on the cognitive assessments used. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should report the validity of the cognitive measures consistently.Keywords: attention, bilingual advantage, children, executive function
Procedia PDF Downloads 185144 Cultural Competence in Palliative Care
Authors: Mariia Karizhenskaia, Tanvi Nandani, Ali Tafazoli Moghadam
Abstract:
Hospice palliative care (HPC) is one of the most complicated philosophies of care, in which the physical, social/cultural, and spiritual aspects of human life are intermingled, each playing an undeniably significant role. Among these dimensions of care, culture occupies an outstanding position in determining the processes and goals of HPC. This study shows the importance of cultural elements in the establishment of effective and optimized structures of HPC in the Canadian healthcare environment. Our systematic search included Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 1998 to 2023 to identify recent national literature connecting culture and palliative care delivery. The most frequently presented feature among the articles is the role of culture in the efficiency of HPC. It has been shown frequently that including the culture-specific parameters of each nation in this system of care is vital for its success. On the other hand, ignorance of the distinctive cultural trends of a specific location has been accompanied by significant failure rates. Accordingly, implementing a culturally adaptable approach is mandatory for multicultural societies. A further outcome of research studies in this field underscores the importance of culture-oriented education for healthcare staff. Thus, all the practitioners involved in HPC will recognize the importance of traditions, religions, and social habits in addressing care requirements. Cultural competency training is a telling example of this strategy in healthcare and has come to the aid of HPC in recent years. Another complexity of culturally informed HPC nowadays is the long-standing issue of racialization. Systematic and subconscious deprivation of minorities has always been an obstacle to advanced levels of care. The last part of the constellation of our research outcomes comprises the ethical considerations of culturally driven HPC. This part is the most sophisticated aspect of our topic because almost all the analyses, arguments, and justifications are subjective. While there was no standard measure for ethical elements in clinical studies with palliative interventions, many research teams endorsed applying ethical principles to all involved patients. Notably, interpretations and projections of ethics differ across cultural backgrounds. Therefore, healthcare providers should always be aware of the most respectful methodologies of HPC on a case-by-case basis. Cultural training programs have been utilized as one of the main tactics to improve the ability of healthcare providers to address the cultural needs and preferences of diverse patients and families. In this way, most of the involved healthcare practitioners will be equipped with cultural competence. Attention to the ethical and racial particularities of the clients of this service will boost the effectiveness and fruitfulness of HPC. Canadian society is a colorful compilation of multiple nationalities; accordingly, healthcare clients are diverse, and this diversity is also reflected among HPC patients. This fact justifies the importance of studying all the cultural aspects of HPC to provide optimal care in this vast country.Keywords: cultural competence, end-of-life care, hospice, palliative care
Procedia PDF Downloads 74143 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining
Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj
Abstract:
Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is characterized as highly turbulent and strongly influenced by modern information and communication technologies, thus forcing SMEs to experience more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is very laborious and time-consuming, or is provided by financial institutions and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights in the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies’ websites. While WM methods have been frequently studied to anticipate growth of sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. The data on SMEs provided by a large Swiss insurance company is used as ground truth data (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms such as the Support Vector Machine, Random Forest and Artificial Neural Network are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies, in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.Keywords: data mining, SME growth, success factors, web mining
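
The classifier comparison described in this abstract can be sketched as follows; the features, labels, and ROC-AUC metric are placeholders standing in for the insurer-provided growth-labeled SME records and the web-mined input features, not the study's actual data or evaluation protocol.

# Hedged sketch: comparing the three classifier families named above on
# synthetic growth-labeled records.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Placeholder features: e.g. firm age, employees, website word count, update frequency.
X = rng.normal(size=(800, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=800) > 0).astype(int)  # growth label

models = {
    "svm": make_pipeline(StandardScaler(), SVC()),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=3),
    "neural_net": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=3)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC-AUC = {auc:.3f}")
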
Procedia PDF Downloads 267142 Effect of Renin Angiotensin Pathway Inhibition on the Efficacy of Anti-programmed Cell Death (PD-1/L-1) Inhibitors in Advanced Non-small Cell Lung Cancer Patients- Comparison of Single Hospital Retrospective Assessment to the Published Literature
Authors: Esther Friedlander, Philip Friedlander
Abstract:
The use of immunotherapy that inhibits programmed death-1 (PD-1) or its ligand PD-L1 confers survival benefits in patients with non-small cell lung cancer (NSCLC). However, approximately 45% of patients experience primary treatment resistance, necessitating the development of strategies to improve efficacy. While the renin-angiotensin system (RAS) has systemic hemodynamic effects, tissue-specific regulation exists along with modulation of immune activity, in part through regulation of myeloid cell activity, leading to the hypothesis that RAS inhibition may improve anti-PD-1/L-1 efficacy. A retrospective analysis was conducted of 173 advanced solid tumor cancer patients treated with a PD-1/L-1 inhibitor over a defined time period at Valley Hospital, a community hospital in New Jersey, USA; it showed a statistically significant relationship between RAS pathway inhibition (RASi, through concomitant treatment with an ACE inhibitor or angiotensin receptor blocker) and positive response to the immunotherapy that was independent of age, gender, and cancer type. Subset analysis revealed a strong numerical benefit for efficacy in patients with both squamous and nonsquamous NSCLC, as determined by documented clinician assessment of efficacy and by duration of therapy. A PubMed literature search was then conducted to identify studies assessing the effect of RAS pathway inhibition on anti-PD-1/L1 efficacy in advanced solid tumor patients and to compare these findings with those of the Valley Hospital retrospective study, with a focus on NSCLC specifically. A total of 11 articles were identified assessing the effects of RAS pathway inhibition on the efficacy of checkpoint inhibitor immunotherapy in advanced cancer patients. Of the 11 studies, 10 assessed the effect on survival of RASi in the context of treatment with anti-PD-1/PD-L1, while one assessed the effect on CTLA-4 inhibition. Eight of the studies included patients with NSCLC, while the remaining two were specific to genitourinary malignancies. Of the 8 studies, two were specific to NSCLC patients, with the remaining 6 studies including a range of cancer types, of which NSCLC was one. Of these 6 studies, only 2 reported specific survival data for the NSCLC subpopulation. Patient characteristics, multivariate analysis data, and efficacy data from the 2 NSCLC-specific studies and the 2 basket studies that provided data on the NSCLC subpopulation were compared with those of the Valley Hospital retrospective study, supporting a broader effect of RASi on anti-PD-1/L1 efficacy in advanced NSCLC, with the majority of studies showing statistically significant benefit or strong statistical trends but one study demonstrating worsened outcomes. This comparison of studies extends published findings to the community hospital setting and supports prospective assessment of efficacy in NSCLC patients through randomized clinical trials with pharmacodynamic components to determine the effect on immune cell activity in tumors and on the composition of the tumor microenvironment.Keywords: immunotherapy, cancer, angiotensin, efficacy, PD-1, lung cancer, NSCLC
Procedia PDF Downloads 69141 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems to determine the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it’s worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases, and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM or sentiment analysis (SA) has been mainly performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm that has been commonly used in computer vision (CV) problems has lately started to shape NLP approaches and language models (LMs). This gave a sudden rise to the use of pretrained language models (PTMs), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, NER (Named-Entity Recognition), Question Answering (QA), and so forth. In this study, the traditional and modern NLP approaches have been evaluated for OM by using a sizable corpus belonging to a large private company containing about 76,000 comments in Turkish: an SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and it is based on convolutional neural networks. The BERT model used here is monolingual and is based on transformer neural networks. It uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase of the architecture, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since the experiments showed that their contribution to model performance was insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that using deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over the SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and the SVM around 83%. The MUSE multilingual model shows better results than the SVM, but it still performs worse than the monolingual BERT model.Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
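
The traditional baseline described above, an SVM over bag-of-n-gram features, can be sketched in a few lines. The toy Turkish comments and labels below are invented examples; the study's 76,000-comment corpus and the MUSE/BERT fine-tuning pipelines are not reproduced here.

# Minimal bag-of-n-grams SVM baseline for opinion mining (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["ürün harika, çok memnun kaldım",        # positive (toy examples)
         "kargo geç geldi ama ürün iyi",           # neutral
         "berbat bir deneyim, tavsiye etmiyorum",  # negative
         "fiyatına göre gayet başarılı"]           # positive
labels = ["positive", "neutral", "negative", "positive"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams, no stemming
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["ürün fena değil ama kargo berbat"]))
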
Procedia PDF Downloads 146140 Business Intelligent to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions
Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes
Abstract:
The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers as it stimulates resource circularity in production and consumption systems. A large epistemological literature has explored the principles of CE, but scant attention has been paid to analysing how CE is evaluated, accepted, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis related to effective resource management. Against this background of limited attention, the present work focuses on regional flows in a pilot region of Portugal. By addressing this gap, this study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões, and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and to give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction, and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally highlighted under the heading of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; furthermore, ML is introduced to identify its place in data science, and the differences with respect to topics such as big data analytics, as well as the production problems in which AI and ML can be used, are identified. A lifecycle-based approach is then taken to analyse the use of different methods in each phase, to identify the most useful technologies and the unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are mainly directed at the context of large metropolises, neglecting rural territories; therefore, within this project, a dynamic decision support model coupled with artificial intelligence tools and information platforms will be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will go beyond the scientific developments carried out to date and will make it possible to overcome limitations related to the availability and reliability of data.Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning
Procedia PDF Downloads 72139 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems
Authors: Bronwen Wade
Abstract:
Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by performing statistical analysis of how attributes captured in administrative data are related to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more “objective” and “scientific” guides for decision-making instead of subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsychInfo, Proquest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods and development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk of Bias Assessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for risk development, validation, or re-validation studies. ROBINS-I (Risk of Bias in Non-Randomized Studies of Interventions) was used to assess for bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. 11 papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments. This is deeply concerning, as major policy decisions have been made based on a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions for racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reason for racial disproportionality in child welfare or social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality within studies, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice based on their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality
Procedia PDF Downloads 54138 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen
Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin
Abstract:
Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. False initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95%CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had equivalent sensitivity performances compared to the Abbott ARCHITECT HBsAg Qualitative II assay with an average bleed difference since first reactive bleed of 0.13. All bleeds found reactive in ACCESS HBsAg assay were confirmed in ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay
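
The reported specificity intervals are consistent with exact (Clopper-Pearson) binomial confidence limits. A short sketch follows, assuming three unconfirmed reactives among the 6,047 donor samples and three among the 1,023 hospitalized samples; these per-cohort counts are inferred from the six total false positives and the quoted point estimates, not stated explicitly in the abstract.

# Exact binomial (Clopper-Pearson) interval for the reported specificities.
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

for label, x, n in [("blood donors", 6047 - 3, 6047), ("hospitalized patients", 1023 - 3, 1023)]:
    lo, hi = clopper_pearson(x, n)
    print(f"{label}: specificity {x / n:.2%} (95% CI {lo:.2%} - {hi:.2%})")
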
Procedia PDF Downloads 90137 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)
Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula
Abstract:
This contribution focuses on structural optimization in civil engineering using mixed integer non-linear programming (MINLP). MINLP is characterized as a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions on the presence or absence of structural elements within a structure and also select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated that embodies a variety of topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic objective function of the material and labor costs of a structure, or its mass objective function, is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), and in this way gradually refines the solution space toward the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, where a new topology, new materials, and new standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution. Otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality due to the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.Keywords: MINLP, mixed-integer non-linear programming, optimization, structures
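
The mixed discrete/continuous structure of such problems can be illustrated with a toy beam-sizing example. The catalogue of standard depths, the load, and the unit price below are invented, and the discrete choices are handled by simple enumeration with one NLP solve per choice, rather than by the OA/ER decomposition or the MIPSYN package described above.

# Toy mixed discrete/continuous sizing example (not the authors' formulation):
# for each standard section depth (discrete choice) the continuous width is
# sized by an NLP, and the cheapest feasible combination is kept.
from scipy.optimize import minimize

L_mm, q = 6000.0, 20.0                   # span [mm], uniform load [kN/m] (assumed values)
M = q * (L_mm / 1000.0) ** 2 / 8 * 1e6   # max bending moment [N*mm]
f_y = 235.0                              # steel yield strength [N/mm^2]
price = 6e-6                             # cost per mm^3 of steel (placeholder)
standard_depths = [200.0, 240.0, 300.0, 360.0]  # discrete catalogue of depths [mm]

best = None
for h in standard_depths:                # discrete alternatives, handled by enumeration here
    res = minimize(                      # NLP subproblem: size width b for the fixed depth h
        lambda b: price * b[0] * h * L_mm,
        x0=[100.0],
        bounds=[(50.0, 300.0)],
        constraints=[{"type": "ineq", "fun": lambda b: b[0] * h ** 2 / 6 - M / f_y}],
        method="SLSQP",
    )
    if res.success and (best is None or res.fun < best[0]):
        best = (res.fun, h, res.x[0])

print(f"cheapest design: depth={best[1]:.0f} mm, width={best[2]:.1f} mm, cost={best[0]:.2f}")
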
Procedia PDF Downloads 46