Search results for: vehicle and road features-based classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4348

268 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government (e-government) is a way for governments to use new technologies to give people convenient access to government information and services, to improve the quality of those services, and to provide broad opportunities for participation in democratic processes and institutions. It makes government services available to citizens around the clock, which increases satisfaction and participation in political and economic activities. The expansion of e-government services, and their movement towards intelligent operation, can reshape the relationship between the government and its citizens and among the components of government itself. E-government rests on information and communication technology (ICT); implementing ICT at the government level transforms the efficiency and effectiveness of government systems and the way services are delivered, which in turn raises public satisfaction on a wide scale. The core of e-government services has become tangible today through artificial intelligence systems: recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and the classification of data. With deep learning tools, artificial intelligence can significantly improve the delivery of services to citizens and elevate the work of public service professionals, while also inspiring a new generation of technocrats to enter government. This smart revolution may set aside some functions of government and change its components; concepts such as governance, policymaking, and democracy will change in the face of artificial intelligence technology, and the top-down position in governance may face serious challenges. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize the technology, the world order will come to depend on rich multinational companies, and algorithmic systems will in effect become the ruling systems of the world. It can be said that the current revolution in information technology and biotechnology has been started by engineers, large corporations, and scientists who are rarely aware of the political implications of their decisions and certainly do not represent anyone. Therefore, if liberalism, nationalism, or any other ideology is to organize the world of 2050, it must not only make sense of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes that artificial intelligence brings to the political and economic order will thus lead to a major change in how all countries deal with digital globalization. In this paper, while examining the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concepts of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 74
267 Online Course of Study and Job Crafting for University Students: Development Work and Feedback

Authors: Hannele Kuusisto, Paivi Makila, Ursula Hyrkkanen

Abstract:

Introduction: There has been debate about the skills university students should have on graduation. Current thinking holds that, in addition to specific job-related skills, graduates need problem-solving, interaction and networking skills, and self-management skills. Skills required in working life are also addressed in the Finnish national project VALTE (short for 'prepared for working life'), which involves 11 Finnish educational organizations. As one result of this project, a five-credit independent online course in study and job engagement, and in study and job crafting, was developed at Turku University of Applied Sciences. The purpose of this abstract is to present the development work of the online course and the feedback received from its pilots. Method: As the University of Turku is the leading partner of the VALTE project, the collaborative education platform ViLLE (https://ville.utu.fi, developed by the University of Turku) was chosen for the course. Various automatically assessed exercise types were used, for example quizzes, multiple-choice questions, classification exercises, gap-filling exercises, model-answer questions, self-assessment tasks, case tasks, and collaboration in Padlet. In addition, free material and free platforms on the Internet were used (YouTube, Padlet, TodaysMeet, and Prezi), as well as net-based questionnaires on study engagement and study crafting (made with Webropol). Three teachers with long teaching experience (including in job crafting and online pedagogy) and three students working as trainees in the project developed the content of the course. The online course was piloted twice in 2017 as an elective for students at Turku University of Applied Sciences, a higher education institution of about 10,000 students. After both pilots, student feedback was gathered and the course was further developed. Results: The result is a functional five-credit independent online course suitable for students of different educational institutions. The student feedback shows that students themselves felt the course genuinely enhanced their job and study crafting skills. After the course, 91% of the students rated their knowledge of job and study engagement, and of job and study crafting, as good or excellent. About two-thirds of the students planned to make significant use of this knowledge in the future. Students appreciated the variety and game-like feel of the exercises, as well as the opportunity to study online at a time and place of their own choosing. On a five-point scale (1 = poor, 5 = excellent), students graded the clarity of the ViLLE platform 4.2, its functionality 4.0, and its ease of use 3.9.

Keywords: job crafting, job engagement, online course, study crafting, study engagement

Procedia PDF Downloads 138
266 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is nascent but rapidly growing. This is driven in part by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be estimated accurately and rapidly at different spatial scales and resolutions. Conventional tools such as household surveys and interviews do not suffice in this regard: while they are useful for gaining a longitudinal understanding of the welfare of populations, they offer neither adequate spatial coverage nor sufficiently swift implementation. It is this void that satellite imagery fills. Previously, this was near-impossible to implement owing to the sheer volume of data that needed processing; recent advances in machine learning, especially deep learning methods such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented performance, such models lack transparency and explainability and have therefore seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have claimed superhuman performance for AI models, none has directly compared such models with human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning (DL) model using satellite imagery of different resolutions to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel (zoom level 18), while that for the machine learning model was the comparatively lower-resolution Sentinel-2 data at 10 m per pixel for the same cluster locations. The rank correlation coefficients of 0.31 to 0.32 achieved by the human readers were much lower than those attained by the machine learning model, 0.69 to 0.79. This superhuman performance by the model is all the more significant given that it was trained on the lower-resolution 10 m satellite data, while the human readers estimated welfare from the higher 0.6 m resolution imagery in which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers received no training before rating; had they been trained, their performance might have improved. The model's stellar performance also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for the current frontier of deep learning in this domain, eXplainable Artificial Intelligence (XAI), pursued through a collaborative rather than a comparative framework.
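The rank correlation comparison at the heart of the study can be illustrated with a small sketch. The quintile ratings below are invented for illustration, and the plain-Python Spearman implementation is one standard way (among others) to compute such coefficients:

```python
def rank(values):
    # Fractional ranking: tied values share the average of their ranks
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: ground-truth wealth quintiles for ten clusters,
# one noisier set of estimates ("reader") and one closer set ("model")
truth  = [1, 2, 3, 4, 5, 1, 3, 5, 2, 4]
reader = [2, 1, 3, 5, 4, 3, 2, 4, 1, 5]
model  = [1, 2, 4, 4, 5, 1, 3, 5, 2, 4]
print(spearman_rho(truth, reader))  # weaker agreement
print(spearman_rho(truth, model))   # stronger agreement
```

On such quintile data, ties are the norm, which is why the fractional-ranking step matters before the Pearson formula is applied to the ranks.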

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 75
265 Sustainable Business Model Archetypes – A Systematic Review and Application to the Plastic Industry

Authors: Felix Schumann, Giorgia Carratta, Tobias Dauth, Liv Jaeckel

Abstract:

In the last few decades, the rapid growth in the use and disposal of plastic items has led to their overaccumulation in the environment, and plastic pollution has become a subject of global concern. Today, plastics are used as raw materials in almost every industry. While recognition of the ecological, social, and economic impact of plastics in academic research is on the rise, the potential role of the 'plastic industry' in dealing with these issues is still largely underestimated, and the literature on sustainable plastic management remains nascent and fragmented. Working towards sustainability requires a fundamental shift in the way companies employ plastics in their day-to-day business. For that reason, the applicability of the business model concept has recently gained momentum in environmental research. Business model innovation is increasingly recognized as an important driver for re-conceptualizing the purpose of the firm and integrating sustainability into its business, and it can serve as a starting point for investigating whether and how sustainability can be realized under industry- and firm-specific circumstances. Yet there is no comprehensive view of how firms in the plastic industry are refining their business models to embed sustainability in their operations. Our study addresses this gap, looking primarily at the industrial sectors responsible for producing the largest amounts of plastic waste today: plastic packaging, consumer goods, construction, textiles, and transport. Relying on the archetypes of sustainable business models and applying them to these sectors, we identify companies' current strategies for making their business models more sustainable and, based on this thematic clustering, develop an integrative framework for the plastic industry. The findings are underpinned and illustrated by a variety of relevant plastic management solutions that the authors identified through a systematic literature review and analysis of existing, empirically grounded research in this field. Using the archetypes, we can propose business model innovations for the most important sectors in which plastics are used. Moreover, by linking the proposed business model archetypes to the plastic industry, our research approach guides firms in exploring sustainable business opportunities; likewise, researchers and policymakers can use our classification to identify best practices. The authors believe that the study advances current knowledge of sustainable plastic management through its broad empirical industry analyses. The application of business model archetypes in the plastic industry will therefore be useful in shaping companies' transformation towards creating and delivering more sustainability, and it opens avenues for future research.

Keywords: business models, environmental economics, plastic management, plastic pollution, sustainability

Procedia PDF Downloads 74
264 Cytotoxicological Evaluation of a Folate Receptor Targeting Drug Delivery System Based on Cyclodextrins

Authors: Caroline Mendes, Mary McNamara, Orla Howe

Abstract:

For chemotherapy, a drug delivery system should specifically target cancer cells and deliver the therapeutic dose without affecting normal cells. Folate receptors (FRs) can be considered key targets since they are commonly over-expressed in cancer cells, and they are the molecular marker used in this study. Here, cyclodextrin (CD) has been studied as a vehicle for delivering the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within the CD cavity. In this study, β-CD has been modified with folic acid so as to specifically target the FR molecular marker; thus, the drug delivery system studied here consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while cellular uptake of antifolates such as MTX is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From these results, four cell lines with different FR levels were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments also showed that FR protein levels are high in CaCo-2 cells, moderate in SKOV-3, HeLa and MCF-7 cells, and low in A549 and BEAS-2B cells. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays. Forty-eight hours of treatment with the free drug and the complex resulted in IC50 values of 93.9 ± 9.2 µM and 56.0 ± 4.0 µM for free MTX and CDEnFA:MTX, respectively, in CaCo-2 cells, 118.2 ± 10.8 µM and 97.8 ± 12.3 µM in MCF-7, 36.4 ± 6.9 µM and 75.0 ± 8.5 µM in A549, and 132.6 ± 12.1 µM and 288.1 ± 16.3 µM in BEAS-2B. These results demonstrate that MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high-FR-expressing CaCo-2 cells, indicating its potential to target this receptor and so enhance the specificity and efficiency of the drug.
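As a rough illustration of how an IC50 value can be read off viability data, the sketch below interpolates on log concentration between the two doses bracketing 50% viability. The dose-response numbers and the interpolation scheme are hypothetical; they are not the curve-fitting procedure used in the study:

```python
import math

def ic50(concs, viability):
    # Linear interpolation on log10(concentration) between the two doses
    # that bracket 50% viability (assumes viability decreases with dose)
    points = list(zip(concs, viability))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 50 >= v2:
            frac = (v1 - 50) / (v1 - v2)
            log_ic50 = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_ic50
    raise ValueError("50% viability not bracketed by the tested doses")

# Hypothetical dose-response data (concentration in µM, % viability)
concs     = [1, 10, 50, 100, 500]
viability = [95, 80, 60, 45, 20]
print(round(ic50(concs, viability), 1))  # falls between 50 and 100 µM
```

In practice a sigmoidal (e.g. four-parameter logistic) fit is preferred; the interpolation above only shows where the 50% crossing sits.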

Keywords: cyclodextrins, cancer treatment, drug delivery, folate receptors, reduced folate carriers

Procedia PDF Downloads 287
263 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program

Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun

Abstract:

A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. In order to prevent a large-scale traffic accident from causing a great loss of life, and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of the major accident variables. This study aims to analyze the contribution of individual accident variables to accident results, based on accurate reconstruction of traffic accidents using the Multi-Body (MB) system of the accident reconstruction program PC-Crash and simulation of each scenario. The MB system models a body as several connected rigid bodies, reproducing realistic motions in diverse directions that earlier approaches could not capture. Targeting the 'freight truck cargo drop accident around the Changwon Tunnel' of November 2017, this study simulated the cargo drop accident and analyzed the contribution of the individual major variables. Six scenarios were devised on the basis of driving speed, cargo load, and stacking method. The simulation analysis showed that right before the accident the freight truck was driven at 118 km/h (speed limit: 70 km/h), carried 196 oil containers weighing 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with anchoring equipment that could prevent a drop of cargo. The vehicle speed, cargo load, and cargo anchoring equipment were thus the major accident variables, and the contributions of the individual variables are as follows. When the freight truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15% and the number of dropped containers by 39%. When it obeyed only the cargo load limit, the scattering distance decreased by 5% and the number of dropped containers by 34%. When it obeyed both the speed limit and the load limit, the scattering distance fell by 38% and the number of dropped containers by 64%. The scenario analysis revealed that the truck's overspeed and excessive cargo load contributed to the spread of accident damage; a truck whose cargo could not fall produced a different type of accident when driven too fast with an excessive load; and obeying both the speed limit and the cargo load limit gave the lowest likelihood of causing an accident.
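The scenario comparisons above amount to percent reductions relative to the reconstructed baseline run. A toy calculation (with hypothetical baseline values, since the abstract reports only the relative figures) shows the arithmetic:

```python
# Hypothetical baseline outcomes of the reconstructed accident:
# container scattering distance (m) and number of dropped containers
baseline = {"scatter_m": 100.0, "dropped": 100}

def reduction(base, scenario):
    # Percent reduction of each outcome relative to the baseline run
    return {k: round(100 * (base[k] - scenario[k]) / base[k]) for k in base}

# Hypothetical scenario outcomes chosen to mirror the reported reductions
speed_only = {"scatter_m": 85.0, "dropped": 61}
load_only  = {"scatter_m": 95.0, "dropped": 66}
both       = {"scatter_m": 62.0, "dropped": 36}

for name, s in [("speed limit only", speed_only),
                ("load limit only", load_only),
                ("both limits", both)]:
    print(name, reduction(baseline, s))
```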

Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system

Procedia PDF Downloads 181
262 Momentum in the Stock Exchange of Thailand

Authors: Mussa Hussaini, Supasith Chonglerttham

Abstract:

Stocks are usually classified according to characteristics distinctive enough that the performance of each category can be differentiated from the others. The reasons behind such classifications in financial markets are sometimes financial innovation, or the discovery of a premium in a group of stocks with similar features. One of the major classification schemes in stock markets is the momentum strategy, under which stocks are classified according to their past performance into past winners and past losers. Momentum in a stock market refers to the idea that stocks will keep moving in the same direction: in other words, stocks with rising prices (past winners) will continue to rise, and stocks with falling prices (past losers) will continue to fall. The performance of this classification has been well documented in numerous studies in different countries, which suggest that past winners tend to outperform past losers in the future. However, academic research in this direction has been limited in countries such as Thailand and, to the best of our knowledge, there has been no such study in Thailand since the financial crisis of 1997. The significance of this study stems from the fact that Thailand is an open market that has been encouraging foreign investment as a means to enhance employment, promote economic development and transfer technology, and the main equity market in Thailand, the Stock Exchange of Thailand, is a crucial channel for foreign investment inflow into the country. The equity market size in Thailand increased from $1.72 billion in 1984 to $133.66 billion in 1993, an increase of over 77 times within a decade. The main contribution of this paper is evidence on size categories in the context of the Thai equity market: almost all previous studies have focused solely on large stocks or indices, and this paper extends the scope by including small and tiny stocks as well. Further, since there is a distinct absence of detailed academic research on the momentum strategy in the Stock Exchange of Thailand after the crisis, this paper also extends the existing literature, and it is of significance for researchers who would like to compare the performance of this strategy across countries and markets. In the Stock Exchange of Thailand, we examined the performance of the momentum strategy from 2010 to 2014, with portfolio returns calculated on a monthly basis. Our results confirm that there is a positive momentum profit in large-size stocks and a negative momentum profit in small-size stocks during the period 2010 to 2014. Furthermore, the equal-weighted average of the momentum profits of the small and large size categories does not indicate any overall momentum profit.
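The winner-minus-loser construction behind a momentum profit can be sketched as follows. The stock universe, the returns, and the 30% cutoff are hypothetical illustrations, not the paper's data or exact methodology:

```python
def momentum_profit(past_returns, future_returns, frac=0.3):
    # Rank stocks by formation-period return; go long the top fraction
    # (past winners) and short the bottom fraction (past losers), then
    # measure the equal-weighted return spread in the holding period.
    n = len(past_returns)
    k = max(1, int(n * frac))
    order = sorted(range(n), key=lambda i: past_returns[i])
    losers, winners = order[:k], order[-k:]
    long_leg = sum(future_returns[i] for i in winners) / k
    short_leg = sum(future_returns[i] for i in losers) / k
    return long_leg - short_leg  # winner-minus-loser (WML) return

# Hypothetical monthly returns for six stocks
past   = [0.10, -0.05, 0.08, -0.12, 0.02, -0.01]
future = [0.04, -0.02, 0.03, -0.06, 0.01, 0.00]
print(momentum_profit(past, future))  # positive => momentum persisted
```

A negative value of this spread, as the study reports for small-size stocks, would indicate reversal rather than momentum over the holding period.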

Keywords: momentum strategy, past loser, past winner, stock exchange of Thailand

Procedia PDF Downloads 296
261 Safety and Feasibility of Distal Radial Balloon Aortic Valvuloplasty - The DR-BAV Study

Authors: Alexandru Achim, Tamás Szűcsborus, Viktor Sasi, Ferenc Nagy, Zoltán Jambrik, Attila Nemes, Albert Varga, Călin Homorodean, Olivier F. Bertrand, Zoltán Ruzsa

Abstract:

Aim: Our study aimed to establish the safety and technical success of distal radial access for balloon aortic valvuloplasty (DR-BAV). The secondary objective was to determine the effectiveness and appropriate role of DR-BAV over a half-year follow-up. Methods: Clinical and angiographic data from 32 consecutive patients with symptomatic aortic stenosis were evaluated in a prospective single-center pilot study. Between 2020 and 2021, the patients were treated using dual distal radial access with 6-10F compatible balloons. The efficacy endpoint was divided into technical success (successful valvuloplasty balloon inflation at the aortic valve and absence of intra- or periprocedural major complications), hemodynamic success (a reduction of the mean invasive gradient of >30%), and clinical success (an improvement of at least one clinical category in the NYHA classification). The safety endpoints were vascular complications (major and minor Valve Academic Research Consortium (VARC)-2 bleeding, a diminished or lost arterial pulse, or the presence of any pseudoaneurysm or arteriovenous fistula during clinical follow-up) and major adverse events, MAEs (the composite of death, stroke, myocardial infarction, and urgent major aortic valve replacement or implantation during the hospital stay and/or at one-month follow-up). Results: 32 patients (40% male, mean age 80 ± 8.5 years) with severe aortic valve stenosis were included in the study, and 4 patients were excluded. Technical success was achieved in all patients (100%). Hemodynamic success was achieved in 30 patients (93.75%). Invasive maximum and mean gradients were reduced from 73 ± 22 mm Hg and 49 ± 22 mm Hg to 49 ± 19 mm Hg and 20 ± 13 mm Hg, respectively (p < .001). Clinical success was achieved in 29 patients (90.6%). In total, no major adverse cardiac or cerebrovascular events and no vascular complications (according to VARC-2 criteria) occurred during the intervention. All-cause death at 6 months was 12%.
Conclusion: According to our study, dual distal radial artery access is a safe and effective option for balloon aortic valvuloplasty in patients with severe aortic valve stenosis and can be performed in all patients with sufficient lumen diameter. Future randomized studies are warranted to investigate whether this technique is superior to other approaches.

Keywords: mean invasive gradient, distal radial access for balloon aortic valvuloplasty (DR-BAV), aortic valve stenosis, pseudo-aneurysm, arteriovenous fistula, valve academic research consortium (VARC)-2

Procedia PDF Downloads 74
260 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset, electrical and thermal storage equipment, and an adsorption machine. The loads are the various apparatus used on the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also serve as a power source in an emergency. The insertion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand of the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads that cannot sustain an interrupted power supply over time. Consequently, a photovoltaic and thermal (PVT) panel is included in the plant and tasked with providing energy independently of potentially disruptive events such as engine malfunction or a scarce and unstable fuel supply. To manage the plant efficiently, an energy dispatch strategy was created to control the flow of energy between the power sources and the thermal and electric storage. In this article we elaborate models of the equipment and, from these models, extract parameters used to build load-dependent profiles of the prime movers and storage efficiencies. We show that, under reasonable assumptions, the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized 25% above the total electrical peak demand operates below the minimum acceptable load threshold 65% of the time. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors; in this way, the excess electric energy generated can be transformed into useful heat. The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on the two days of the year with the lowest and highest irradiation values, respectively. The results show that the renewable components of the plant can successfully sustain the prioritized loads, and only on a day with very low irradiation do they also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice builder and air conditioning energy consumption by 40%.
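The dump-load rule described above can be sketched as a simple dispatch function: the genset is never allowed below its minimum acceptable load, and any surplus over demand is diverted to resistive dump loads. The rated power, minimum-load fraction, and demand figures below are hypothetical illustrations, not the plant's actual parameters:

```python
def dispatch(demand_kw, genset_rated_kw, min_load_frac=0.3):
    # The genset never runs below its minimum acceptable load; any
    # excess over demand goes to resistive dump loads, recovered as heat.
    min_kw = genset_rated_kw * min_load_frac
    output = max(demand_kw, min_kw)
    dump_kw = output - demand_kw  # surplus turned into useful heat
    return output, dump_kw

# Hypothetical: a 50 kW genset (~25% above a 40 kW peak) under light load
output, dump = dispatch(demand_kw=10.0, genset_rated_kw=50.0)
print(output, dump)  # genset held at 15 kW; 5 kW diverted to dump loads
```

At full demand the dump term vanishes, which is the normal operating point the sizing aims for.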

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 157
259 Preserving the Cultural Values of the Mararoa River and Waipuna–Freshwater Springs, Southland New Zealand: An Integration of Traditional and Scientific Knowledge

Authors: Erine van Niekerk, Jason Holland

Abstract:

In Māori culture, water is considered the foundation of all life and has its own mana (spiritual power) and mauri (life force). Water classification for cultural values therefore includes categories such as waitapu (sacred water), waimanawa-whenua (water from under the land), and waipuna (freshwater springs), as well as the relationship between water quantity and quality and the relationship between surface water and groundwater. Particular rivers and lakes have special significance to iwi and hapū within their rohe (tribal areas). The Mararoa River, including its freshwater springs and wetlands, is an example of such an area. There is currently little information available about the sources, characteristics and behavior of these important water resources, and this study of the water quality of the Mararoa River and adjacent freshwater springs will provide valuable information for informed water management decisions. Under its water quality policy, the regional council of Southland, Environment Southland, is required to make changes to comply with the requirements of the new National Standards for Freshwater and to consult with Māori on strategies for decision-making. This requires an approach that combines traditional knowledge with scientific knowledge in the decision-making process. This study provides scientific data that, combined with the traditional values of this particular area, can be used for future decision-making on freshwater springs. Several parameters were tested in situ as well as in a laboratory. Parameters including temperature, salinity, electrical conductivity, Total Dissolved Solids, Total Kjeldahl Nitrogen, Total Phosphorus, Total Suspended Solids, and Escherichia coli show that the recorded values of all test parameters fall within the recommended ANZECC guidelines and Environment Southland standards and currently raise no concerns about the water quality of the springs and the river. However, the destruction of natural areas, particularly due to changes in farming practices, and the changes to water quality caused by the introduction of Didymosphenia geminata (didymo) mean that Māori have already lost many of their traditional mahinga kai (food sources). A major shift in land use from sheep farming to dairying in Southland is putting freshwater resources under pressure. It is therefore important to draw on traditional knowledge and spirituality alongside scientific knowledge to protect the waters of the Mararoa River and waipuna. This study hopes to contribute scientific knowledge to preserve the cultural values of these significant waters.

Keywords: cultural values, freshwater springs, Māori, water quality

Procedia PDF Downloads 257
258 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG) method based on recording of uterine electrical activity by electrodes attached to maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the difference of EHG signal features between term labor and preterm labor. Free access database was used with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term labor and 38 preterm labor were preprocessed with band-pass Butterworth filters of 0.08–4Hz. Then, EHG signal features were extracted, which comprised classical time domain description including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for recognition of two groups of recordings was provided. The results showed that mean frequency of preterm labor was significantly smaller than term labor (p < 0.05). 5 coefficients of AR model showed significant difference between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than early term. 
The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term labor and preterm labor groups. Any future work on classification should therefore focus on combining multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators of preterm labor.
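As an illustration of how two of the classical time-domain features named above (root mean square and zero-crossing number) can be computed, a minimal Python sketch follows; the function names and toy signal are illustrative assumptions, not material from the study, and the band-pass filtering step is omitted for brevity.

```python
import math

def rms(signal):
    """Root mean square amplitude, a classical time-domain EHG feature."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def zero_crossings(signal):
    """Count sign changes between consecutive samples."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)

# toy alternating signal: three sign changes, rms = sqrt(12.5)
sig = [3.0, -4.0, 3.0, -4.0]
```

In the study these features were extracted after 0.08–4 Hz filtering; here they are applied to the raw toy samples directly.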

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 545
257 Zoledronic Acid with Neoadjuvant Chemotherapy in Advanced Breast Cancer Prospective Study 2011–2014

Authors: S. Sakhri

Abstract:

Background: The use of Zoledronic acid (ZA) has an established place in the treatment of malignant tumors with a predilection for the skeleton (in particular, metastasis). Although the main target of Zoledronic acid is osteoclasts, preclinical data suggest that Zoledronic acid may have an antitumor effect on cells other than osteoclasts, including tumor cells. Antitumor activity has been demonstrated, including inhibition of tumor cell growth, induction of tumor cell apoptosis, inhibition of tumor cell adhesion and invasion, and anti-angiogenic effects. Methods: From 2012 to 2014, 438 patients meeting the inclusion criteria were included in this prospective study over a four-year period. Of all patients (N=438), 432 received neoadjuvant chemotherapy with Zoledronic acid. The primary end point was the pathologic complete response in advanced-stage breast cancer. The secondary end points were to evaluate the clinical response according to RECIST criteria, estimate bone density before and at the end of chemotherapy in women with locally advanced breast cancer, evaluate toxicity, and assess overall survival using Kaplan-Meier and log-rank tests. Result: The objective response rate was 97% after cycle 4 (C4), with 3% stabilization, and 99.3% after cycle 8 (C8), with 0.7% stabilization. The clinical complete response was 28% after C4 and 46.8% after C8; the pathologic complete response (pCR) rate was 40.13% according to the Sataloff classification. We observed that the pCR rate was highest in the Her2 group (luminal Her2 and Her2) and lowest in the triple negative group as classified by Sataloff. We found that the pCR was highest in the 35-50 years age group, at 53.17%.
Patients over 50 years came second with 27.7%, and the rate was lowest in women under 35 years, with a pCR of 19%, though this was not statistically significant. The pCR was also in favor of the menopausal group, at 51.4%, versus 48.55% for non-menopausal women. The average duration of overall survival was also significantly longer in the subgroup (Luminal-Her2, Her2) compared with triple negative: 47.18 months in the luminal group vs. 38.95 months in the triple negative group. We also observed in our study a difference in quality of life between C1 (at patient admission) and C8: general signs increased and the psychological state deteriorated at C1, whereas by C8 these general signs and mental status had improved, and this continued up to 12 and 24 months. Conclusion: The results of this study suggest that the addition of ZA to neoadjuvant chemotherapy has potential anti-cancer benefit in (Luminal-Her2, Her2) patients compared with triple negative, with or without menopausal status.

Keywords: HER2+, RH+, breast cancer, tyrosine kinase

Procedia PDF Downloads 192
256 Predicting Resistance of Commonly Used Antimicrobials in Urinary Tract Infections: A Decision Tree Analysis

Authors: Meera Tandan, Mohan Timilsina, Martin Cormican, Akke Vellinga

Abstract:

Background: In general practice, many infections are treated empirically without microbiological confirmation. Understanding the susceptibility of antimicrobials during empirical prescribing can help reduce inappropriate prescribing. This study aims to apply a prediction model using a decision tree approach to predict the antimicrobial resistance (AMR) of urinary tract infections (UTI) based on non-clinical features of patients over 65 years. Decision tree models are a novel approach to predicting AMR at an initial stage. Method: Data were extracted from the database of the microbiological laboratory of the University Hospitals Galway on all antimicrobial susceptibility testing (AST) of urine specimens from patients over the age of 65 from January 2011 to December 2014. The primary endpoint was resistance to common antimicrobials (nitrofurantoin, trimethoprim, ciprofloxacin, co-amoxiclav and amoxicillin) used to treat UTI. A classification and regression tree (CART) model was generated with the outcome ‘resistant infection’. The importance of each predictor (the number of previous samples, age, gender, location (nursing home, hospital, community) and causative agent) on antimicrobial resistance was estimated. Sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) were used to evaluate the performance of the model. Seventy-five percent (75%) of the data were used as a training set, and validation of the model was performed with the remaining 25% of the dataset. Results: A total of 9805 UTI patients over 65 years had a urine sample submitted for AST at least once over the four years. E. coli, Klebsiella and Proteus species were the most commonly identified pathogens among UTI patients without a catheter, whereas Serratia, Staphylococcus aureus and Enterobacter were common with a catheter.
The validated CART model shows slight differences in sensitivity, specificity, PPV and NPV between the models with and without the causative organisms. The sensitivity, specificity, PPV and NPV for the model with non-clinical predictors were between 74% and 88%, depending on the antimicrobial. Conclusion: The CART models developed using non-clinical predictors performed well when predicting antimicrobial resistance. These models predict which antimicrobial may be the most appropriate based on non-clinical factors. Additional CART models, prospective data collection and validation, and a larger number of non-clinical factors will improve model performance. The presented model provides an alternative approach to decision making on antimicrobial prescribing for UTIs in older patients.
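The core step a CART model repeats recursively is choosing the (feature, threshold) split that most reduces Gini impurity. A minimal sketch of that step, assuming binary 0/1 resistance labels; the toy rows (age, number of previous samples) are hypothetical, not the study's dataset:

```python
def gini(labels):
    """Binary Gini impurity: 1 - p^2 - (1-p)^2 = 2p(1-p)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(rows, labels):
    """Return (gain, feature_index, threshold) maximizing impurity reduction."""
    base = gini(labels)
    best = None
    for f in range(len(rows[0])):
        for t in sorted(set(r[f] for r in rows)):
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            gain = base - weighted
            if best is None or gain > best[0]:
                best = (gain, f, t)
    return best

# hypothetical rows: [age, number of previous samples]; label 1 = resistant
rows = [[70, 1], [82, 5], [66, 0], [90, 7], [75, 6], [68, 1]]
labels = [0, 1, 0, 1, 1, 0]
```

A full CART implementation applies `best_split` recursively to each resulting partition until a stopping criterion is met.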

Keywords: antimicrobial resistance, urinary tract infection, prediction, decision tree

Procedia PDF Downloads 234
255 Incidence of Breast Cancer and Enterococcus Infection: A Retrospective Analysis

Authors: Matthew Cardeiro, Amalia D. Ardeljan, Lexi Frankel, Dianela Prado Escobar, Catalina Molnar, Omar M. Rashid

Abstract:

Introduction: Enterococci comprise part of the natural flora of nearly all animals and are ubiquitous in food manufacturing and probiotics. However, their role in the microbiome remains controversial. The gut microbiome has been shown to play an important role in immunology and cancer. Further, recent data have suggested a relationship between gut microbiota and breast cancer: these studies have shown that the gut microbiome of patients with breast cancer differs from that of healthy patients. Research regarding enterococcus infection and its sequelae is limited, and further research is needed in order to understand the relationship between infection and cancer. Enterococcus may prevent the development of breast cancer (BC) through complex immunologic and microbiotic adaptations following an enterococcus infection. This study investigated the effect of enterococcus infection on the incidence of BC. Methods: A retrospective study (January 2010 to December 2019) was conducted using a Health Insurance Portability and Accountability Act (HIPAA) compliant national health insurance database. International Classification of Disease (ICD) 9th and 10th codes, Current Procedural Terminology (CPT) codes, and National Drug Codes were used to identify BC diagnosis and enterococcus infection. Patients were matched for age, sex, Charlson Comorbidity Index (CCI), antibiotic treatment, and region of residence. Chi-squared tests, logistic regression, and odds ratios were used to assess significance and estimate relative risk. Results: 671 out of 28,518 (2.35%) patients with a prior enterococcus infection and 1,459 out of 28,518 (5.12%) patients without enterococcus infection subsequently developed BC; the difference was statistically significant (p < 2.2x10⁻¹⁶). Logistic regression also indicated enterococcus infection was associated with a decreased incidence of BC (RR=0.60, 95% CI [0.57, 0.63]).
Treatment for enterococcus infection was analyzed and controlled for in both the infected and noninfected populations. 398 out of 11,523 (3.34%) patients with a prior enterococcus infection who were treated with antibiotics were compared to 624 out of 11,523 (5.41%) patients with no history of enterococcus infection (control) who received antibiotic treatment; both populations subsequently developed BC. Results remained statistically significant (p < 2.2x10⁻¹⁶), with a relative risk of 0.57 (95% CI [0.54, 0.60]). Conclusion & Discussion: This study shows a statistically significant correlation between enterococcus infection and a decreased incidence of breast cancer. Further exploration is needed to identify and understand not only the role of enterococcus in the microbiome but also the protective mechanism(s) and impact enterococcus infection may have on breast cancer development. Ultimately, further research is needed in order to understand the complex and intricate relationship between the microbiome, immunology, bacterial infections, and carcinogenesis.
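A relative risk of this kind reduces to a standard 2x2 calculation; the sketch below uses the usual log-scale normal approximation for the 95% CI and purely illustrative counts (the study's own estimates came from matched logistic regression, so its reported values would not be reproduced by this simple formula).

```python
import math

def relative_risk(a, n1, b, n2):
    """RR of an outcome in exposed (a of n1) vs unexposed (b of n2),
    with an approximate 95% CI computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# illustrative counts: 10/100 exposed vs 20/100 unexposed develop the outcome
rr, lo, hi = relative_risk(10, 100, 20, 100)
```

An RR below 1 with a CI excluding 1, as reported above, indicates an association with reduced incidence.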

Keywords: breast cancer, enterococcus, immunology, infection, microbiome

Procedia PDF Downloads 156
254 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Aneurysmal subarachnoid hemorrhage (aSAH) can lead to significant morbidity and mortality and has traditionally been hampered by poor methods to predict outcome; graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model. This study’s hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients, and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient’s images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurologic Surgeons scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 days (IQR = 8.5) after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials.
The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
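The AUROC used to compare these classifiers has a direct rank interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch of that computation, with hypothetical model scores rather than the study's data:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney formulation: the fraction of
    positive/negative pairs ranked correctly (ties count as half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical scores for 4 patients (label 1 = unfavorable outcome)
example = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 3 of 4 pairs correct
```

In practice a library routine would be used, but the pairwise definition makes clear why 0.5 corresponds to chance and 1.0 to perfect ranking.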

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 168
253 Spare Part Carbon Footprint Reduction with Reman Applications

Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş

Abstract:

Remanufacturing (reman) applications allow manufacturers to contribute to the circular economy and to introduce products of almost the same quality at lower cost and with a smaller environmental footprint. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on Life Cycle Assessment framed by ISO 14040 principles. The study aimed to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and DPF (diesel particulate filter) parts, respectively. Since the aim of Ford Motor Company and Ford OTOSAN is to achieve net zero based on Science-Based Targets (SBT) and the European Green Deal, which sets out to make the EU climate neutral by 2050, the effects of reman applications were researched. First, remanufacturing articles available in the literature were searched, focusing on the spare parts sold in high yearly volumes. Based on the review results regarding material composition and the emissions released during the initial production and remanufacturing phases, a base part was selected as a reference. The data of the selected base part from the literature were then used to make an approximate estimate of the carbon footprint reduction of the corresponding part used in Ford OTOSAN. The estimation model is based on the weight and material composition of the referenced paper's reman activity. As a result of this study, it was seen that remanufacturing applications are technically and environmentally feasible, since they have significant effects on reducing the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the total number of targeted products in yearly volume have been completed to a large extent.
Thus, based on the targeted parts whose research has been completed, and in line with the net zero targets of Ford Motor Company and Ford OTOSAN by 2050, if remanufacturing applications are preferred over conventional production methods, a significant amount of the greenhouse gas (GHG) emissions associated with spare parts used in vehicles can be avoided. Moreover, by reusing automotive components, remanufacturing helps to reduce the waste stream and causes less pollution than making products from raw materials.

Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability

Procedia PDF Downloads 61
252 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City

Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao

Abstract:

China is currently experiencing some of the most rapid urban rail transit expansion in the world. The purpose of this study is to finely model the factors influencing transit ridership at multiple time dimensions within transit stations’ pedestrian catchment areas (PCA) in Guangzhou, China. This study was based on multi-source spatial data, including smart card data, high-spatial-resolution images, points of interest (POIs), online real-estate data and building height data. Eight multiple linear regression models using the backward stepwise method and a Geographic Information System (GIS) were created at the station level. According to the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g., villas), second-level (e.g., communities) and third-level (e.g., urban villages). The study concluded that: (1) four factors (CBD dummy, number of feeder bus routes, number of entrances or exits, and years of station operation) were positively correlated with transit ridership, whereas the areas of green land use and water land use were negatively correlated. (2) The areas of education land use and of second-level and third-level residential land use were strongly associated with the average of morning-peak boarding and evening-peak alighting ridership, while the area of commercial land use and the average height of buildings were significantly positively associated with the average of morning-peak alighting and evening-peak boarding ridership. (3) The area of second-level residential land use was rarely correlated with ridership in the other regression models, likely because private car ownership remains high in Guangzhou: some residents living in the communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at non-peak times.
The area of third-level residential land use, such as urban villages, was highly positively correlated with ridership in all models, indicating that residents of third-level residential land use are the main passenger source of the Guangzhou Metro. (4) The diversity of land use was found to have a significant impact on passenger flow on weekends, but was unrelated to weekday ridership. The findings can be useful for station planning, management and policymaking.
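Each of the station-level multiple linear regressions above reduces to solving the normal equations (X'X)b = X'y for the coefficient vector. A minimal self-contained sketch, without the backward stepwise selection step and with made-up data (the variable names are illustrative, not the study's):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X starts with a leading 1 for the intercept."""
    k = len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

# toy data generated by ridership = 1 + 2 * feeder_bus_routes
X = [[1, 0], [1, 1], [1, 2]]
y = [1, 3, 5]
```

Backward stepwise selection then repeatedly refits such a model, dropping the least significant predictor until all remaining predictors are significant.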

Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-source spatial data, transit ridership

Procedia PDF Downloads 128
251 The Incidence of Prostate Cancer in Previous Infected E. Coli Population

Authors: Andreea Molnar, Amalia Ardeljan, Lexi Frankel, Marissa Dallara, Brittany Nagel, Omar Rashid

Abstract:

Background: Escherichia coli is a gram-negative, facultatively anaerobic bacterium that belongs to the family Enterobacteriaceae and resides in the intestinal tracts of individuals. E. coli has numerous strains grouped into serogroups and serotypes based on differences in the antigens in their cell walls (somatic, or “O” antigens) and flagella (“H” antigens). More than 700 serotypes of E. coli have been identified. Although most strains of E. coli are harmless, a few strains, such as E. coli O157:H7, which produces Shiga toxin, can cause intestinal infection with symptoms of severe abdominal cramps, bloody diarrhea, and vomiting. Infection with E. coli can lead to the development of systemic inflammation as the toxin exerts its effects. Chronic inflammation is now known to contribute to cancer development in several organs, including the prostate. The purpose of this study was to evaluate the correlation between E. coli infection and the incidence of prostate cancer. Methods: Data collected in this cohort study were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database to evaluate patients with E. coli infection and prostate cancer using International Classification of Disease codes (ICD-10 and ICD-9). Permission to use the database was granted by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Data analysis was conducted using standard statistical methods. Results: Between January 2010 and December 2019, the query resulted in 81,037 patients after matching in the infected and control groups, respectively. The two groups were matched by age range and CCI score. The incidence of prostate cancer was 2.07% (1,680 patients) in the E. coli group compared to 5.19% (4,206 patients) in the control group. The difference was statistically significant (p < 2.2x10⁻¹⁶) with an odds ratio of 0.53 (95% CI).
Based on the specific treatment for E. coli, the infected and control groups were matched again, resulting in 31,696 patients in each group. 827 out of 31,696 (2.60%) patients with a prior E. coli infection who were treated with antibiotics were compared to 1,634 out of 31,696 (5.15%) patients with no history of E. coli infection (control) who received antibiotic treatment; both populations subsequently developed prostate carcinoma. Results remained statistically significant (p < 2.2x10⁻¹⁶), odds ratio = 0.55 (95% CI 0.51-0.59). Conclusion: This retrospective study shows a statistically significant correlation between E. coli infection and a decreased incidence of prostate cancer. Further evaluation is needed in order to identify the impact of E. coli infection on prostate cancer development.

Keywords: E. coli, prostate cancer, protective, microbiology

Procedia PDF Downloads 193
250 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust/untrust to Zero Trust, in which essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols and security protocols are the cause of major attacks. With the explosion of cyber security threats such as viruses, worms, rootkits, malware and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol, routing protocols and security protocols. It thereby forms the basis for detection of attack classes and applies signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events.
The unsupervised learning algorithm applied to network audit data trails results in the detection of unknown intrusions. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
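The association-rule step can be sketched as a brute-force search for single-antecedent rules (A -> B) that meet minimum support and confidence thresholds; the event names below are hypothetical audit-trail labels, not identifiers from the proposed system.

```python
from itertools import combinations

def find_rules(transactions, min_support, min_confidence):
    """Keep a rule A -> B when the joint support of {A, B} and the
    confidence support(A, B) / support(A) both clear the thresholds."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for a, b in combinations(items, 2):
        for ant, con in ((a, b), (b, a)):
            s = support({ant, con})
            if s >= min_support and s / support({ant}) >= min_confidence:
                rules.append((ant, con, s))
    return rules

# hypothetical audit-trail events per connection record
events = [{"syn_flood", "port_scan"}, {"syn_flood", "port_scan"},
          {"syn_flood"}, {"port_scan"}]
```

Production systems use candidate-pruning algorithms such as Apriori rather than this exhaustive scan, but the support/confidence criterion is the same.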

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 212
249 Investigation of Resilient Circles in Local Community and Industry: Waju-Traditional Culture in Japan and Modern Technology Application

Authors: R. Ueda

Abstract:

Today, global society is seeking resilient partnerships among local organizations and individuals that realize multi-stakeholder relationships. Although this is proposed by the modern global framework of sustainable development, such affiliation can also be found in traditional local communities in Japan, where the traditional spirit tacitly persists in the modern context of disaster mitigation in society and the economy. This research therefore aims to clarify and analyze the implications for the global world through actual case studies. Regional and urban resilience is the ability of multiple stakeholders to cooperate flexibly and to adapt in response to changes in circumstances caused by disasters, but various conflicts affect the coordination of disaster relief measures. These conflicts arise not only from a lack of communication and an insufficient network, but also from the difficulty of jointly drawing common context from fragmented information. This is because of the weakness of modern engineering, which focuses on the maintenance and restoration of individual systems. Here, local ‘circles’ holistically include the local community and interact periodically. Focusing on examples of resilient organizations and wisdom created in communities, what can be seen throughout history is a virtuous cycle in which information and knowledge are structured, the context to be adapted becomes clear, and adaptation at a higher level is made possible, by which collaboration between organizations is deepened and expanded. The wisdom of solid and autonomous disaster prevention formed by the historical community called ‘Waju’ (an area surrounded by a circular embankment to protect the settlement from floods) lives on in the government efforts of today’s coastal industrial islands. Industrial companies there collaborate to create a circle that includes common evacuation space, road access improvement and infrastructure recovery.
Today, people there are adopting new interface technology: large-scale augmented reality (AR) for more than a hundred people displays detailed tsunami and liquefaction hazards. Shared experiences of the disaster space and circles of mutual discussion reinforce resilience, and the spirit of collaboration lies at the center of the circle. This writer believes that both self-governing human organizations and the societal implementation of technical systems are necessary. Infrastructure should be autonomously instituted by associations of companies and other entities in industrial areas, working closely with local governments. To develop advanced disaster prevention and multi-stakeholder collaboration, partnerships among industry, government, academia and citizens are important.

Keywords: industrial recovery, multi-stakeholders, traditional culture, user experience, Waju

Procedia PDF Downloads 97
248 Lipid-Coated Magnetic Nanoparticles for Frequency Triggered Drug Delivery

Authors: Yogita Patil-Sen

Abstract:

Superparamagnetic iron oxide nanoparticles (SPIONs) have become increasingly important materials for the separation of specific biomolecules, as drug delivery vehicles, as contrast agents for MRI, and for magnetic hyperthermia in cancer therapy. Hyperthermia is emerging as an alternative cancer treatment to conventional radio- and chemotherapy, which have harmful side effects. When subjected to an alternating magnetic field, the magnetic energy of SPIONs is converted into thermal energy due to the movement of the particles. The ability of SPIONs to generate heat and potentially kill cancerous cells, which are more susceptible than normal cells to temperatures higher than 41 °C, forms the basis of hyperthermia treatment. The amount of heat generated depends upon the magnetic properties of the SPIONs, which in turn are affected by properties such as their size and shape. One of the main problems associated with SPIONs is particle aggregation, which limits their employability in in vivo drug delivery applications and hyperthermia cancer treatments. Coating the iron oxide core with thermally responsive lipid-based nanostructures tends to overcome the issue of aggregation, improve biocompatibility and enhance drug loading efficiency. Herein we report the suitability of SPIONs and silica-coated core-shell SPIONs, further coated with various lipids, for drug delivery and magnetic hyperthermia applications. The synthesis of the nanoparticles was carried out using established methods reported in the literature, with some modifications. The nanoparticles were characterised using infrared spectroscopy (IR), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and a vibrating sample magnetometer (VSM). The heating ability of the nanoparticles was tested under an alternating magnetic field. The efficacy of the nanoparticles as drug carriers was also investigated.
The loading of an anticancer drug, doxorubicin, at 18 °C was measured over 48 hours using a UV-visible spectrophotometer. The drug release profile was obtained under thermal incubation at 37 °C and compared with that under the influence of an alternating magnetic field. The results suggest that the nanoparticles exhibit superparamagnetic behaviour, although coating reduces the magnetic properties of the particles. Both the uncoated and coated particles show good heating ability; again, it is observed that coating decreases the heating behaviour of the particles. However, the coated particles show higher drug loading efficiency than the uncoated particles, and the drug release is much more controlled under the alternating magnetic field. Thus, the results demonstrate that lipid-coated SPIONs have potential as drug delivery vehicles for magnetic hyperthermia-based cancer therapy.

Keywords: drug delivery, hyperthermia, lipids, superparamagnetic iron oxide nanoparticles (SPIONS)

Procedia PDF Downloads 211
247 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami

Authors: Alejandra G. Barbón, José Vila, Dania Vazquez

Abstract:

This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographical description through the incorporation of interactive visual designs aimed at enriching the library users’ experience. In an era of heightened awareness of the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an exciting initiative to revitalize the study of Artists’ Books collections by employing digital visual representations to decolonize the bibliographic records of some of the most unique materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, current bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks not only to enhance bibliographic practices but also to confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description.
By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.

Keywords: decolonizing bibliographic cataloging frameworks, digital visualization and information architecture platforms, collaborative curation and inclusivity for records management, interaction design and user experience for increased engagement and accessibility

Procedia PDF Downloads 52
246 Mechanical Properties of Carbon Fibre Reinforced Thermoplastic Composites Consisting of Recycled Carbon Fibres and Polyamide 6 Fibres

Authors: Mir Mohammad Badrul Hasan, Anwar Abdkader, Chokri Cherif

Abstract:

With the increasing demand for and use of carbon fibre reinforced composites (CFRC), the disposal of carbon fibres (CF) and end-of-life composite parts is gaining tremendous importance, especially with regard to sustainability. Furthermore, a number of processes (e.g., pyrolysis, solvolysis, etc.) are currently available to obtain recycled CF (rCF) from end-of-life CFRC. Since CF waste and rCF may neither be thermally degraded nor landfilled (EU Directive 1999/31/EC), profitable recycling and re-use concepts are urgently necessary. Currently, the market for materials based on rCF mainly consists of random mats (nonwovens) made from short fibres. The strengths of composites that can be achieved from injection-moulded components and from nonwovens lie between 200 and 404 MPa; such materials are of low performance and suitable only for non-structural applications such as aircraft and vehicle interiors. On the contrary, spinning rCF into yarn constructions offers good potential for higher CFRC material properties due to the high fibre orientation and compaction of the rCF. However, no investigation has yet been reported that directly compares the mechanical properties of thermoplastic CFRC manufactured from virgin CF filament yarn with those of CFRC made from spun yarns of staple rCF. There is a lack of understanding of the level of performance that can be achieved by composites made from hybrid yarns consisting of rCF and PA 6 fibres. Against this backdrop, extensive research work is being carried out at the Institute of Textile Machinery and High Performance Material Technology (ITM) on the development of new thermoplastic CFRC from hybrid yarns consisting of rCF. For this purpose, a process chain has been developed at the ITM, starting from fibre preparation through to the manufacture of hybrid yarns consisting of staple rCF blended with thermoplastic fibres. The objective is to apply such hybrid yarns to the manufacture of load-bearing textile-reinforced thermoplastic CFRCs.
In this paper, the development of innovative multi-component core-sheath hybrid yarn structures consisting of staple rCF and polyamide 6 (PA 6) on a DREF-3000 friction spinning machine is reported. Furthermore, unidirectional (UD) CFRCs are manufactured from the developed hybrid yarns, and mechanical properties of the composites, such as tensile and flexural properties, are analyzed. The results show that the UD composites manufactured from the developed hybrid yarns consisting of staple rCF possess approximately 80% of the tensile strength and Young's modulus of those produced from virgin CF filament yarn. The results show the great potential of the DREF-3000 friction spinning process for developing composites from rCF for high-performance applications.

Keywords: recycled carbon fibres, hybrid yarn, friction spinning, thermoplastic composite

Procedia PDF Downloads 239
245 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because the patients were either drug-naive (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with a fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes.
The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms underlying the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
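The phase-amplitude coupling measure described above can be sketched in Python. This is a minimal single-channel illustration using the normalized mean-vector-length estimator, which is one common choice for TGC; the abstract does not specify the exact estimator used, so treat this as an assumption, along with the synthetic signal used to exercise it:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_mvl(eeg, fs, theta=(4, 8), gamma=(30, 80)):
    """Theta-phase gamma-amplitude coupling via the normalized
    mean vector length: |mean(A_gamma * exp(i*phi_theta))| / mean(A_gamma),
    which lies in [0, 1]."""
    phase = np.angle(hilbert(bandpass(eeg, *theta, fs)))
    amp = np.abs(hilbert(bandpass(eeg, *gamma, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic check: gamma amplitude modulated by theta phase should give
# a larger TGC value than an unmodulated gamma of the same power.
fs = 250
t = np.arange(0, 10, 1 / fs)
theta_wave = np.sin(2 * np.pi * 6 * t)
coupled = theta_wave + (1 + theta_wave) * 0.3 * np.sin(2 * np.pi * 50 * t)
uncoupled = theta_wave + 0.3 * np.sin(2 * np.pi * 50 * t)
```

A higher value for the coupled signal reflects the kind of group difference the abstract reports at the electrode level.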

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 119
244 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular methods in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures are tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
The best accuracy results are obtained when LSTM layers are used in our schema, while dense layers yield the best balance between classes: the dense model correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with certain figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results for Italian than for Greek, given that the linguistic features employed are language-independent.
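The pairwise set-up above can be sketched with a toy example. The following trains a single dense logistic unit on similarity features between a reference embedding and two candidate outputs; the embeddings, the three cosine-similarity features, and the labels are all synthetic stand-ins, not the paper's actual features or network:

```python
import numpy as np

rng = np.random.default_rng(0)

def cos(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pair_features(ref, smt, nmt):
    """Language-independent pairwise features: similarities of each MT
    output to the reference, and of the two outputs to each other."""
    return np.array([cos(ref, smt), cos(ref, nmt), cos(smt, nmt)])

def make_pair(nmt_better):
    """Synthetic pair: the 'better' output lies close to the reference."""
    ref = rng.normal(size=16)
    good = ref + rng.normal(0.0, 0.3, 16)  # close to the reference
    bad = rng.normal(size=16)              # unrelated
    if nmt_better:
        return pair_features(ref, bad, good), 1.0  # label 1 = NMT better
    return pair_features(ref, good, bad), 0.0      # label 0 = SMT better

X, y = zip(*(make_pair(rng.random() < 0.5) for _ in range(400)))
X, y = np.array(X), np.array(y)

# One dense logistic unit trained by gradient descent, a minimal stand-in
# for the paper's multi-layer network.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

acc = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == (y > 0.5)).mean()
```

With separable features like these, the unit learns to weight the reference-to-output similarities with opposite signs, which is the essence of the pairwise comparison the framework performs.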

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 110
243 Modern Pilgrimage Narratives and India’s Heterogeneity

Authors: Alan Johnson

Abstract:

This paper focuses on modern pilgrimage narratives about sites affiliated with Indian religious expressions located both within and outside India. The paper uses a multidisciplinary approach to examine poetry, personal essays, and online attestations of pilgrimage to illustrate how non-religious ideas coexist with outwardly religious ones, exemplifying a characteristically Indian form of syncretism that pre-dates Western ideas of pluralism. The paper argues that the syncretism on display in these modern creative works refutes the current exclusionary vision of India as a primordially Hindu-nationalist realm. A crucial premise of this argument is that the narrative’s intrinsic heteroglossia, so evident in India’s historically rich variety of stories and symbols, belies this reactionary version of Hindu nationalism. Equally important to this argument, therefore, is the vibrancy of Hindu sites outside India, such as the Batu Caves temple complex in Kuala Lumpur, Malaysia. The literary texts examined in this paper include, first, Arun Kolatkar’s famous 1976 collection of poems, titled Jejuri, about a visit to the pilgrimage site of the same name in Maharashtra. Here, the modern, secularized visitor from Bombay (Mumbai) contemplates the effect of the temple complex on himself and on the other, more worshipful visitors. Kolatkar’s modernist poems reflect the narrator’s typically modern-Indian ambivalence toward holy ruins, for although they do not evoke a conventionally religious feeling in him, they nevertheless possess an aura of timelessness that questions the narrator’s time-conscious sensibility. The paper bookends Kolatkar’s Jejuri with considerations of an early-twentieth-century text, online accounts by visitors to the Batu Caves, and a recent, more conventional Hindu account of pilgrimage.
For example, the pioneering graphic artist Mukul Chandra Dey published My Pilgrimages to Ajanta and Bagh in 1917, devoting an entire chapter to the life of the Buddha as a means of illustrating the layering of stories that is a characteristic feature of sacred sites in India. In a different but still syncretic register, Jawaharlal Nehru, India’s first prime minister and a committed secularist, proffers India’s ancient pilgrimage network as a template for national unity in his classic 1946 autobiography The Discovery of India. Narrative is the perfect vehicle for highlighting this layering of sensibilities, for a single text can juxtapose the pilgrim-narrator’s description with that of a far older pilgrimage, a juxtaposition that establishes an imaginative connection between otherwise distant actors, and between them and the reader.

Keywords: India, literature, narrative, syncretism

Procedia PDF Downloads 137
242 A Review of Gas Hydrate Rock Physics Models

Authors: Hemin Yuan, Yun Wang, Xiangchun Wang

Abstract:

Gas hydrate is drawing attention because its reserves worldwide are enormous, almost twice the conventional hydrocarbon reserves, making it a potential alternative source of energy. It is widely distributed in permafrost and on continental ocean shelves, and many countries have launched national programs for investigating gas hydrate. Gas hydrate is mainly explored through seismic methods, which include bottom simulating reflectors (BSR), amplitude blanking, and polarity reversal. These seismic methods are effective at finding gas hydrate formations but usually carry large uncertainties when applied to invert the micro-scale petrophysical properties of the formations, due to a lack of constraints. Rock physics modeling links the micro-scale structures of rocks to their macro-scale elastic properties and can provide effective constraints for the seismic methods. A number of rock physics models have been proposed for gas hydrate modeling, addressing different mechanisms and applications. However, these models are generally not well classified, and it is confusing to determine the appropriate model for a specific study. Moreover, since the modeling usually involves multiple models and steps, it is difficult to determine the source of uncertainties. To solve these problems, we summarize the developed models and methods and classify the models into four categories according to the hydrate micro-scale morphology in the sediments, the purpose of reservoir characterization, the stage of gas hydrate generation, and the lithology type of the hosting sediments. Some sub-categories may overlap, but they have different priorities. Besides, we also analyze the priorities of the different models, point out their shortcomings, and explain the appropriate application scenarios.
Moreover, by comparing the models, we summarize a general workflow of the modeling procedure, which includes forming the rock matrix, generating the dry rock frame, mixing the pore fluids, and finally substituting the fluid into the rock frame. These procedures have been widely used in various gas hydrate modeling studies and have been confirmed to be effective. We also analyze the potential sources of uncertainty in each modeling step, which enables us to clearly recognize the potential uncertainties in the modeling. In the end, we explicate the general problems of the current models, including the influences of pressure and temperature, pore geometry, hydrate morphology, and the change in rock structure during gas hydrate dissociation and re-generation. We also point out that attenuation is severely affected by gas hydrate in sediments and may work as an indicator to map gas hydrate concentration. Our work classifies the rock physics models of gas hydrate into different categories, generalizes the modeling workflow, and analyzes the modeling uncertainties and potential problems, which can facilitate the rock physics characterization of gas hydrate-bearing sediments and provide hints for future studies.
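The final fluid-substitution step of the workflow above is commonly carried out with Gassmann's equation, with pore fluids mixed by the Wood (Reuss) average. The sketch below uses illustrative moduli (in GPa) that are not tied to any particular dataset or to the specific models reviewed here:

```python
def wood_fluid_mix(k_fluids, saturations):
    """Wood (Reuss) average: effective bulk modulus of a pore-fluid mixture."""
    return 1.0 / sum(s / k for k, s in zip(k_fluids, saturations))

def gassmann_k_sat(k_dry, k_mineral, k_fluid, phi):
    """Gassmann's equation: saturated bulk modulus from the dry rock frame."""
    beta = 1.0 - k_dry / k_mineral
    return k_dry + beta**2 / (
        phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral**2
    )

# Illustrative inputs (GPa): quartz-dominated matrix, soft sediment frame
k_mineral, k_dry, phi = 36.0, 8.0, 0.35
k_brine, k_gas = 2.25, 0.10

# Full brine saturation vs. 10% free gas: even a little gas softens the
# effective pore fluid and lowers the saturated bulk modulus markedly.
k_sat_brine = gassmann_k_sat(k_dry, k_mineral, k_brine, phi)
k_sat_gassy = gassmann_k_sat(
    k_dry, k_mineral, wood_fluid_mix([k_brine, k_gas], [0.9, 0.1]), phi
)
```

Whether hydrate is treated as part of the frame (stiffening `k_dry`) or as part of the pore fill (entering the fluid mix) is exactly the morphology choice that the classification above distinguishes.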

Keywords: gas hydrate, rock physics model, modeling classification, hydrate morphology

Procedia PDF Downloads 137
241 Effects of the In-Situ Upgrading Project in Afghanistan: A Case Study on the Formally and Informally Developed Areas in Kabul

Authors: Maisam Rafiee, Chikashi Deguchi, Akio Odake, Minoru Matsui, Takanori Sata

Abstract:

Cities in Afghanistan have been rapidly urbanized; however, many parts of these cities have been developed with no detailed land use plan or infrastructure. In other words, they have been developed informally, without any government leadership. The new government started the In-situ Upgrading Project in Kabul in 2002 to upgrade roads, the water supply network, and the surface water drainage system on the existing street layout, with the financial support of international agencies. This project is an appropriate emergency improvement of daily life, but not an essential improvement of living conditions and infrastructure, because the life expectancies of the improved facilities are as short as 10–15 years and residents cannot obtain land tenure in the unplanned areas. The Land Readjustment System (LRS) practised in Japan has the advantage of rearranging irregularly shaped land lots and developing infrastructure effectively. This study investigates the effects of the In-situ Upgrading Project on private investment, land prices, and residents’ satisfaction with the projects in Kart-e-Char, where properties are registered, and in Afshar-e-Silo Lot 1, where properties are unregistered. These project areas are located 5 km and 7 km, respectively, from the CBD of Kabul. This study discusses whether LRS should be applied to the unplanned area, based on questionnaire and interview responses from experts experienced in the In-situ Upgrading Project who also have knowledge of LRS. The analysis results reveal that, in Kart-e-Char, considerable private investment has been made in the construction of medium-rise (five- to nine-story) buildings for commercial and residential purposes.
Land values have also increased incrementally since the project, and residents are generally satisfied with the road pavement, drainage systems, and water supply, but dissatisfied with the poor delivery of electricity and the lack of public facilities (e.g., parks and sports facilities). In Afshar-e-Silo Lot 1, basic infrastructure such as paved roads and surface water drainage has been improved by the project. After the project, a few four- and five-story residential buildings were built with very low levels of private investment, but significant increases in land prices were not evident. The residents are satisfied with the contribution ratio, the drainage system, and the small increase in land prices, but there is still no drinking water supply system or tenure security; moreover, the paved roads are substandard, and public facilities such as parks, sports facilities, mosques, and schools are lacking. The results of the questionnaire and interviews with the four engineers highlight the problems that remain to be solved in the unplanned areas if LRS is applied: namely, differences in land use, the types and conditions of the infrastructure still to be installed by the project, and the time needed for positive consensus building among the residents, given the project's budget limitations.

Keywords: in-situ upgrading, Kabul city, land readjustment, land value, planned area, private investment, residents' satisfaction, unplanned area

Procedia PDF Downloads 181
240 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights

Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy

Abstract:

The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches, such as public or private law. On the other hand, law is a national phenomenon. The law of one nation and the legal system applied on the territory of another nation may be completely different. People who are experts in a particular field of law in one country may have insufficient expertise in the law of another country. Today, in addition to the local nature of law, international and even supranational legal rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools may well serve to produce very meaningful results in terms of human rights. However, the algorithms to be used should not be developed by computer experts alone; they also need the contribution of people who are familiar with the law, values, judicial decisions, and even the social and political culture of the society for which they will provide solutions. Otherwise, even if an algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field within the discipline of law. More and more AI systems are being applied in the field of law, with examples such as the prediction of judicial decisions, text summarization, decision support systems, and the classification of documents. Algorithms for legal systems employing AI tools, especially for the prediction of judicial decisions and for decision support, have the capacity to produce automatic decisions in place of judges.
When the judge is removed from this equation, artificial intelligence-made law, created by an intelligent algorithm on its own, emerges, whether the domain is national or international law. In this work, the aim is to make a general analysis of this new topic. Such an analysis needs both a literature survey and perspectives from computer experts' and lawyers' points of view. In some societies, the use of prediction or decision support systems may be useful for integrating international human rights safeguards. In this case, artificial law can serve to produce more comprehensive and more human rights-protective results than written or living law. For non-democratic countries, it may even be argued that direct decisions and artificial intelligence-made law would be more protective than a decision "support" system. Since the values of law are directed towards "human happiness or well-being", AI algorithms must always be capable of serving this purpose and be based on the rule of law, the principle of justice and equality, and the protection of human rights.

Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems

Procedia PDF Downloads 55
239 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards protein-coding regions alone; therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient, sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum likelihood estimation (MLE) was used for parameter estimation by taking the log-likelihood of the six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the receiver operating characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms.
The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over the first three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718, for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an area under the ROC curve of 0.97, indicating an improved predictive ability. The PNRI identified both protein-coding and non-coding regions with an F1 score of 0.970, an accuracy of 0.969, a sensitivity of 0.966, and a specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed Protein-coding and Non-coding Region Identifier model efficiently identified protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
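The core of the pipeline above, a sigmoid/logistic model fit by maximum likelihood followed by dynamic thresholding over an ROC sweep, can be sketched as follows. The six features and labels here are synthetic stand-ins, and the use of Youden's J statistic to pick the cut-off is an assumption, since the abstract does not state the thresholding criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for the six sequence-derived features and region labels
n, d = 600, 6
X = rng.normal(size=(n, d))
w_true = np.array([0.043, 0.519, 0.715, 0.878, 1.157, 2.575])  # echoes the reported vector
y = (rng.random(n) < sigmoid(X @ w_true)).astype(float)

# Maximum likelihood estimation: gradient ascent on the log-likelihood
w = np.zeros(d)
for _ in range(500):
    w += 0.5 * X.T @ (y - sigmoid(X @ w)) / n

# Dynamic thresholding: sweep cut-offs along the ROC curve and keep the
# one that maximizes Youden's J = TPR - FPR
p = sigmoid(X @ w)
best_t, best_j = 0.5, -1.0
for t in np.linspace(0.05, 0.95, 19):
    pred = p >= t
    tpr = (pred & (y == 1)).sum() / (y == 1).sum()
    fpr = (pred & (y == 0)).sum() / (y == 0).sum()
    if tpr - fpr > best_j:
        best_j, best_t = tpr - fpr, t

accuracy = ((p >= best_t) == (y == 1)).mean()
```

Sweeping the threshold rather than fixing it at 0.5 is what lets the classifier trade sensitivity against specificity, which is how the F1, sensitivity, and specificity figures above can all be reported from one fitted model.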

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 44