Search results for: interval features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4659

3309 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last few decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electric technologies into vehicle design, with the primary hope of improving mobility and road safety while reducing transportation’s environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates CAVs’ fuel consumption and air pollutants (CO, PM, and NOx) and develops linear regression models to predict CAVs’ environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. Pollutant amounts and fuel consumption were obtained for the interval 2010 to 2021 from the macro simulation. Four linear regression models were then proposed to predict the amounts of CO, NOx, and PM pollutants and fuel consumption in the future. The results highlighted that CAVs’ pollutants and fuel consumption have a significant correlation with the income, age, and race of CAV customers. Furthermore, the reliability of the four statistical models was compared with that of the macro simulation model outputs for the year 2030. The error for the three pollutants and fuel consumption obtained by the statistical models in SPSS was less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in Maryland.
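
The paper does not state its regression equations; purely as an illustration, a minimal sketch of how one of the four models could be fitted in Python (with hypothetical yearly values for income, age, and simulated CO output, not the study's data) looks like this:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical yearly observations (2010-2021) from a macro simulation:
# mean customer income (k$), mean age, and simulated CO emissions (tons).
income = np.array([52, 54, 55, 57, 58, 60, 61, 63, 64, 66, 67, 69])
age    = np.array([41, 41, 42, 42, 43, 43, 44, 44, 45, 45, 46, 46])
co     = np.array([980, 965, 950, 940, 925, 910, 900, 885, 875, 860, 850, 840])

X = sm.add_constant(np.column_stack([income, age]))  # predictors plus intercept
model = sm.OLS(co, X).fit()                          # one of four analogous models
print(model.summary())                               # coefficients, R^2, p-values
print(model.predict([[1.0, 75.0, 48.0]]))            # e.g., a forecast for 2030
```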

Keywords: connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models

Procedia PDF Downloads 445
3308 Induction of Labor Using Misoprostol with or without Mifepristone in Intrauterine Death: A Randomized Controlled Study

Authors: Ajay Agrawal, Pritha Basnet, Achala Thakur, Pappu Rizal, Rubina Rai

Abstract:

Context: Rapid expulsion of the fetus in intrauterine fetal death (IUFD) is usually requested even without medical grounds, so an efficient, safe method for induction of labor (IOL) is required. Objective: To determine whether pre-treatment with mifepristone followed by IOL with misoprostol is more efficacious in late IUFD. Methods: We conducted a randomized controlled trial in 100 patients. Group A women received a single oral dose of 200 mg mifepristone, followed by induction with vaginal misoprostol after 24 hours. Group B women were induced with vaginal misoprostol only. In each group, up to five doses of misoprostol were given four-hourly. If the first cycle was unsuccessful, a second course of misoprostol was started after a 12-hour break. The primary outcomes were induction-to-delivery time and vaginal delivery within 24 hours. The secondary outcomes were the need for oxytocin and complications. Results: Maternal age, parity, and period of gestation were comparable between groups. The number of misoprostol doses needed in Group A was significantly lower than in Group B. A Mann-Whitney U test showed that women in Group A had a significantly earlier onset of labor; however, the difference in total induction-to-delivery interval was not significant. In Group A, 85.7% delivered within 24 hours of the first dose of misoprostol, while in Group B 70% delivered within 24 hours (p = 0.07). More women in Group B required oxytocin. Conclusion: Pretreatment with mifepristone before IOL following late IUFD is an effective and safe regimen. It appears to shorten the interval from induction to onset of labor.
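
The individual timing data are not reported in the abstract; purely as an illustration of the test named above, a Mann-Whitney U comparison of induction-to-onset times (with made-up values, not the study's data) could be run as follows:

```python
from scipy.stats import mannwhitneyu

# Hypothetical induction-to-onset-of-labor times in hours (not the study's data)
group_a = [4.5, 5.0, 6.2, 5.5, 7.1, 4.8, 6.0, 5.2]    # mifepristone + misoprostol
group_b = [8.0, 9.5, 7.8, 10.2, 8.8, 9.0, 7.5, 11.0]  # misoprostol only

u_stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")  # small p suggests earlier onset in group A
```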

Keywords: induction of labor, intrauterine fetal death, mifepristone, misoprostol

Procedia PDF Downloads 377
3307 Evaluation of the Effects of Some Medicinal Plants Extracts on Seed

Authors: Areej Ali Baeshen, Hanaa Kamal Galal, Batoul Mohamed Abdullatif

Abstract:

In the present study, aqueous extracts of Eruca sativa, Mentha peprinta, and Coriandrum sativum were prepared from 25 g and 50 g of fresh leaves in 100 ml of double-distilled water, in addition to the crude extract (100%). The final concentrations were thus 100%, 50%, and 25%, with 0% as control. The extracts were tested for their allelopathic effects on seed germination and other growth parameters of Phaseolus vulgaris. Laboratory experiments were conducted in sterilized Petri dishes, with 5- and 10-day intervals for seed germination and 24 h, 48 h, and 72 h for radicle length, at an average temperature of 25°C. The effects of the different extract concentrations were compared to distilled water (0%). The 25% and 50% aqueous extracts of Eruca sativa and Coriandrum sativum caused a pronounced inhibitory effect on seed germination and the tested growth parameters of the receptor plant, and the inhibitory effect was proportional to the concentration of the extract. Mentha peprinta extracts, on the other hand, increased the germination percentage and other growth parameters of Phaseolus vulgaris. Hence, it could be concluded that the aqueous extracts of Eruca sativa and Coriandrum sativum might contain water-soluble allelochemicals that inhibit seed germination and reduce radicle length in Phaseolus vulgaris, whereas Mentha peprinta has beneficial allelopathic effects on the receptor plant.

Keywords: Phaseolus vulgaris, Eruca sativa, Mentha peperinta, Coriandrum sativum, medicinal plants, seed germination

Procedia PDF Downloads 406
3306 Normalized Difference Vegetation Index and Normalized Difference Chlorophyll Changes with Different Irrigation Levels on Silage Corn

Authors: Cenk Aksit, Suleyman Kodal, Yusuf Ersoy Yildirim

Abstract:

The Normalized Difference Vegetation Index (NDVI) is a widely used index that provides reference information, such as plant health status and vegetation density in a given area, from the electromagnetic radiation reflected by the plant surface. The chlorophyll index, in turn, provides reference information about chlorophyll density in the plant from electromagnetic reflectance at specific wavelengths; chlorophyll concentration is higher in healthy plants and decreases as plant health declines. This study aimed to determine the changes in the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Chlorophyll Index (NDCI) of silage corn irrigated with a subsurface drip irrigation system under different irrigation levels. With a 5-day irrigation interval, daily potential plant water consumption values were accumulated, and the calculated amount was applied as irrigation water to the full irrigation treatment and three deficit irrigation levels. The changes in NDVI and NDCI of silage corn under the different irrigation levels were then determined. NDVI values changed according to the amount of irrigation water applied, and the highest NDVI value was reached in the treatment receiving the most water. Likewise, the chlorophyll value was observed to decrease in proportion to the amount of irrigation water as the plant approached harvest.
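
The abstract does not spell out the index formulas; as a hedged illustration, the standard NDVI definition and one common NDCI formulation (using red-edge and red reflectance bands; the sample values are invented) can be computed as follows:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Standard NDVI: (NIR - Red) / (NIR + Red), ranging over [-1, 1]."""
    return (nir - red) / (nir + red)

def ndci(red_edge: np.ndarray, red: np.ndarray) -> np.ndarray:
    """One common NDCI formulation using ~708 nm (red edge) and ~665 nm (red)."""
    return (red_edge - red) / (red_edge + red)

# Hypothetical mean reflectances for two irrigation treatments (not the study's data)
nir = np.array([0.55, 0.42]); red = np.array([0.08, 0.15]); red_edge = np.array([0.30, 0.22])
print("NDVI:", ndvi(nir, red))        # higher value expected for the better-watered plot
print("NDCI:", ndci(red_edge, red))
```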

Keywords: NDVI, NDCI, sub-surface drip irrigation, silage corn, deficit irrigation

Procedia PDF Downloads 97
3305 Assessing Circularity Potentials and Customer Education to Drive Ecologically and Economically Effective Materials Design for Circular Economy - A Case Study

Authors: Mateusz Wielopolski, Asia Guerreschi

Abstract:

Circular economy, as the counterargument to the ‘take-make-dispose’ linear model, is an approach that includes a variety of schools of thought on environmental, economic, and social sustainability. This, in turn, leads to a variety of strategies, and often to confusion when it comes to choosing the right one to make the circular transition as effective as possible. Due to the close interplay of circular product design, business model, and social responsibility, companies often struggle to develop strategies that comply with all three triple-bottom-line criteria. Hence, to transition to circularity effectively, product design approaches must become more inclusive. In a case study conducted with the University of Bayreuth and ISPO, we correlated aspects of material choice in product design, labeling, and technological innovation with customer preferences and education about specific material and technology features. The study revealed the attributes of consumers’ environmental awareness that translate directly into an increase in purchasing power, primarily connected with individual preferences regarding sports activity and technical knowledge. Based on this outcome, we constituted a product development approach that incorporates consumers’ individual preferences for sustainable product features as well as their awareness of materials and technology. It allows targeted customer education campaigns to be deployed to raise the willingness to pay for sustainability. Next, we implemented the customer preference and education analysis in a circularity assessment tool that takes into account inherent company assets as well as subjective parameters like customer awareness. The outcome is a detailed but not cumbersome scoring system, which provides guidance on material and technology choices for circular product design while considering the business model and the communication strategy toward attentive customers. By including customer knowledge and complying with corresponding labels, companies develop more effective circular design strategies while simultaneously increasing customers’ trust and loyalty.
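
The paper's actual scoring system is not described in detail; a minimal sketch of what such a weighted circularity score could look like follows, where the criteria and weights are purely hypothetical:

```python
# Hypothetical circularity scoring: each criterion rated 0-10, weighted sum scaled to 0-100.
CRITERIA_WEIGHTS = {                 # assumed criteria, not the paper's actual ones
    "material_recyclability": 0.30,
    "design_for_disassembly": 0.25,
    "customer_awareness":     0.25,  # subjective parameter, e.g., from surveys
    "label_compliance":       0.20,
}

def circularity_score(ratings: dict[str, float]) -> float:
    """Weighted 0-100 score from per-criterion ratings on a 0-10 scale."""
    return 10 * sum(CRITERIA_WEIGHTS[k] * ratings[k] for k in CRITERIA_WEIGHTS)

print(circularity_score({"material_recyclability": 7, "design_for_disassembly": 5,
                         "customer_awareness": 8, "label_compliance": 9}))  # -> 71.5
```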

Keywords: circularity, sustainability, product design, material choice, education, awareness, willingness to pay

Procedia PDF Downloads 200
3304 Guillain Barre Syndrome in Children

Authors: A. Erragh, K. Amanzoui, M. Elharit, H. Salem, M. Ababneh, K. Elfakhr, S. Kalouch, A. Chlilek

Abstract:

Guillain-Barre syndrome (GBS) is the most common form of acute polyradiculoneuritis (PRNA). It is a medical emergency in pediatrics that requires rapid diagnosis and immediate assessment of severity criteria for the implementation of appropriate treatment. We conducted a retrospective, descriptive study of 24 patients under the age of 18 who presented with GBS between September 2017 and July 2021 and were hospitalized in the multipurpose pediatric intensive care unit of the Abderrahim El Harouchi children's hospital in Casablanca. The average age was 7.91 years, with extremes of 18 months and 14 years, and a male predominance of 75%. After a prodromal event, most often infectious (80%), and a free interval of 12 days on average, two types of motor disorder appeared: hypo- or areflexic flaccid paralysis of the lower limbs (45.8%) or hypo- or areflexic flaccid quadriplegia (54.2%). During GBS, the most formidable complication is respiratory distress, which can occur at any time; in our study, respiratory impairment was observed in 70.8% of cases. In addition, other signs of severity, such as swallowing disorders (75%) and dysautonomic disorders (8.33%), were also observed, which justified intensive care for all of our patients. Invasive ventilation was necessary in 76.5% of cases, and specific immunoglobulin-based treatments were administered to all patients. Nevertheless, the death rate remains high (25%) and is mainly due to complications related to hospitalization. Guillain-Barre syndrome is therefore a pediatric emergency that requires rapid diagnosis and immediate assessment of severity criteria for the implementation of appropriate treatment.

Keywords: guillain barre syndrome, emergency, children, medical

Procedia PDF Downloads 71
3303 A Political-Economic Analysis of Next Generation EU Recovery Fund

Authors: Fernando Martín-Espejo, Christophe Crombez

Abstract:

This paper presents a political-economic analysis of the reforms introduced at the EU level during the coronavirus crisis, with special emphasis on the recovery fund Next Generation EU (NGEU). It also introduces a spatial model to evaluate whether the governance features of the recovery fund can be framed within the community method. In particular, by evaluating the brake clause in the NGEU legislation, the paper theoretically analyses the political and legislative implications of introducing flexibility clauses into the EU decision-making process.
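
The abstract does not specify the model's form; as an illustration of the general idea of a spatial model of decision-making, here is a minimal one-dimensional sketch in which the ideal points, status quo, and voting threshold are all invented:

```python
# One-dimensional spatial voting sketch: each actor prefers outcomes closer to its
# ideal point; a proposal passes if enough actors prefer it to the status quo.
ideal_points = {"A": 0.2, "B": 0.35, "C": 0.5, "D": 0.7, "E": 0.9}  # hypothetical actors
status_quo = 0.6
threshold = 4 / 5   # e.g., a supermajority rule; a 'brake clause' effectively raises this

def passes(proposal: float) -> bool:
    in_favor = sum(abs(x - proposal) < abs(x - status_quo) for x in ideal_points.values())
    return in_favor / len(ideal_points) >= threshold

print(passes(0.45))  # True/False depending on how many actors gain relative to 0.6
```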

Keywords: EU, legislative procedures, spatial model, coronavirus

Procedia PDF Downloads 177
3302 Obsession of Time and the New Musical Ontologies. The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe

Authors: Dutica Luminita

Abstract:

For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a conclusive impact on the settlement of new musical ontologies. Summarizing the most important achievements of contemporary composition techniques, her vision of the microform, presented in The Concert for Daniel Kientzy, saxophone and orchestra, transcends linear, unidirectional time in favour of a flexible, multi-vectorial discourse with spiral developments, in which the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the improvisational tint, the free rhythm, the micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects. Hence, too, an atmosphere of ritual, purged of its primary connotations and reprojected into a wonderfully spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, for the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino, and alto); in Part III, Daniel Kientzy performs on two saxophones concomitantly. Myriam Marbe’s score contains deeply spiritualized music, full of archetypal symbols, whose drama suggests a genuinely cinematographic movement.

Keywords: archetype, chronogenesis, concert, multiphonics

Procedia PDF Downloads 543
3301 Diagnostic Efficacy and Usefulness of Digital Breast Tomosynthesis (DBT) in Evaluation of Breast Microcalcifications as a Pre-Procedural Study for Stereotactic Biopsy

Authors: Okhee Woo, Hye Seon Shin

Abstract:

Purpose: To investigate the diagnostic power of digital breast tomosynthesis (DBT) in the evaluation of breast microcalcifications, and its usefulness as a pre-procedural study for stereotactic biopsy, in comparison with full-field digital mammography (FFDM) and FFDM plus magnification images (FFDM+MAG). Methods and Materials: An IRB-approved retrospective observer performance study of DBT, FFDM, and FFDM+MAG was conducted. Image quality was rated for lesion clarity on a 5-point scale (1, very indistinct; 2, indistinct; 3, fair; 4, clear; 5, very clear) and compared by Wilcoxon test. Diagnostic power was compared by diagnostic values and AUC with 95% confidence intervals. Additionally, the procedural reports of the biopsies were analysed for patient positioning and adequacy of instruments. Results: DBT showed higher lesion clarity (median 5, interquartile range 4-5) than FFDM (3, 2-4, p-value < 0.0001) and no statistically significant difference from FFDM+MAG (4, 4-5, p-value = 0.3345). Diagnostic sensitivity and specificity were 86.4% and 92.5% for DBT; 70.4% and 66.7% for FFDM; and 93.8% and 89.6% for FFDM+MAG. The AUCs of DBT (0.88) and FFDM+MAG (0.89) were larger than that of FFDM (0.59, p-values < 0.0001), but there was no statistically significant difference between DBT and FFDM+MAG (p-value = 0.878). In two cases with DBT, a petite needle could be appropriately prepared; in three other cases without DBT, patient repositioning was needed. Conclusion: DBT showed better image quality and diagnostic values than FFDM, and was equivalent to FFDM+MAG, in the evaluation of breast microcalcifications. Evaluation with DBT as a pre-procedural study for breast stereotactic biopsy can lead to more accurate localization and successful biopsy, and may waive the need for additional magnification images.
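
As a hedged illustration of the metrics reported above (with made-up labels and reader scores, not the study's data), sensitivity, specificity, and AUC can be computed like this:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical ground truth (1 = malignant), one reader's binary calls, and confidence scores
y_true  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred  = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.6, 0.2, 0.1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))   # true positive rate
print("specificity:", tn / (tn + fp))   # true negative rate
print("AUC:", roc_auc_score(y_true, y_score))
```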

Keywords: DBT, breast cancer, stereotactic biopsy, mammography

Procedia PDF Downloads 304
3300 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention, and understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on how information flow and knowledge sharing within the network shape patent impact. The study first obtained the patent numbers of 446,890 granted US AI patents for the years 2002-2020 from the United States Patent and Trademark Office’s artificial intelligence patent database. Specific information on these patents was then acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. From these, the study constructs a coupled network comprising 59,379 artificial intelligence patents, using the scientific papers co-cited in the patents' backward citations: nodes represent patents, and if two patents reference the same scientific papers, a connection is established between them, serving as an edge. Structural characteristics such as degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is used as a quantitative metric of patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The findings indicate that degree centrality, betweenness centrality, and closeness centrality all exhibit inverted U-shaped relationships with patent influence: as these centrality metrics increase, patent influence initially rises, but once they pass a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may be detrimental. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. In certain situations, data sharing and integration can contribute to innovation; policymakers can therefore promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
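
As an illustrative sketch of the measures and regression named above (run on a toy graph with simulated citation counts, not the 59,379-patent network), the centralities and an inverted-U test via a quadratic term could be computed as follows:

```python
import networkx as nx
import numpy as np
import statsmodels.api as sm

# Toy patent coupling network: nodes stand in for patents, edges for shared co-cited papers
G = nx.karate_club_graph()            # placeholder graph for illustration
deg = nx.degree_centrality(G)
bet = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)

# Hypothetical citation counts as the patent-impact outcome
rng = np.random.default_rng(0)
citations = rng.poisson(5, G.number_of_nodes())

x = np.array([deg[n] for n in G.nodes])
X = sm.add_constant(np.column_stack([x, x**2]))   # quadratic term tests the inverted U
model = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
print(model.params)   # a negative coefficient on x**2 would indicate an inverted U
```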

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 54
3299 Influence of Vibration Amplitude on Reaction Time and Drowsiness Level

Authors: Mohd A. Azizan, Mohd Z. Zali

Abstract:

It is well established that exposure to vibration has adverse effects on human health, comfort, and performance. However, there is little quantitative knowledge of performance combined with drowsiness level during vibration exposure. This paper reports a study investigating the influence of vibration amplitude on seated occupants' reaction time and drowsiness level. Eighteen male volunteers were recruited for the experiment. Before it commenced, the total transmitted acceleration measured at the interfaces between the seat pan, the seatback, and the human body was adjusted to 0.2 m/s² r.m.s. and 0.4 m/s² r.m.s. for each volunteer. Seated volunteers were exposed to Gaussian random vibration in the 1-15 Hz frequency band at the two amplitudes (low and medium) for 20 minutes on separate days. For drowsiness measurement, volunteers completed a 10-minute PVT test before and after vibration exposure and rated their subjective drowsiness on the Karolinska Sleepiness Scale (KSS) before vibration, at 5-minute intervals, and after the 20 minutes of vibration exposure. Strong evidence of drowsiness was found, as there was a significant increase in reaction time and number of lapses following exposure to vibration in both conditions; the effect was more apparent at the medium vibration amplitude. A steady increase in drowsiness level was also observed in the KSS of all volunteers, although no significant KSS differences were found between the low and medium vibration amplitudes. It is concluded that exposure to vibration has an adverse effect on human alertness, more pronounced at higher vibration amplitude. Taken together, these findings suggest a role for vibration in promoting drowsiness, especially at higher vibration amplitudes.
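
The r.m.s. amplitudes quoted above are the standard root-mean-square of the acceleration signal; a minimal sketch of how such a level could be computed from a measured time series (synthetic data and an assumed sampling rate here) is:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 200                                     # assumed sampling rate, Hz
accel = 0.2 * rng.standard_normal(fs * 60)   # synthetic 1-min Gaussian acceleration, m/s^2

rms = np.sqrt(np.mean(accel**2))             # root-mean-square amplitude
print(f"r.m.s. acceleration: {rms:.3f} m/s^2")  # ~0.2 m/s^2 for this synthetic signal
```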

Keywords: drowsiness, human vibration, karolinska sleepiness scale, psychomotor vigilance test

Procedia PDF Downloads 282
3298 Characterization of the Immune Response of Inactivated RVF Vaccine: A Comparative Study in Sheep and Goats as Experimental Model

Authors: Ahmed Zaghawa

Abstract:

Rift Valley Fever (RVF) is an arboviral disease that affects many animal species, causing significant economic losses in livestock; it is also transmitted to humans and is therefore of public health concern. Vaccination programs are the backbone of control of this disease. The goal of this study was to apply a new approach to evaluating the inactivated RVF vaccine developed in Egypt. In this study, the RVF vaccine was evaluated in young puppies and compared with sheep. The findings showed that young puppies were susceptible to RVF virus infection and developed a strong antibody response to two doses of the RVF vaccine given at a two-week interval. Their neutralization indices reached the protective level on the 7th day at 1.35 and remained elevated at 14, 21, and 28 days at 1.35, 1.43, and 1.20, respectively, in comparison to the control group. In sheep, the neutralization indices reached the protective level on the 7th day at 1.10 and remained at a high titer at 14, 21, and 28 days, with NI values of 1.20, 1.50, and 1.50, respectively. This new approach to comparing the immune response in puppies and sheep via SNT showed a high response in both species: the neutralization indices in young puppies at different periods after RVF vaccination were 1.08±0.03, 1.23±0.04, 1.30±0.03, and 1.45±0.02 at 7, 14, 21, and 28 days post-vaccination, respectively, while a nearly similar immune response was noticed in sheep, with NI values of 1.15±0.02, 1.27±0.02, 1.42±0.05, and 1.55±0.03 at 7, 14, 21, and 28 days post-vaccination, respectively. In conclusion, young puppies are similar to sheep in developing antibodies after vaccination with the RVF vaccine and can replace sheep for evaluating the efficacy of the RVF vaccine. Further studies are needed to assess more recent methods for evaluating the inactivated RVF vaccine.

Keywords: immune response, puppies, RVF, sheep, vaccine

Procedia PDF Downloads 176
3297 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform that automatically extracts the features needed for detecting facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm for face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained detail, to break the symmetry of the produced information; in effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work further by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching gold labels too soon, which drives the model to over-fitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer, together with a specified soft margin; this acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, i.e., assigning more weight to classes that lie close to one another (the “hard labels to learn”). In doing so, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, the proposed optimizer addresses the weak convergence of the Adam optimizer on non-convex problems: it works with an alternative gradient-updating procedure using an exponentially weighted moving average for faster convergence, and exploits weight decay to reduce the learning rate drastically near optima so as to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets, with 93.30% on FER-2013 (a 16% improvement over the first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on CK+, and show that it provides top performance relative to networks that require much larger training datasets.
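
The paper's exact loss is not given in the abstract; as a hedged sketch of the general soft-margin idea (an additive margin subtracted from the target logit before cross-entropy, with the margin treated as a tunable parameter rather than the paper's dynamic formulation), one could write:

```python
import torch
import torch.nn.functional as F

def soft_margin_cross_entropy(logits: torch.Tensor, targets: torch.Tensor,
                              margin: float = 0.35) -> torch.Tensor:
    """Cross-entropy with an additive margin on the target logit.

    Subtracting `margin` from the correct class's logit forces the model to keep
    working even after the target class already wins, pushing dissimilar
    embeddings further apart. This is a generic sketch, not the paper's exact
    Dynamic Soft-Margin SoftMax.
    """
    adjusted = logits.clone()
    adjusted[torch.arange(len(targets)), targets] -= margin
    return F.cross_entropy(adjusted, targets)

logits = torch.randn(8, 7, requires_grad=True)   # batch of 8, 7 emotion classes
targets = torch.randint(0, 7, (8,))
loss = soft_margin_cross_entropy(logits, targets)
loss.backward()
print(loss.item())
```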

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 74
3296 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach

Authors: Jiaxin Chen

Abstract:

Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. It has therefore been proposed that translation could be studied within a broader framework of constrained language, and that simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices is captured by word-form entropy and POS-form entropy, and a comparison is made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for reliable comparison among studies on different language pairs. As for the data of the present study, one established corpus (CLOB) and two self-compiled corpora represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens; genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and POS-form entropy are calculated as indicators of lexical and syntactic complexity, and ANOVA tests are conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy compared to non-constrained written English. The similarities and divergences between the two constrained varieties may indicate which constraints are shared by, and which are peculiar to, each variety.
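
As a minimal sketch of the word-form entropy measure described above (Shannon entropy over token frequencies; the sample sentences are invented), the calculation looks like this:

```python
import math
from collections import Counter

def word_form_entropy(tokens: list[str]) -> float:
    """Shannon entropy (bits) of the word-form distribution: H = -sum p*log2(p)."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

constrained = "the cat saw the cat and the cat ran".split()
native = "the tabby watched a sparrow then bolted across the lawn".split()
print(word_form_entropy(constrained))  # lower entropy: fewer, more repeated forms
print(word_form_entropy(native))       # higher entropy: more diverse forms
```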

Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification

Procedia PDF Downloads 94
3295 Photocatalytic Eco-Active Ceramic Slabs to Abate Air Pollution under LED Light

Authors: Claudia L. Bianchi, Giuseppina Cerrato, Federico Galli, Federica Minozzi, Valentino Capucci

Abstract:

At the beginning of industrial production, porcelain gres tiles were considered a merely technical material, aesthetically not very attractive. Today, thanks to new industrial production methods, both the properties and the beauty of these materials fully meet market demands; in particular, the possibility of preparing slabs of large size is the new frontier of building materials. Besides these noteworthy architectural features, new surface properties have been introduced in the latest generation of these materials. In particular, the deposition of TiO₂ transforms traditional ceramic into a photocatalytic, eco-active material able to reduce polluting molecules present in air and water, to eliminate bacteria, and to reduce surface dirt thanks to its self-cleaning property. The problem with photocatalytic materials is that a UV light source is necessary to activate the oxidation processes on the surface of the material; these processes are inexorably switched off when the material is illuminated by LED lights and, even more so, in darkness. First, a thorough study was needed to modify the existing plants so as to deposit the photocatalyst very evenly, which was achieved thanks to the advent of digital printing and the development of a custom-made ink that stabilizes powdered TiO₂ in its formulation. In addition, the commercial TiO₂ used for the traditional photocatalytic coating was doped with metals in order to activate it in the visible region as well, and thus in the presence of sunlight or LED light. Thanks to this active coating, the ceramic slabs are able to purify air, eliminating odors and VOCs, and can be cleaned with very mild detergents owing to the self-cleaning properties conferred by the TiO₂ present at the ceramic surface. Moreover, the presence of dopant metals (patent WO2016157155) also allows the material to act as an antibacterial in the dark, eliminating one of the negative features of photocatalytic building materials that has so far limited their use on a large scale. Considering that we are constantly in contact with bacteria, some of which are dangerous to health, the active tiles are 99.99% effective against all bacteria, from the most common, such as Escherichia coli, to the most dangerous, such as methicillin-resistant Staphylococcus aureus (MRSA). DIGITALIFE project LIFE13 ENV/IT/000140 – award for best project of October 2017.

Keywords: Ag-doped microsized TiO₂, eco-active ceramic, photocatalysis, digital coating

Procedia PDF Downloads 229
3294 Recognition of Tifinagh Characters with Missing Parts Using Neural Network

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we present an algorithm for reconstructing Tifinagh characters from incomplete 2D scans. The algorithm is based on the correlation between a lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.
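
The paper's network architecture is not specified in the abstract; as a hedged sketch of the recognition stage, a small multilayer perceptron over flattened character images (trained on synthetic placeholder data, not a real Tifinagh database) might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: 500 synthetic 16x16 binary "character" images, 33 Tifinagh classes
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 16 * 16)).astype(float)  # flattened images
y = rng.integers(0, 33, size=500)                          # class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)                      # recognition stage after feature extraction
print("accuracy:", clf.score(X_test, y_test))  # near chance here, since the data is random
```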

Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN

Procedia PDF Downloads 334
3293 An Integration of Genetic Algorithm and Particle Swarm Optimization to Forecast Transport Energy Demand

Authors: N. R. Badurally Adam, S. R. Monebhurrun, M. Z. Dauhoo, A. Khoodaruth

Abstract:

Transport energy demand is vital to the economic growth of any country, and globalisation and rising standards of living play an important role in it. Recently, transport energy demand in Mauritius has increased significantly, leading to overuse of natural resources and thereby contributing to global warming. Forecasting transport energy demand is therefore important for controlling and managing it. In this paper, we develop a model to predict transport energy demand. The model is based on a system of five stochastic differential equations (SDEs) with five endogenous variables: fuel price, population, gross domestic product (GDP), number of vehicles, and transport energy demand, and three exogenous parameters: crude birth rate, crude death rate, and labour force. An interval of seven years is used to avoid any distortion of the results, since Mauritius is a developing country. Data available for Mauritius from 2003 to 2009 are used to obtain the values of the design variables by applying a genetic algorithm. The model is verified and validated for 2010 to 2012 by substituting the coefficient values obtained by the GA into the model and using particle swarm optimisation (PSO) to predict the values of the exogenous parameters. This model will help control transport energy demand in Mauritius, which will in turn steer Mauritius towards a pollution-free country and decrease its dependence on fossil fuels.
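
Neither algorithm's settings are given in the abstract; as a generic illustration of the PSO component (minimizing a toy squared-error objective; all constants and bounds are assumptions), a minimal particle swarm could be written as:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser using the standard velocity/position update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, objective(gbest)

# Toy stand-in for the exogenous-parameter fitting: recover (2.0, -1.0, 0.5)
target = np.array([2.0, -1.0, 0.5])
print(pso(lambda p: np.sum((p - target) ** 2), dim=3))  # -> approx. target, ~0 error
```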

Keywords: genetic algorithm, modeling, particle swarm optimization, stochastic differential equations, transport energy demand

Procedia PDF Downloads 369
3292 Wear Resistance and Mechanical Performance of Ultra-High Molecular Weight Polyethylene Influenced by Temperature Change

Authors: Juan Carlos Baena, Zhongxiao Peng

Abstract:

Ultra-high molecular weight polyethylene (UHMWPE) is used extensively in industrial and biomedical fields. The slippery nature of UHMWPE makes it suitable for surface-bearing applications; however, operational conditions limit lubrication efficiency, inducing boundary and mixed lubrication in the tribological system. The lack of lubrication in a tribological system intensifies friction, contact stress and, consequently, operating temperature. As temperature increases, the material’s mechanical properties are affected and the lifespan of the component is reduced. How the mechanical properties and wear performance of UHMWPE change when the temperature increases has not been clearly identified, yet this understanding is important for predicting and further improving the lifespan of these components. This study evaluates the effects of temperature variation in the range of 20 °C to 60 °C on the hardness and wear resistance of UHMWPE. A reduction in both hardness and wear resistance was observed with increasing temperature: the wear rate increased by 94.8% when the temperature changed from 20 °C to 50 °C. Although hardness is regarded as an indicator of wear resistance, this study found that wear resistance decreased more rapidly than hardness with the temperature increase, evidencing low stability of this material over a short temperature interval. The reduction in hardness was reflected in the plastic deformation and abrasion intensity, resulting in a significant increase in wear rate.

Keywords: hardness, surface bearing, tribological system, UHMWPE, wear

Procedia PDF Downloads 271
3291 About the Number of Fundamental Physical Interactions

Authors: Andrey Angorsky

Abstract:

This article examines the possible number of fundamental physical interactions. The theory of similarity, applied to a dimensionless quantity, the damping ratio, serves as the instrument of analysis. A structure with the features of the Higgs field emerges from a non-commutative expression for this ratio. An experimentally testable supposition about the nature of dark energy is put forward.

Keywords: damping ratio, dark energy, dimensionless quantity, fundamental physical interactions, Higgs field, non-commutative expression

Procedia PDF Downloads 140
3290 Google Translate: AI Application

Authors: Shaima Almalhan, Lubna Shukri, Miriam Talal, Safaa Teskieh

Abstract:

Since artificial intelligence is a rapidly evolving topic that has had a significant impact on technical growth and innovation, this paper examines people's awareness, use, and engagement with the Google Translate application. Quantitative and qualitative research was conducted to see how familiar users are with the app and its features. The findings revealed that consumers have a high level of confidence in the application, benefit considerably from this sort of innovation, and find that it makes communication much more convenient.

Keywords: artificial intelligence, google translate, speech recognition, language translation, camera translation, speech to text, text to speech

Procedia PDF Downloads 154
3289 Design of Broadband Power Divider for 3G and 4G Applications

Authors: A. M. El-Akhdar, A. M. El-Tager, H. M. El-Hennawy

Abstract:

This paper presents a broadband power divider with an equal power division ratio. Two sections of transmission line transformers based on coupled microstrip lines are applied to obtain broadband performance, and a design methodology is proposed for the novel structure. A prototype is designed and simulated to operate in the band from 2.1 to 3.8 GHz to fulfill the requirements of 3G and 4G applications. The proposed structure features reduced size and fewer resistors than other conventional techniques. Simulation verifies the proposed idea and design methodology.
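
For orientation only: the paper's coupled-line design is not detailed in the abstract, but the classic single-section equal-split Wilkinson divider that such broadband designs improve upon has closed-form textbook element values, sketched below.

```python
import math

def wilkinson_equal_split(z0: float = 50.0) -> dict:
    """Classic single-section equal-split Wilkinson divider (textbook values).

    The quarter-wave branch impedance is Z0*sqrt(2) and the isolation resistor
    is 2*Z0. This is the narrowband baseline, not the paper's two-section
    coupled-line structure.
    """
    return {"branch_impedance_ohm": z0 * math.sqrt(2),  # ~70.7 ohm for Z0 = 50 ohm
            "isolation_resistor_ohm": 2 * z0}           # 100 ohm for Z0 = 50 ohm

print(wilkinson_equal_split())
```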

Keywords: power dividers, coupled lines, microstrip, 4G applications

Procedia PDF Downloads 477
3288 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, a semantic framework is needed. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which considers touching and non-touching relations between the manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g., top, bottom) and dynamic spatial relations (e.g., moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a huge matrix with thirty rows and a massive set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used for manipulation actions, which ultimately involve two hands. Here, we would like to extend this approach to a whole-body action descriptor and create a conjoint activity representation structure. For this purpose, we carried out a statistical analysis to modify the current eSEC by summarizing it while preserving its features, introducing a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve these, we computed the importance of each matrix row statistically, to see whether a particular row can be removed while all manipulations remain distinguishable from each other, as sketched below. We also examined which semantic spatial relations can be merged without compromising the unity of the predefined manipulation actions. By performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a considerable impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform for integration with body-limb descriptors and markedly increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
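
As a hedged illustration of the row-importance analysis (using toy descriptors far smaller than the real thirty-row eSEC matrices), one can test which rows may be dropped while every pair of actions remains distinguishable:

```python
import numpy as np

# Toy eSEC-like descriptors: rows are relation channels, columns are event stages
actions = {
    "push": np.array([[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]),
    "pick": np.array([[0, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]]),
    "drop": np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1]]),
}

def removable_rows(descriptors: dict) -> list[int]:
    """Rows whose removal still leaves every pair of actions distinguishable."""
    names = list(descriptors)
    n_rows = descriptors[names[0]].shape[0]
    removable = []
    for r in range(n_rows):
        rows = [i for i in range(n_rows) if i != r]
        reduced = [descriptors[n][rows].tobytes() for n in names]
        if len(set(reduced)) == len(names):   # all actions still distinct
            removable.append(r)
    return removable

print(removable_rows(actions))  # -> [1, 2]: rows whose removal loses no distinctions
```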

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 126
3287 Morphological and Elements Constituent Effects of Allelopathic Activity

Authors: Areej Ali Baeshen

Abstract:

Allelopathy is a complex phenomenon whose inhibitory or stimulatory effects may be determined by the concentration of allelochemicals present in an extract. In the present study, the allelopathic effects of Eruca sativa, Mentha peperina, and Coriandrum sativum water extracts, prepared by grinding fresh leaves of the medicinal plants in distilled water, were tested at three concentrations taken from the crude extracts (100%, 50%, and 25%, with 0% as control) on seed germination and some growth parameters of Zea mays. The experiment was conducted in sterilized Petri dishes under natural laboratory conditions at a temperature of 25°C, with 24 h, 48 h, 72 h, 96 h, and 120 h intervals for seed germination and 24 h, 48 h, and 72 h for radicle length. The effects of the different concentrations of aqueous extract were compared to distilled water (control, 0%). In maize, the germination percentage was suppressed when plants were treated with the 100% extracts; however, the 50% and 25% extracts of M. peperina increased the germination percentage by four times relative to the control. Moreover, the 50% and 25% extracts of M. peperina and the 50% extract of C. sativum increased maize radicle and plumule length by three to four times that of the control. The plumule fresh and dry weights revealed that the 100% and 50% M. peperina and the 100% and 50% E. sativa water extracts gave almost the same plumule fresh weight as the control plants. The most interesting finding is the reduction in harmful salts and TDS, which could be a beneficial factor in the saline soils of Saudi Arabia.

Keywords: Zea mays, Eruca sativa, Mentha peperina, Coriandrum sativum, medicinal plants, allelochemicals, aqueous extract

Procedia PDF Downloads 297
3286 Association of Alcohol Consumption with Active Tuberculosis in Taiwanese Adults: A Nationwide Population-Based Cohort Study

Authors: Yung-Feng Yen, Yun-Ju Lai

Abstract:

Background: Animal studies have shown that alcohol exposure may cause immunosuppression and increase susceptibility to tuberculosis (TB) infection. However, the temporal relationship between alcohol consumption and subsequent TB development remains unclear. This nationwide population-based cohort study aimed to investigate the impact of alcohol exposure on TB development in Taiwanese adults. Methods: We included 46,196 adult participants from three rounds (2001, 2005, 2009) of the Taiwan National Health Interview Survey. Alcohol consumption was classified as heavy, regular, social, or never; heavy alcohol consumption was defined as intoxication at least once per week. Alcohol consumption and other covariates were collected by in-person interviews at baseline. Incident cases of active TB were identified from the National Health Insurance database. Multivariate logistic regression was used to estimate the association between alcohol consumption and active TB, with adjustment for age, sex, smoking, socioeconomic status, and other covariates. Results: A total of 279 new cases of active TB occurred during the study follow-up period. Heavy (adjusted odds ratio [AOR], 5.21; 95% confidence interval [CI], 2.41-11.26) and regular alcohol use (AOR, 1.73; 95% CI, 1.26-2.38) were associated with higher risks of incident TB after adjusting for subject demographics and comorbidities. Moreover, a strong dose-response effect was observed between increasing alcohol consumption and incident TB (AOR, 2.26; 95% CI, 1.59-3.21; P < .001). Conclusion: Heavy and regular alcohol consumption were associated with higher risks of active TB. Future TB control programs should consider strategies to lower the overall level of alcohol consumption to reduce the TB disease burden.
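
As a hedged sketch of the kind of adjusted-odds-ratio estimation described above (run on simulated data with a deliberately simplified covariate set, not the survey data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
heavy = rng.integers(0, 2, n)            # 1 = heavy drinker (simulated exposure)
age = rng.normal(45, 12, n)              # simplified covariate
logit = -6 + 1.6 * heavy + 0.02 * age    # true log-odds: OR for heavy = e^1.6 ~ 5
tb = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([heavy, age]))
fit = sm.Logit(tb, X).fit(disp=0)
or_, ci = np.exp(fit.params[1]), np.exp(fit.conf_int()[1])
print(f"adjusted OR for heavy use: {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```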

Keywords: alcohol consumption, tuberculosis, risk factor, cohort study

Procedia PDF Downloads 226
3285 Artificial Intelligence and Development: The Missing Link

Authors: Driss Kettani

Abstract:

ICT4D actors are naturally tempted to include AI in the range of enabling technologies and tools that could support and boost the development process, and to refer to this as AI4D. But doing so assumes that AI complies with the very specific features of the ICT4D context, including, among others, affordability, relevance, openness, and ownership. Clearly, none of these is currently fulfilled, and the enthusiastic posture that AI4D is a natural part of ICT4D is not grounded and, to a certain extent, does not serve the purpose of technology for development at all. In the context of development, it is important to emphasize and prioritize ICT4D in national digital transformation strategies, instead of borrowing "trendy" waves of the IT industry that are motivated by business considerations, with no specific consideration for development.

Keywords: AI, ICT4D, technology for development, position paper

Procedia PDF Downloads 88
3284 Modification of the Risk for Incident Cancer with Changes in the Metabolic Syndrome Status: A Prospective Cohort Study in Taiwan

Authors: Yung-Feng Yen, Yun-Ju Lai

Abstract:

Background: Metabolic syndrome (MetS) is reversible; however, the effect of changes in MetS status on the risk of incident cancer has not been extensively studied. We aimed to investigate the effects of changes in MetS status on incident cancer risk. Methods: This prospective, longitudinal study used data from Taiwan’s MJ cohort of 157,915 adults recruited from 2002 to 2016 who had repeated MetS measurements 5.2 (±3.5) years apart and were followed up for new-onset cancer over 8.2 (±4.5) years. New diagnoses of incident cancer were confirmed from the participants' histopathological reports. Participants were classified as MetS-free (n=119,331), MetS-developed (n=14,272), MetS-recovered (n=7,914), or MetS-persistent (n=16,398). We used the Fine-Gray sub-distribution method, with death as the competing risk, to determine the association between MetS changes and the risk of incident cancer. Results: During the follow-up period, 7,486 individuals developed cancer. Compared with the MetS-free group, MetS-persistent individuals had a significantly higher risk of incident cancer (adjusted hazard ratio [aHR], 1.10; 95% confidence interval [CI], 1.03-1.18). Considering the effect of dynamic changes in MetS status on the risk of specific cancer types, MetS persistence was significantly associated with a higher risk of incident colorectal, kidney, pancreatic, uterine, and thyroid cancer. The risk of kidney, uterine, and thyroid cancer in MetS-recovered individuals was higher than in those who remained MetS-free but lower than in MetS-persistent individuals. Conclusions: Persistent MetS is associated with a higher risk of incident cancer, and recovery from MetS may reduce this risk. These findings suggest that it is imperative for individuals with pre-existing MetS to seek treatment for this condition to reduce their cancer risk.

Keywords: metabolic syndrome change, cancer, risk factor, cohort study

Procedia PDF Downloads 78
3283 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea

Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar

Abstract:

This study aims to investigate the amount of energy (economic wave energy potential) that can be obtained from existing wave energy converters in the high-wave-energy region of the southwestern Black Sea, and to assess their performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested, layered SWAN wave model, version 41.01AB, forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at a 2-hour time interval was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. Annual sea-state characteristic matrices were calculated over the 31 years for five different depths along a line perpendicular to the coastline. Using the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, probability distribution tables of mean wave period (or wave energy period) and significant wave height were calculated. Then, from the relationship between these distribution tables and the present wave climate, the energy that the wave energy converter systems could produce at each depth was determined, as sketched below. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of different depths on the energy converter systems is presented. Oceantic at 50, 75, and 100 m depths and Oyster at 5 and 25 m depths showed the best performance. Within the 31-year period, 1998 was the most dynamic year and 1989 the least.
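
The standard calculation behind such estimates multiplies the converter's power matrix by the sea-state occurrence matrix; a minimal sketch (with tiny made-up matrices, not the study's) is:

```python
import numpy as np

# Rows: significant wave height bins; columns: energy period bins (toy example)
power_matrix_kw = np.array([[ 20,  60,  90],     # converter output per sea state, kW
                            [ 80, 180, 250],
                            [150, 320, 410]])
occurrence_hours = np.array([[900, 700, 300],    # hours per year each sea state occurs
                             [400, 500, 200],    # (remaining hours assumed calm)
                             [100, 150,  50]])

annual_energy_mwh = np.sum(power_matrix_kw * occurrence_hours) / 1000.0
print(f"annual production: {annual_energy_mwh:.0f} MWh")  # -> 342 MWh for this toy case
```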

Keywords: annual power production, Black Sea, efficiency, power production performance, wave energy converter

Procedia PDF Downloads 133
3282 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices

Authors: Zhengming Wu

Abstract:

NanoFrazor lithography systems were developed as the first true alternative or extension to standard maskless nanolithography methods like electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL): a heatable, ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact from the probe on a thermally responsive resist generates the high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features at below 10 nm resolution has been demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as for improving the performance of existing device concepts. The application range of this nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning at below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. To this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features. Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no development or other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged-particle beam.

Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits

Procedia PDF Downloads 72
3281 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentrations with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. This study therefore suggests a new method to identify red tide algal bloom concentrations from images of the Geostationary Ocean Color Imager (GOCI), which represent the marine environment around Korea. The method employed GOCI images of water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, together with observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database to capture the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the algae concentration from the extracted features. For training the deep learning model, a backpropagation learning strategy was developed. The established method was tested and compared against the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
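
The network's actual architecture is not given in the abstract; a hedged sketch of the described CNN-feature-extractor-plus-ANN-regressor pattern (three radiance bands plus three weather scalars; all layer sizes are assumptions) could look like this:

```python
import torch
import torch.nn as nn

class AlgaeNet(nn.Module):
    """CNN features from a 3-band radiance patch + ANN head taking weather inputs."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                 # feature extraction from imagery
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.ann = nn.Sequential(                 # regression from features + weather
            nn.Linear(32 + 3, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, patch: torch.Tensor, weather: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(patch).flatten(1)        # (batch, 32)
        return self.ann(torch.cat([feats, weather], dim=1))  # mg/m^3 estimate

model = AlgaeNet()
patch = torch.randn(4, 3, 32, 32)    # 4 patches, bands at 443/490/660 nm
weather = torch.randn(4, 3)          # humidity, temperature, pressure (normalized)
print(model(patch, weather).shape)   # torch.Size([4, 1])
```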

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 183
3280 Memory and Narrative Rereading before and after One Week

Authors: Abigail M. Csik, Gabriel A. Radvansky

Abstract:

As people read through event-based narratives, they construct an event model that captures information about the characters, goals, location, time, and causality. Memory for such narratives is represented at three levels: the surface form, the textbase, and the event model. Rereading has been shown to decrease surface form memory while, at the same time, increasing textbase and event model memories. More generally, distributed practice has consistently shown memory benefits over massed practice for different types of materials, including texts. However, little research has investigated the distributed practice of narratives at different inter-study intervals and its effects on these three levels of memory. Recent work in our lab has indicated that there may be dramatic changes in patterns of forgetting around one week, which may affect the three levels of memory. The present experiment aimed to determine the effects of rereading on the three levels of memory as a function of whether the texts were reread before or after one week. Participants (N = 42) read a set of stories, re-read them either before or after one week (with an inter-study interval of three, seven, or fourteen days), and then took a recognition test, from which the three levels of representation were derived. Signal detection results from this study reveal differential patterns at the three levels as a function of whether the narratives were re-read before or after one week. In particular, an ANOVA revealed that surface form memory was lower (p = .08), while textbase (p = .02) and event model memory (p = .04) were greater, when narratives were re-read 14 days later compared with 3 days later. These results have implications for which types of memory benefit from distributed practice at various inter-study intervals.

Keywords: memory, event cognition, distributed practice, consolidation

Procedia PDF Downloads 225