Search results for: SPM (Single Point Mooring)
3505 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs
Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare
Abstract:
The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR. A CTR of 0.55 was used as a cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR can therefore have significant potential to be used in clinical workflows.
Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio
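As a rough illustration of the CTR computation described above (widest transverse heart extent divided by widest transverse thoracic extent, with 0.55 as the cardiomegaly cut-off), the following minimal sketch operates on binary masks such as those a segmentation network might output; the function names and toy arrays are assumptions for illustration, not the authors' code.

```python
import numpy as np

def widest_horizontal_extent(mask: np.ndarray) -> int:
    """Maximal left-to-right extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]  # columns containing any foreground pixel
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """CTR = widest transverse heart diameter / widest transverse thoracic diameter."""
    heart = widest_horizontal_extent(heart_mask)
    thorax = widest_horizontal_extent(thorax_mask)
    return heart / thorax if thorax else float("nan")

if __name__ == "__main__":
    # Toy 10x10 masks standing in for segmentation output on a PA chest X-ray.
    heart = np.zeros((10, 10), dtype=bool);  heart[4:7, 3:9] = True    # 6 px wide
    thorax = np.zeros((10, 10), dtype=bool); thorax[1:9, 0:10] = True  # 10 px wide
    ctr = cardiothoracic_ratio(heart, thorax)
    print(f"CTR = {ctr:.2f} -> {'cardiomegaly' if ctr > 0.55 else 'normal'}")
```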
Procedia PDF Downloads 98
3504 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology
Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões
Abstract:
This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning as well as multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in the field and in industrial environments. The research should be committed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although fruits are frequently occluded by leaves regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show a potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows a huge potential, but it implies much greater costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor-based on-tree fruit detection, the major challenge is to overcome the problem of fruit occlusion by leaves and branches. Hence, nontraditional sensors capable of providing some type of differentiation should be investigated.
Keywords: fruit thinning, horticultural field, portable devices, sensor technologies
Procedia PDF Downloads 139
3503 Return on Investment of a VFD Drive for Centrifugal Pump
Authors: Benhaddadi M., Déry D.
Abstract:
Electric motors are the single biggest consumer of electricity, and their consumption is expected to more than double by 2050. Meanwhile, existing technologies offer the potential to reduce motor energy demand by up to 30 %, whereas the know-how to realise energy savings is not extensively applied. That is why the authors first conducted a detailed analysis of the regulation of the electric motor market in North America. To illustrate the colossal energy savings potential permitted by the VFD, the authors built an experimental setup based on a centrifugal pump, simultaneously equipped with a regulating throttle valve and a variable frequency drive (VFD). The obtained experimental results for a 1.5 HP motor pump are extended to other motor powers, as centrifugal pumps of different power may have similar operational characteristics if they are used in a similar kind of process, permitting simulations for 5 HP and 100 HP motors. According to the obtained results, VFDs tend to be most cost-effective when fitted to larger motor pumps, with a higher duty cycle of the motor and a greater share of time operating at lower than full load. The energy saving permitted by VFD use is huge, and the payback period for the drive investment is short. Nonetheless, it is important to highlight that there is no general rule of thumb that can be used to obtain the impact of the relative time operating at lower than full load. Indeed, in terms of energy-saving differences, 50 % flow regulation is tremendously better than 75 % regulation, but only slightly better than 25 % regulation. Two distinct reasons can explain these somewhat unanticipated results: the characteristics of the process and the drop in efficiency when the motor is operating at low speed.
Keywords: motor, drive, energy efficiency, centrifugal pump
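A minimal sketch of the idealized reasoning behind such savings, based on the pump affinity laws (shaft power scaling roughly with the cube of speed/flow). The rated power, tariff, operating hours and assumed throttle-power factor are illustrative assumptions, not the authors' measurements; as the abstract notes, real savings at low speed deviate from this cube-law picture because of the drop in motor efficiency.

```python
def vfd_power_kw(rated_kw: float, flow_fraction: float) -> float:
    """Affinity-law estimate: shaft power scales roughly with the cube of speed/flow."""
    return rated_kw * flow_fraction ** 3

def annual_saving(rated_kw, flow_fraction, hours, tariff, throttle_factor=0.95):
    """Compare VFD operation with throttling at the same reduced flow.
    throttle_factor is an assumed fraction of rated power drawn by a throttled pump."""
    p_throttle = rated_kw * throttle_factor
    p_vfd = vfd_power_kw(rated_kw, flow_fraction)
    return (p_throttle - p_vfd) * hours * tariff

if __name__ == "__main__":
    rated = 3.7  # kW, roughly a 5 HP pump motor (assumed)
    for q in (0.25, 0.50, 0.75):
        saving = annual_saving(rated, q, hours=4000, tariff=0.10)
        print(f"flow {q:.2f}: VFD power {vfd_power_kw(rated, q):.2f} kW, "
              f"illustrative saving {saving:.0f} USD/yr")
```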
Procedia PDF Downloads 73
3502 Efficacy of Social-emotional Learning Programs Amongst First-generation Immigrant Children in Canada and The United States- A Scoping Review
Authors: Maria Gabrielle "Abby" Dalmacio
Abstract:
Social-emotional learning is a concept that is garnering more importance when considering the development of young children. The aim of this scoping literature review is to explore the implementation of social-emotional learning programs conducted with first-generation immigrant young children ages 3-12 years in North America. This review of literature focuses on social-emotional learning programs taking place in early childhood education centres and elementary school settings that include the first-generation immigrant children population to determine if and how their understanding of social-emotional learning skills may be impacted by the curriculum being taught through North American educational pedagogy. Research on early childhood education and social-emotional learning reveals the lack of inter-cultural adaptability in social emotional learning programs and the potential for immigrant children as being assessed as developmentally delayed due to programs being conducted through standardized North American curricula. The results of this review point to a need for more research to be conducted with first-generation immigrant children to help reform social-emotional learning programs to be conducive for each child’s individual development. There remains to be a gap of knowledge in the current literature on social-emotional learning programs and how educators can effectively incorporate the intercultural perspectives of first-generation immigrant children in early childhood education.Keywords: early childhood education, social-emotional learning, first-generation immigrant children, north america, inter-cultural perspectives, cultural diversity, early educational frameworks
Procedia PDF Downloads 101
3501 Developing a Model for the Relation between Heritage and Place Identity
Authors: A. Arjomand Kermani, N. Charbgoo, M. Alalhesabi
Abstract:
In the situation of great acceleration of changes and the need for new developments in the cities on one hand and conservation and regeneration approaches on the other hand, place identity and its relation with heritage context have taken on new importance. This relation is generally mutual and complex one. The significant point in this relation is that the process of identifying something as heritage rather than just historical phenomena, brings that which may be inherited into the realm of identity. In planning and urban design as well as environmental psychology and phenomenology domain, place identity and its attributes and components were studied and discussed. However, the relation between physical environment (especially heritage) and identity has been neglected in the planning literature. This article aims to review the knowledge on this field and develop a model on the influence and relation of these two major concepts (heritage and identity). To build this conceptual model, we draw on available literature in environmental psychology as well as planning on place identity and heritage environment using a descriptive-analytical methodology to understand how they can inform the planning strategies and governance policies. A cross-disciplinary analysis is essential to understand the nature of place identity and heritage context and develop a more holistic model of their relationship in order to be employed in planning process and decision making. Moreover, this broader and more holistic perspective would enable both social scientists and planners to learn from one another’s expertise for a fuller understanding of community dynamics. The result indicates that a combination of these perspectives can provide a richer understanding—not only of how planning impacts our experience of place, but also how place identity can impact community planning and development.Keywords: heritage, inter-disciplinary study, place identity, planning
Procedia PDF Downloads 424
3500 Post Covid-19 Landscape of Global Pharmaceutical Industry
Authors: Abu Zafor Sadek
Abstract:
Pharmaceuticals were one of the least impacted business sectors during the corona pandemic, as they are the center point of the Covid-19 fight. Emergency use authorization, unproven indications of some commonly used drugs, self-medication, the research and production capacity of individual countries, the capacity of many countries to produce vaccines, uncertainty related to Active Pharmaceutical Ingredients (APIs), the information gap among manufacturers, practitioners and users, export restrictions, the duration of lock-downs, lack of harmony in transportation, disruption of the regulatory approval process, suddenly increased demand for hospital items and protective equipment, panic buying, difficulties in in-person product promotion, e-prescription, geo-politics and associated issues added a new dimension to this industry. Although the industry maintained reasonable growth throughout the Covid-19 days, it has been characterized by both long- and short-term effects. Short-term effects have already been visible in many countries, especially those that are import-dependent and have limited research capacity. On the other hand, it will take some more time to see the long-term effects. Nevertheless, supply chain disruption, changes in strategic planning, new communication models, squeezing of job opportunities and rapid digitalization are the major short-term effects, whereas long-term effects include a shift towards self-sufficiency, changes in the growth pattern of certain products, special attention towards clinical studies, automation in operations, an increased arena of ethical issues, etc. Therefore, this qualitative and exploratory study identifies the post-Covid-19 landscape of the global pharmaceutical industry.
Keywords: covid-19, pharmaceutical, business, landscape
Procedia PDF Downloads 92
3499 Impacts of Nomophobia on Daily Performance: Validity, Reliability and Prevalence Estimates among Undergraduate Dental Students in Bhubaneswar, India
Authors: Ramesh Nagarajappa, Upasana Mohapatra
Abstract:
Considered a modern phobia, nomophobia (NO MObile PHOne PhoBIA) is a term that describes the irrational fear or anxiety of being unable to access one's own mobile phone. Objectives: To develop and validate the nomophobia questionnaire, administering it to a sample of adolescents representing undergraduate dental students; to assess the prevalence of nomophobia, determine the usage pattern of mobile phones, and evaluate the impact of lack of access to mobile phones among undergraduate dental students. Methodology: A cross-sectional study was conducted on 302 undergraduate students at Bhubaneswar through a self-administered questionnaire via Google Forms consisting of 19 items evaluating the pattern of and anxiety related to usage of mobile phones. Responses were recorded on a 5-point Likert scale. Kruskal-Wallis, Mann-Whitney U, and Chi-square tests were used for statistical analysis. Results: Test-retest reliability showed a kappa of k=0.86, and the internal consistency (Cronbach's alpha) was α=0.82. The prevalence of nomophobia (score ≥ 58) was 32.1%, and the proportion of students at risk of being nomophobic (score 39-57) was 61.9%. It was highest in males (32.6%) and amongst the interns (41.9%) and lowest (25.5%) amongst the second-year students. Participants felt nervous/insecure if their phones were away from them because of the fear that somebody might have accessed their data (3.07±1.93) and/or tried to contact them (3.09±1.13), which were not statistically significant (p>0.05). Conclusions: The effect of mobile phones on dental students and the fear of not having their phones with them are increasing considerably and need to be controlled; otherwise, they would negatively hamper the students' academic performance and their place in society.
Keywords: addiction, dental students, mobile phone, nomophobia
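A small sketch of how the scoring cut-offs reported above can be operationalized (total of 19 Likert items scored 1-5; total ≥ 58 nomophobic, 39-57 at risk). The function name and example responses are assumptions, not the study's scoring script.

```python
def nomophobia_category(responses):
    """Classify one respondent from 19 Likert items (1-5) using the abstract's cut-offs."""
    if len(responses) != 19 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 19 responses scored 1-5")
    total = sum(responses)
    if total >= 58:
        return total, "nomophobic"
    if total >= 39:
        return total, "at risk"
    return total, "not nomophobic"

print(nomophobia_category([3] * 19))   # (57, 'at risk')
print(nomophobia_category([4] * 19))   # (76, 'nomophobic')
```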
Procedia PDF Downloads 150
3498 Cost-Effectiveness of Forest Restoration in Nepal: A Case from Leasehold Forestry Initiatives
Authors: Sony Baral, Bijendra Basnyat, Kalyan Gauli
Abstract:
Forests were depleted throughout the world in the 1990s, and since then, various efforts have been undertaken to restore them. The Government of Nepal promoted various community-based forest management modalities, of which leasehold forestry was the one introduced in the 1990s, aiming to restore degraded forest land. However, few attempts have been made to systematically evaluate its cost effectiveness. Hence, the study assesses the cost effectiveness of the leasehold forestry intervention in a mid-hill district of Nepal following a cost-benefit analysis approach. The study followed a quasi-experimental design and collected cost and benefit information from 320 leasehold forestry groups (with intervention) and 154 comparison groups (without intervention) through a household survey and forest inventory, and then validated the data in a stakeholders' consultative workshop. The study found that both the benefits and the costs of the intervention exceeded those of the without-intervention situation. The members of leasehold forestry groups were generating multiple benefits from the forests, such as firewood, grasses, fodder, and fruits, whereas those from comparison groups were mostly getting a single benefit. Likewise, the extent of soil carbon is high in leasehold forests. The average expense per unit area is high in intervention sites due to high government investment in capacity building. Nevertheless, positive net present value and internal rate of return were observed for both situations. However, the net present value of the intervention, i.e., leasehold forestry, is almost double that of the comparison sites, revealing that communities are getting higher benefits from restoration. The study concludes that leasehold forestry is a highly cost-effective intervention that contributes towards forest restoration and brings multiple benefits to the rural poor.
Keywords: cost effectiveness, economic efficiency, intervention, restoration, leasehold forestry, nepal
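A minimal sketch of the net present value and internal rate of return calculations underlying such a cost-benefit comparison; the cash-flow figures are purely illustrative and are not the study's data.

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows, with cash_flows[0] at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change in NPV)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative per-hectare stream: restoration cost up front, growing benefits later.
flows = [-1000, 150, 250, 350, 450, 550]
print(round(npv(0.10, flows), 1), round(irr(flows), 3))
```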
Procedia PDF Downloads 100
3497 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy
Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan
Abstract:
For profitable businesses, queues are double-edged swords, and hence the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1, M2/2 queueing system operated by two heterogeneous servers with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an ‘m’ policy', i.e. the FCFS-m policy. Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server ‘j’ are independent and identically distributed (i.i.d.) random variables, and each of them is exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the queue discipline ‘FCFS-m policy’ on these service rates μj (j=1, 2) is that either (m+1)µ2 > µ1 > mµ2 or (m+1)µ1 > µ2 > mµ1 must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be brought into service if and only if the queue length exceeds the value ‘m’ as a threshold. Steady-state results on queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts its service rate to μ1 while the system size is strictly less than T=(m+2) (with μ2=0), and to μ1+μ2 with μ2>0 when the system size is greater than or equal to T, corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues. To show that this investigation has a viable application, the results of M/M1+M2/1 queues have been used to model the processing of waiting messages at a single computer node and to measure the power consumption of the node.
Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue
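A small sketch that checks the stated feasibility condition for the ‘m’ policy, (m+1)µ2 > µ1 > mµ2 or (m+1)µ1 > µ2 > mµ1, for given service rates; the function name and example rates are assumptions for illustration only.

```python
def feasible_m_values(mu1, mu2, m_max=20):
    """Values of m for which the FCFS-m policy condition from the abstract holds:
    either (m+1)*mu2 > mu1 > m*mu2  or  (m+1)*mu1 > mu2 > m*mu1."""
    ok = []
    for m in range(1, m_max + 1):
        if (m + 1) * mu2 > mu1 > m * mu2 or (m + 1) * mu1 > mu2 > m * mu1:
            ok.append(m)
    return ok

print(feasible_m_values(mu1=5.0, mu2=2.0))   # mu1/mu2 = 2.5 -> m = 2
print(feasible_m_values(mu1=1.0, mu2=3.4))   # mu2/mu1 = 3.4 -> m = 3
```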
Procedia PDF Downloads 362
3496 Fabrication of Nanoengineered Radiation Shielding Multifunctional Polymeric Sandwich Composites
Authors: Nasim Abuali Galehdari, Venkat Mani, Ajit D. Kelkar
Abstract:
Space radiation has become one of the major factors in successful long-duration space exploration. Exposure to space radiation can not only affect the health of astronauts but also disrupt or damage materials and electronics. Hazards to materials include degradation of properties, such as modulus, strength, or glass transition temperature. Electronics may experience single event effects, gate rupture, burnout of field effect transistors and noise. Presently, aluminum is the major component in most space structures due to its light weight and good structural properties. However, aluminum is ineffective at blocking space radiation. Therefore, most of the past research involved studying polymers, which contain large amounts of hydrogen. However, these are not structural materials and would require large amounts of material to achieve the structural properties needed. One way to alleviate this problem is polymeric composite materials, which have good structural properties and use polymers that contain large amounts of hydrogen. This paper presents the steps involved in the fabrication of multi-functional hybrid sandwich panels that can provide beneficial radiation shielding as well as structural strength. Multifunctional hybrid sandwich panels were manufactured using the vacuum-assisted resin transfer molding process and were subjected to radiation treatment. The study indicates that various nanoparticles, including boron nanopowder, boron carbide, and gadolinium nanoparticles, can be successfully used to block space radiation without sacrificing structural integrity.
Keywords: multi-functional, polymer composites, radiation shielding, sandwich composites
Procedia PDF Downloads 286
3495 Hybrid Capture Resolves the Phylogeny of the Pantropically Distributed Zanthoxylum (Rutaceae) and Reveals an Old World Origin
Authors: Lee Ping Ang, Salvatore Tomasello, Jun Wen, Marc S. Appelhans
Abstract:
With about 225 species, Zanthoxylum L. is the second most species rich genus in Rutaceae. It is the only genus with a pantropical distribution. Economically, it is used in several Asian countries as traditional medicine and spice. In the past Zanthoxylum was divided into two genera, the temperate Zanthoxylum sensu strictu (s.s.) and the (sub)tropical Fagara, due to the large differences in flower morphology: heterochlamydeous in Fagara and homochlamydeous in Zanthoxylum s.s.. This genus is much under studied and previous phylogenetic studies using Sanger sequencing did not resolve the relationships sufficiently. In this study, we use Hybrid Capture with a specially designed bait set for Zanthoxylum to sequence 347 putatively single-copy genes. The taxon sampling has been largely improved as compared to previous studies and the preliminary results will be based on 371 specimens representing 133 species from all continents and major island groups. Our preliminary results reveal similar tree topology as the previous studies while providing more details to the backbone of the phylogeny. The phylogenetic tree consists of four main clades: A) African/Malagasy clade, B) Z. asiaticum clade - a clade consisting widespread species occurring in (sub)tropical Asia and Africa as well as Madagascar, C) Asian/Pacific clade and D) American clade, which also includes the temperate Asian species. The merging of Fagara and Zanthoxylum is supported by our results and the homochlamydeous flowers of Zanthoxylum s.s. are likely derived from heterochlamydeous flowers. Several of the morphologically defined sections within Zanthoxylum are not monophyletic. The study dissemination will (1) introduce the framework of this project; (2) present preliminary results and (3) the ongoing progress of the study.Keywords: Zanthoxylum, phylogenomic, hybrid capture, pantropical
Procedia PDF Downloads 72
3494 High-Frequency Modulation of Light-Emitting Diodes for New Ultraviolet Communications
Authors: Meng-Chyi Wu, Bonn Lin, Jyun-Hao Liao, Chein-Ju Chen, Yu-Cheng Jhuang, Mau-Phon Houng, Fang-Hsing Wang, Min-Chu Liu, Cheng-Fu Yang, Cheng-Shong Hong
Abstract:
Since the use of wireless communications has become critical nowadays, the available RF spectrum has become limited. Ultraviolet (UV) communication systems can alleviate the spectrum constraint, making UV communication a potential alternative for future communication demands. Also, UV links can provide faster communication rates and can be used in combination with existing RF communication links, providing new communications diversity with higher user capacity. The UV region of the electromagnetic spectrum has been of interest for detector, imaging and communication technologies because the stratospheric ozone layer effectively prevents some solar UV radiation from reaching the earth's surface. The wavebands where most of the UV radiation is absorbed by the ozone are commonly known as the solar-blind region. By operating in the UV-C band (200-280 nm), the communication system can minimize transmission power consumption since it will have less radiation noise. UV communication uses the UV ray as the medium. The electric signal is carried on this band after being modulated and is then transmitted through the atmosphere as the channel. Though the background noise of UV-C communication is very low owing to the solar-blind feature, it leads to a large propagation loss. The 370 nm UV provides a much lower propagation loss than UV-C does, and the recent device technology for UV sources in this band is more mature. The fabricated 370 nm AlGaN light-emitting diodes (LEDs) with an aperture size of 45 μm exhibit a modulation bandwidth of 165 MHz at 30 mA and a high power of 7 W/cm2 at 230 A/cm2. In order to solve the problem of low power in a single UV LED, a UV LED array is presented.
Keywords: ultraviolet (UV) communication, light-emitting diodes (LEDs), modulation bandwidth, LED array, 370 nm
Procedia PDF Downloads 414
3493 Social Media Marketing and Blog Usage in Business Schools: An Exploratory Study
Authors: Grzegorz Mazurek, Michal Kucia
Abstract:
The following study of a preliminary character, presents a first step of multifaceted study on the usage of social media in HEIs. It examines a significance, potential, and managerial implications of social media marketing and blogs usage in HEIs – namely in the sphere of business schools. Social media – particularly: blogging and virtual platforms such as Facebook, Twitter or Instagram have been covered at length in publications of both theoretical and practical nature as of late. Still, the amount of information related to the framework of application of social media in HEIs is rather limited. A pre-designed observation matrix has been used to collect primary data found at websites of different HEIs and to include blog observations. Additionally, a pilot study based on on-line questionnaires with marketing officers of HEI schools has been conducted. The main aim of the study was to identify and elaborate on matters like the scope of social media usage (and blogs in particular) in practice, recognition of the functions fulfilled by social media and blogs, or the anticipated potential of social media for HEIs. The study reveals that the majority of business schools highly ranked in Financial Times rankings use social media and interactive functionalities of their web sites, however, mostly for promotional reasons, and they are targeted at new students. The usage of blogs, though, is not so common and in most cases, blogs are independent platforms, not managed but supported by organizations. Managers and specialists point to lack of resources, insufficient users’ engagement and lack of strategic approach to social media as the main reasons of not advancing in the usage of blogs and social media platforms.Keywords: blogs, social media marketing, higher education institutions, business schools, value co-creation
Procedia PDF Downloads 265
3492 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially artificial intelligence (AI) technology. Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. AI in medical image processing can help doctors diagnose diseases faster, with minimal mistakes, and with less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria disease diagnosis through automated analysis of blood smear images. The scheme is based on the convolutional neural network (CNN) method. We have developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving an accuracy of 99%, and 97% on new real images. This validates that the model performs well with new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.
Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
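A minimal Keras sketch of a CNN that classifies single segmented red-cell images as infected or uninfected, in the spirit of the scheme described above; the layer sizes, image size and directory layout are assumptions, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cell_classifier(input_shape=(64, 64, 3)):
    """Small CNN labelling a single segmented red-cell image as infected / uninfected."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),   # 1 = parasitized, 0 = uninfected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_cell_classifier()
model.summary()
# Typical training call, assuming directories of segmented cell images:
# train_ds = tf.keras.utils.image_dataset_from_directory("cells/train", image_size=(64, 64))
# model.fit(train_ds, epochs=10)
```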
Procedia PDF Downloads 20
3491 Students' Perspectives on Quality of Course Evaluation Practices and Feedbacks in Eritrea
Authors: Ermias Melake Tesfay
Abstract:
The importance of evaluation practice and feedback to student advancement and retention has gained importance in the literature over the past ten years. So many issues and cases have been raised about the quality and types of evaluation carried out in higher education and the quality and quantity of student feedback. The aim of this study was to explore the students’ perspectives on the quality of course evaluation practice and feedback in College of Education and College of Science. The study used both quantitative and qualitative methods to collect data. Data were collected from third-year and fourth-year students of 13 departments in the College of Education and College of Science in Eritrea. A modified Service Performance (SERVPERF) questionnaire and focus group discussions were used to collect the data. The sample population comprised of 135 third-year and fourth-year students’ from both Colleges. A questionnaire using a 5 point Likert-scale was administered to all respondents whilst two focus group discussions were conducted. Findings from survey data and focus group discussions showed that the majority of students hold a positive perception of the quality of course evaluation practice but had a negative perception of methods of awarding grades and administrators’ role in listening to the students complain about the course. Furthermore, the analysis from the questionnaire showed that there is no statistically significant difference between third-year and fourth-year students, College of Education and College of Science and male and female students on the quality of course evaluation practice and feedback. The study recommends that colleges improve the quality of fairness and feedback during course assessment.Keywords: evaluation, feedback, quality, students' perception
Procedia PDF Downloads 158
3490 Performance and Emission Characteristics of Spark Ignition Engine Running with Gasoline, Blends of Ethanol, and Blends of Ethiopian Arekie
Authors: Mengistu Gizaw Gawo, Bisrat Yoseph Gebrehiwot
Abstract:
Petroleum fuels have become a threat to the world because of their toxic emissions. Besides, it is unknown how long they will last. The only known fact is that they are depleting rapidly and will not last long. So the world’s concern about finding environmentally friendly alternative fuels has increased recently. Hence alcohol fuels are found to be the most convenient alternatives to use in internal combustion engines. This research intends to introduce Ethiopian locally produced alcohol as an alternative in the blended form with gasoline to use in spark ignition engines. The traditionally distilled Arekie was purchased from a local producer and purified using fractional distillation. Then five Arekie-gasoline blends were prepared with the proportion of 5,10,15,20 and 25%v/v (A5, A10, A15, A20, and A25, respectively). Also, absolute ethanol was purchased from a local supplier, and ethanol-gasoline blends were prepared with a similar proportion as Arekie-gasoline blends (E5, E10, E15, E20, and E25). Then an experiment was conducted on a single-cylinder, 4-stroke, spark-ignition engine running at a constant speed of 2500 rpm and variable loads to investigate the performance and emission characteristics. Results showed that the performance and emission parameters are significantly improved as the ratio of Arekie and ethanol in gasoline increases at all loads. Among all tested fuels, E20 exhibited better performance, and E25 exhibited better emission. A20 provided a slightly lower performance than E20 but was much improved compared to pure gasoline. A25 provided comparable emissions with E25 and was much better than pure gasoline. Generally, adding up to 20%v/v Ethiopian Arekie in gasoline could make a better, renewable alternative to spark ignition engines.Keywords: alcohol fuels, alternative fuels, pollutant emissions, spark-ignition engine, Arekie-gasoline blends
Procedia PDF Downloads 119
3489 Removal of Pharmaceuticals from Aqueous Solutions Using Hybrid Ceramic Membranes
Authors: Jenny Radeva, Anke-Gundula Roth, Christian Goebbert, Robert Niestroj-Pahl, Lars Daehne, Axel Wolfram, Juergen Wiese
Abstract:
The technological advantages of ceramic filtration elements were combined with polyelectrolyte films in the development of a hybrid membrane for the elimination of pharmaceuticals from aqueous solutions. Previously extruded alumina ceramic membranes were coated with nanosized polyelectrolyte films using Layer-by-Layer technology. The polyelectrolyte chains form a network with nano-pores on the ceramic surface and promote the retention of small molecules like pharmaceuticals and microplastics, which cannot be eliminated using standard ultrafiltration methods. Additionally, the polyelectrolyte coating contributes its adjustable (application-dependent) zeta potential to the repulsion of contaminant molecules with opposite charges. Properties like permeability, bubble point, pore size distribution and zeta potential of the ceramic and hybrid membranes were characterized using various laboratory and pilot tests and compared with each other. The filtration behavior investigation played the most significant role in membrane characterization; retention of widely used pharmaceuticals like diclofenac, ibuprofen and sulfamethoxazole was examined in a series of filtration tests. The presented study offers a new perspective on the removal of nanosized molecules from aqueous solutions and shows the importance of applying combined techniques for the elimination of pharmaceutical contaminants from drinking water.
Keywords: water treatment, hybrid membranes, layer-by-layer coating, filtration, polyelectrolytes
Procedia PDF Downloads 167
3488 Text Analysis to Support Structuring and Modelling a Public Policy Problem-Outline of an Algorithm to Extract Inferences from Textual Data
Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis
Abstract:
Policy making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics, therefore, is a key to finding holistic solutions. Analysis of text based information on the policy problem, using Natural Language Processing (NLP) and Text analysis techniques, can support modelling of public policy problem situations in a more objective way based on domain experts knowledge and scientific evidence. The objective behind this study is to support modelling of public policy problem situations, using text analysis of verbal descriptions of the problem. We propose a formal methodology for analysis of qualitative data from multiple information sources on a policy problem to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study describes the outline of an algorithm used to automate the initial step of a larger methodological approach, which is so far done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support a better problem structuring. A small prototype for this step is also presented.Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction
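A very small sketch of the kind of pattern-based inference extraction the outlined algorithm automates (cause-effect pairs pulled from sentences to seed a causal diagram); the patterns and example sentences are assumptions, and a full implementation would rely on proper NLP parsing rather than regular expressions.

```python
import re

CAUSAL_PATTERNS = [
    r"(?P<cause>[\w\s]+?)\s+(?:leads to|results in|causes|increases|reduces)\s+(?P<effect>[\w\s]+)",
]

def extract_edges(sentences):
    """Rough pattern-based extraction of cause -> effect pairs for a causal diagram."""
    edges = []
    for s in sentences:
        for pat in CAUSAL_PATTERNS:
            m = re.search(pat, s, flags=re.IGNORECASE)
            if m:
                edges.append((m.group("cause").strip(), m.group("effect").strip()))
    return edges

text = [
    "Unemployment leads to lower household income.",
    "Lower household income reduces access to healthcare.",
]
for cause, effect in extract_edges(text):
    print(f"{cause} -> {effect}")
```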
Procedia PDF Downloads 589
3487 Microscopic and Mesoscopic Deformation Behaviors of Mg-2Gd Alloy with or without Li Addition
Authors: Jing Li, Li Jin, Fulin Wang, Jie Dong, Wenjiang Ding
Abstract:
The Mg-Li dual-phase alloy exhibits a better combination of yield strength and elongation than the Mg single-phase alloy. To explore its deformation behavior, the deformation mechanisms of the Mg-2Gd alloy with or without Li addition, i.e., the Mg-6Li-2Gd and Mg-2Gd alloys, have been studied at both the microscale and the mesoscale. EBSD-assisted slip trace, twin trace, and texture evolution analyses show that the α-Mg phase of the Mg-6Li-2Gd alloy exhibits different microscopic deformation mechanisms from the Mg-2Gd alloy, i.e., mainly prismatic slip in the former, but basal slip, prismatic slip and extension twinning in the latter. Further Schmid factor analysis attributes these different intra-phase deformation mechanisms to the higher critical resolved shear stress (CRSS) value of extension twinning and the lower ratio of CRSSprismatic/CRSSbasal in the α-Mg phase of the Mg-6Li-2Gd alloy. Additionally, Li addition can induce a dual-phase microstructure in the Mg-6Li-2Gd alloy, leading to the formation of hetero-deformation induced (HDI) stress at the mesoscale. This is evidenced by the hysteresis loops appearing during loading-unloading-reloading (LUR) tensile tests and the activation of multiple slip systems in the α-Mg phase neighboring the β-Li phase. The higher yield strength of the Mg-6Li-2Gd alloy is due to the harder α-Mg phase arising from solid solution hardening by the Li addition, as well as the strengthening of the soft β-Li phase by the HDI stress during the yield stage. Since the strain hardening rate of the Mg-6Li-2Gd alloy is lower than that of the Mg-2Gd alloy after ~2% strain, partly due to the weak contribution of the HDI stress, the Mg-6Li-2Gd alloy shows no obvious increase in uniform elongation over the Mg-2Gd alloy. However, since the β-Li phase is effective in blunting crack tips, the Mg-6Li-2Gd alloy shows additional non-uniform elongation, which leads to a higher total elongation than the Mg-2Gd alloy.
Keywords: Mg-Li-Gd dual-phase alloy, phase boundary, HDI stress, dislocation slip activity, mechanical properties
Procedia PDF Downloads 204
3486 Diagnostic Efficacy and Usefulness of Digital Breast Tomosynthesis (DBT) in Evaluation of Breast Microcalcifications as a Pre-Procedural Study for Stereotactic Biopsy
Authors: Okhee Woo, Hye Seon Shin
Abstract:
Purpose: To investigate the diagnostic power of digital breast tomosynthesis (DBT) in evaluation of breast microcalcifications and usefulness as a pre-procedural study for stereotactic biopsy in comparison with full-field digital mammogram (FFDM) and FFDM plus magnification image (FFDM+MAG). Methods and Materials: An IRB approved retrospective observer performance study on DBT, FFDM, and FFDM+MAG was done. Image quality was rated in 5-point scoring system for lesion clarity (1, very indistinct; 2, indistinct; 3, fair; 4, clear; 5, very clear) and compared by Wilcoxon test. Diagnostic power was compared by diagnostic values and AUC with 95% confidence interval. Additionally, procedural report of biopsy was analysed for patient positioning and adequacy of instruments. Results: DBT showed higher lesion clarity (median 5, interquartile range 4-5) than FFDM (3, 2-4, p-value < 0.0001), and no statistically significant difference to FFDM+MAG (4, 4-5, p-value=0.3345). Diagnostic sensitivity and specificity of DBT were 86.4% and 92.5%; FFDM 70.4% and 66.7%; FFDM+MAG 93.8% and 89.6%. The AUCs of DBT (0.88) and FFDM+MAG (0.89) were larger than FFDM (0.59, p-values < 0.0001) but there was no statistically significant difference between DBT and FFDM+MAG (p-value=0.878). In 2 cases with DBT, petit needle could be appropriately prepared; and other 3 without DBT, patient repositioning was needed. Conclusion: DBT showed better image quality and diagnostic values than FFDM and equivalent to FFDM+MAG in the evaluation of breast microcalcifications. Evaluation with DBT as a pre-procedural study for breast stereotactic biopsy can lead to more accurate localization and successful biopsy and also waive the need for additional magnification images.Keywords: DBT, breast cancer, stereotactic biopsy, mammography
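For reference, the reported sensitivity and specificity follow the usual confusion-table definitions; the sketch below uses illustrative counts chosen only to reproduce the DBT rates quoted above, not the study's actual 2x2 table.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, acc

# Illustrative counts only (the abstract reports rates, not raw counts):
# 70/(70+11) = 0.864 sensitivity, 62/(62+5) = 0.925 specificity.
print(diagnostic_metrics(tp=70, fp=5, tn=62, fn=11))
```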
Procedia PDF Downloads 304
3485 Alumina Supported Copper-manganese Catalysts for Combustion of Exhaust Gases: Catalysts Characterization
Authors: Krasimir I. Ivanov, Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Georgi V. Avdeev, Tatyana T. Tabakova
Abstract:
In recent research, copper and manganese systems were found to be the most active base catalysts for the oxidation of CO and organic compounds. The mixed copper manganese oxide has been widely studied in oxidation reactions because of its higher activity at low temperatures in comparison with single oxide catalysts. The results showed that the formation of spinel CuxMn3−xO4 in the oxidized catalyst is responsible for the activity even at room temperature. That is why most of the investigations are focused on the hopcalite catalyst (CuMn2O4) as the best copper-manganese catalyst. It is now known that this is true only for CO oxidation, but not for a mixture of CO and VOCs. The purpose of this study is to investigate alumina-supported copper-manganese catalysts with different Cu/Mn molar ratios in terms of the oxidation of CO, methanol and dimethyl ether. The catalysts were prepared by impregnation of γ-Al2O3 with copper and manganese nitrates, and the catalytic activity measurements were carried out in continuous flow equipment with a four-channel isothermal stainless steel reactor. Gas mixtures at the input and output of the reactor were analyzed with a gas chromatograph equipped with FID and TCD detectors. The texture characteristics were determined by low-temperature (-196 °C) nitrogen adsorption in a Quantachrome Instruments NOVA 1200e (USA) specific surface area and pore analyzer. Thermal, XRD and TPR analyses were performed. It was established that the active component of the mixed Cu-Mn/γ-alumina catalysts strongly depends on the Cu/Mn molar ratio. Highly active alumina-supported Cu-Mn catalysts for CO, methanol and DME oxidation were synthesized. While hopcalite is the best catalyst for CO oxidation, the best compromise for simultaneous oxidation of all components is the catalyst with a Cu/Mn molar ratio of 1:5.
Keywords: supported copper-manganese catalysts, CO, VOCs oxidation, combustion of exhaust gases
Procedia PDF Downloads 286
3484 A Critical Appraisal of the Philosophy of University and Its Debates: The Creation of Disempowered Youth in the Ethiopian Education Sector
Authors: Sisaye Tamrat Ayalew
Abstract:
This paper focuses on the educational philosophy of universities in Ethiopia and the debates surrounding it. It highlights the contradictory views on the role of universities, with some perceiving them as practical problem-solving institutions and others emphasizing the production and dissemination of knowledge. The aim of this study is to critically explore the debates around the educational philosophy of universities in Ethiopia. It also seeks to examine how the understanding of this philosophy contributes to the marginalization of youth in the country. This research adopts a phenomenological qualitative research design. It aims to understand the impact of socio-economic and political factors on university education and how youth from disadvantaged backgrounds experience marginalization in the job market. The study reveals that the understanding of educational philosophy varies across different contexts and over time. In the Ethiopian context, the philosophy of universities lacks a disinterested pursuit of knowledge and instrumentalist epistemology. Instead, it oversimplifies the philosophy to the point of devaluing knowledge and treating certificates as commodities, even in the absence of formal training. In conclusion, this research highlights the need for a critical appraisal of the educational philosophy of universities in Ethiopia. It emphasizes the negative impact of an oversimplified and commodified approach to knowledge on the empowerment of youth. By bringing attention to these issues, this study contributes to the broader understanding of the role of universities in society and calls for reforms in the Ethiopian education sector to promote empowerment rather than disempowerment.Keywords: philosophy of universities, marginalized youth, diploma mill, instrumentalist epistemology, disinterested pursuit
Procedia PDF Downloads 76
3483 Cows Milk Quality on Different Sized Dairy Farms
Authors: Ramutė Miseikienė, Saulius Tusas
Abstract:
Somatic cell count and bacteria count are the main indicators of cow milk quality. The aim of this study was to analyze and compare parameters of milk quality in different-sized cow herds. Milk quality on ten dairy cow farms was analyzed over a one-year period. Dairy farms were divided into five groups according to the number of cows on the farm (under 50 cows, 51–100 cows, 101–200 cows, 201–400 cows and more than 400 cows). The average somatic cell count, bacteria count in milk, and milk freezing temperature were analyzed. Also, these parameters of milk quality were compared between the outdoor (from May to September) and indoor (from October to April) periods. The highest SCC was established in the smallest farms, i.e., farms with under 50 cows and 51-100 cows (264±9.19 and 300±10.24 thousand/ml, respectively). A reliable link between the somatic cell count in milk and farm size (the smallest and largest dairy farms versus farms with 101-200 and 201-400 cows) has not been established (P > 0.05). Bacteria count showed a slight tendency to decrease as the number of cows on the farms increased. The highest bacteria count was determined on farms with 51-100 cows, and the lowest bacteria count was in milk from farms keeping 201-400 and more than 400 cows. With an increasing number of cows, the maximal milk freezing temperature decreases (a significant negative trend), i.e., the indicator is improving. It should be noted that on all farms the milk freezing point never exceeded the requirement (-0.515 °C). The highest difference between SCC in milk during the indoor and outdoor periods was established on farms with 201-400 cows (218.49 thousand/ml and 268.84 thousand/ml, respectively). The SCC was significantly higher (P < 0.05) during the outdoor period on large farms (201-400 and more cows). There was no significant difference in the bacteria count in milk between the outdoor and indoor periods (P > 0.05).
Keywords: bacteria, cow, farm size, somatic cell count
Procedia PDF Downloads 267
3482 A Cost-Benefit Analysis of Routinely Performed Transthoracic Echocardiography in the Setting of Acute Ischemic Stroke
Authors: John Rothrock
Abstract:
Background: The role of transthoracic echocardiography (TTE) in the diagnosis and management of patients with acute ischemic stroke remains controversial. While many stroke subspecialist reserve TTE for selected patients, others consider the procedure obligatory for most or all acute stroke patients. This study was undertaken to assess the cost vs. benefit of 'routine' TTE. Methods: We examined a consecutive series of patients who were admitted to a single institution in 2019 for acute ischemic stroke and underwent TTE. We sought to determine the frequency with which the results of TTE led to a new diagnosis of cardioembolism, redirected therapeutic cerebrovascular management, and at least potentially influenced the short or long-term clinical outcome. We recorded the direct cost associated with TTE. Results: There were 1076 patients in the study group, all of whom underwent TTE. TTE identified an unsuspected source of possible/probable cardioembolism in 62 patients (6%), confirmed an initially suspected source (primarily endocarditis) in an additional 13 (1%) and produced findings that stimulated subsequent testing diagnostic of possible/probable cardioembolism in 7 patients ( < 1%). TTE results potentially influenced the clinical outcome in a total of 48 patients (4%). With a total direct cost of $1.51 million, the mean cost per case wherein TTE results potentially influenced the clinical outcome in a positive manner was $31,375. Diagnostically and therapeutically, TTE was most beneficial in 67 patients under the age of 55 who presented with 'cryptogenic' stroke, identifying patent foramen ovale in 21 (31%); closure was performed in 19. Conclusions: The utility of TTE in the setting of acute ischemic stroke is modest, with its yield greatest in younger patients with cryptogenic stroke. Given the greater sensitivity of transesophageal echocardiography in detecting PFO and evaluating the aortic arch, TTE’s role in stroke diagnosis would appear to be limited.Keywords: cardioembolic, cost-benefit, stroke, TTE
Procedia PDF Downloads 126
3481 Buildings Founded on Thermal Insulation Layer Subjected to Earthquake Load
Authors: David Koren, Vojko Kilar
Abstract:
The modern energy-efficient houses are often founded on a thermal insulation (TI) layer placed under the building’s RC foundation slab. The purpose of the paper is to identify the potential problems of the buildings founded on TI layer from the seismic point of view. The two main goals of the study were to assess the seismic behavior of such buildings, and to search for the critical structural parameters affecting the response of the superstructure as well as of the extruded polystyrene (XPS) layer. As a test building a multi-storeyed RC frame structure with and without the XPS layer under the foundation slab has been investigated utilizing nonlinear dynamic (time-history) and static (pushover) analyses. The structural response has been investigated with reference to the following performance parameters: i) Building’s lateral roof displacements, ii) Edge compressive and shear strains of the XPS, iii) Horizontal accelerations of the superstructure, iv) Plastic hinge patterns of the superstructure, v) Part of the foundation in compression, and vi) Deformations of the underlying soil and vertical displacements of the foundation slab (i.e. identifying the potential uplift). The results have shown that in the case of higher and stiff structures lying on firm soil the use of XPS under the foundation slab might induce amplified structural peak responses compared to the building models without XPS under the foundation slab. The analysis has revealed that the superstructure as well as the XPS response is substantially affected by the stiffness of the foundation slab.Keywords: extruded polystyrene (XPS), foundation on thermal insulation, energy-efficient buildings, nonlinear seismic analysis, seismic response, soil–structure interaction
Procedia PDF Downloads 301
3480 The Reasons and the Practical Benefits Behind the Motivation of Businesses to Participate in the Dual Education System (DLS)
Authors: Ainur Bulasheva
Abstract:
During the last decade, the dual learning system (DLS) has been actively introduced in various industries in Kazakhstan, including the vocational, post-secondary, and higher education levels. It is a relatively new practice-oriented approach to training qualified personnel in Kazakhstan, officially introduced in 2012. Dual learning was adapted from the German vocational education and training system, combining practical training and part-time work in production with training in an educational institution. The policy of DLS has increasingly focused on decreasing youth unemployment and the shortage of mid-level professionals by providing incentives for employers to become involved in this system. By participating directly in the educational process, the enterprise strives to train its future personnel to meet fast-changing market demands. This study examines the effectiveness of DLS from the perspective of employers to understand the motivations of businesses to participate (invest) in this program. The human capital theory of Becker, which predicts that employers will invest in training their workers (in our case, dual students) when they expect the return on investment to be greater than the cost, acts as a starting point. Further extensions of this theory will be considered to understand the investment intentions of businesses. By comparing the perceptions of DLS employers with non-dual practices, this study determines the efficiency of the promoted training approach for enterprises in the Kazakhstan agri-food industry.
Keywords: vocational and technical education, dual education, human capital theory, agri-food industry
Procedia PDF Downloads 69
3479 Rule-Of-Mixtures: Predicting the Bending Modulus of Unidirectional Fiber Reinforced Dental Composites
Authors: Niloofar Bahramian, Mohammad Atai, Mohammad Reza Naimi-Jamal
Abstract:
The rule of mixtures is a simple analytical model used to predict various properties of composites before design. The aim of this study was to demonstrate the benefits and limitations of the Rule-of-Mixtures (ROM) for predicting the bending modulus of continuous, unidirectional fiber reinforced composites used in dental applications. The composites were fabricated from light-curing resin (with and without silica nanoparticles) and modified and non-modified fibers. Composite samples were divided into eight groups with ten specimens in each group. The bending modulus (flexural modulus) of the samples was determined from the slope of the initial linear region of the stress-strain curve on 2 mm × 2 mm × 25 mm specimens with different designs: fiber corona treatment time (0 s, 5 s, 7 s), fiber silane treatment (0 wt%, 2 wt%), fiber volume fraction (41%, 33%, 25%) and nanoparticle incorporation in the resin (0 wt%, 10 wt%, 15 wt%). To study the fiber-matrix interface after fracture, the single edge notch beam (SENB) method and scanning electron microscopy (SEM) were used. SEM was also used to show the nanoparticle dispersion in the resin. Experimental bending modulus results for composites made of both physically (corona) and chemically (silane) treated fibers were in reasonable agreement with linear ROM estimates, but untreated or non-optimally treated fibers and poor nanoparticle dispersion did not correlate as well with ROM results. This study shows that the ROM is useful for predicting the mechanical behavior of unidirectional dental composites, but the fiber-resin interface and the quality of nanoparticle dispersion play an important role in the accuracy of ROM predictions.
Keywords: bending modulus, fiber reinforced composite, fiber treatment, rule-of-mixtures
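A minimal sketch of the linear (Voigt) rule-of-mixtures estimate used as the benchmark above, E_c = Vf*Ef + (1 - Vf)*Em, evaluated at the three fiber volume fractions studied; the fiber and resin moduli are illustrative assumptions, not the study's measured values.

```python
def rule_of_mixtures_modulus(e_fiber, e_matrix, v_fiber):
    """Upper-bound (Voigt) estimate for a unidirectional composite loaded along the fibers:
    E_c = Vf * Ef + (1 - Vf) * Em."""
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

# Illustrative values only: glass fiber ~70 GPa, dental dimethacrylate resin ~3 GPa.
for vf in (0.25, 0.33, 0.41):
    print(f"Vf = {vf:.2f}  ->  E_c ~ {rule_of_mixtures_modulus(70, 3, vf):.1f} GPa")
```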
Procedia PDF Downloads 274
3478 Predicting the Next Offensive Play Types will be Implemented to Maximize the Defense’s Chances of Success in the National Football League
Authors: Chris Schoborg, Morgan C. Wang
Abstract:
In the realm of the National Football League (NFL), substantial dedication of time and effort is invested by both players and coaches in meticulously analyzing the game footage of their opponents. The primary aim is to anticipate the actions of the opposing team. Defensive players and coaches are especially focused on deciphering their adversaries' intentions to effectively counter their strategies. Acquiring insights into the specific play type and its intended direction on the field would confer a significant competitive advantage. This study establishes pre-snap information as the cornerstone for predicting both the play type (e.g., deep pass, short pass, or run) and its spatial trajectory (right, left, or center). The dataset for this research spans the regular NFL season data for all 32 teams from 2013 to 2022. This dataset is acquired using the nflreadr package, which conveniently extracts play-by-play data from NFL games and imports it into the R environment as structured datasets. In this study, we employ a recently developed machine learning algorithm, XGBoost. The final predictive model achieves an impressive lift of 2.61. This signifies that the presented model is 2.61 times more effective than random guessing—a significant improvement. Such a model has the potential to markedly enhance defensive coaches' ability to formulate game plans and adequately prepare their players, thus mitigating the opposing offense's yardage and point gains.Keywords: lift, NFL, sports analytics, XGBoost
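A small sketch of one common way to compute lift against a random-guessing baseline derived from the class priors; the play labels are invented, and this is not necessarily the exact lift definition used in the study.

```python
from collections import Counter

def lift(y_true, y_pred):
    """Lift = model hit rate / expected hit rate of prior-weighted random guessing."""
    hits = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    priors = Counter(y_true)
    chance = sum((n / len(y_true)) ** 2 for n in priors.values())
    return hits / chance

y_true = ["run", "short", "deep", "run", "short", "run", "deep", "short", "run", "run"]
y_pred = ["run", "short", "short", "run", "short", "run", "deep", "deep", "run", "run"]
print(round(lift(y_true, y_pred), 2))   # model hit rate 0.8 vs. chance 0.38 -> lift ~2.1
```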
Procedia PDF Downloads 56
3477 Estimation of Particle Size Distribution Using Magnetization Data
Authors: Navneet Kaur, S. D. Tiwari
Abstract:
Magnetic nanoparticles possess fascinating properties which make their behavior unique in comparison to corresponding bulk materials. Superparamagnetism is one such interesting phenomenon exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes greater than their magnetic anisotropy energy, and so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function. However, the estimated fit parameters in this case are found to be unphysical, because the particle size distribution is not taken into account. In this work, an analysis of magnetization data on NiO nanoparticles is presented considering the effect of particle size distribution. Nanoparticles of NiO of two different sizes are prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room temperature X-ray diffraction patterns confirm the formation of a single phase of NiO. The diffraction lines are seen to be quite broad, indicating the nanocrystalline nature of the samples. The average crystallite sizes are estimated to be about 6 and 8 nm. The samples are also characterized by transmission electron microscopy. The magnetization of both samples is measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization are measured as a function of temperature to determine the bifurcation temperature. The magnetization is also measured at several temperatures in the superparamagnetic region. The data are fitted to an appropriate expression that incorporates a distribution in particle size, following a least squares fit procedure. The computer codes are written in Python. The presented analysis is found to be very useful for estimating the particle size distribution present in the samples. The estimated distributions are compared with those determined from transmission electron micrographs.
Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism
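A minimal Python sketch of the kind of size-distribution-aware Langevin analysis described above: the superparamagnetic magnetization is averaged over a log-normal distribution of particle moments. The parameter values are illustrative assumptions; in practice they would be fitted to the measured M(H) data (e.g. with scipy.optimize.curve_fit) to recover the size distribution.

```python
import numpy as np

MU_B = 9.274e-24   # Bohr magneton (J/T)
K_B = 1.381e-23    # Boltzmann constant (J/K)

def trapz(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x series limit for numerical safety."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-6
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    out[small] = x[small] / 3.0
    return out

def reduced_magnetization(H, T, mu_median, sigma, n=400):
    """M/Ms for particle moments mu (in Bohr magnetons) following a log-normal
    distribution; each moment contributes in proportion to mu * L(mu*MU_B*H / (K_B*T))."""
    mu = np.exp(np.linspace(np.log(mu_median) - 4 * sigma,
                            np.log(mu_median) + 4 * sigma, n))
    f = np.exp(-np.log(mu / mu_median) ** 2 / (2 * sigma ** 2)) / (mu * sigma * np.sqrt(2 * np.pi))
    H = np.atleast_1d(np.asarray(H, dtype=float))
    m = np.array([trapz(f * mu * langevin(mu * MU_B * h / (K_B * T)), mu) for h in H])
    return m / trapz(f * mu, mu)

fields = np.linspace(0.5, 5.0, 4)   # tesla (illustrative)
print(np.round(reduced_magnetization(fields, T=100, mu_median=800, sigma=0.5), 3))
```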
Procedia PDF Downloads 143
3476 Impact of Emotional Intelligence of Principals in High Schools on Teachers Conflict Management: A Case Study on Secondary Schools, Tehran, Iran
Authors: Amir Ahmadi, Hossein Ahmadi, Alireza Ahmadi
Abstract:
Emotional Intelligence (EI) has been defined as the ability to empathize, persevere, control impulses, communicate clearly, make thoughtful decisions, solve problems, and work with others in a way that earns friends and success. These abilities allow an individual to recognize and regulate emotion, develop self-control, set goals, develop empathy, resolve conflicts, and develop skills needed for leadership and effective group participation. Due to the increasing complexity of organizations and the different ways of thinking, attitudes and beliefs of individuals, conflict, as an important part of organizational life, has been examined frequently. The main point is that conflict in an organization is not necessarily undesirable; it can increase creativity, promote innovation, or prevent the waste of the organization's energy and resources. The purpose of this study was to investigate the relation between principals' emotional intelligence, as one of the affecting factors, and conflict management among teachers. This relation was analyzed through cluster sampling with a sample of 120 individuals. The results of the study showed that, at the 95% level of confidence, two secondary hypotheses (i.e. the relation between the emotional intelligence of principals and the use of competition and cooperation strategies of conflict management among teachers) were confirmed, but the other three secondary hypotheses (i.e. the relation between the emotional intelligence of principals and the use of avoidance, adaptation and adaptability strategies of conflict management among teachers) were rejected. The primary hypothesis (i.e. the relation between the emotional intelligence of principals and conflict management among teachers) was supported.
Keywords: emotional intelligence, conflict, conflict management, strategies of conflict management
Procedia PDF Downloads 356