Search results for: total quality management
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24307

3937 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process

Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa

Abstract:

Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially for sewage and wastewater treatment. Recently, the rising demand to increase renewable energy from organic matter has caused the AD substrate spectrum to expand to include a wider variety of organic materials, such as agricultural residues and farm manure, which are generated at around 140 billion metric tons annually worldwide. The problem, however, is that agricultural wastes are composed of materials that are heterogeneous and difficult to degrade, particularly lignin, which makes up about 0–40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the bioadditive concentration that was optimal for overall process improvement and yield increase. The batch experiments were set up using 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating at mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation model was used to non-linearly regress the resulting data to estimate gas production potential, production rates, and the duration of lag phases as indicators of the degree of lignin inhibition. The results showed that lignin had a strong inhibitory effect on the AD process: the higher the lignin concentration, the greater the inhibition. The modelling also showed that the rates of gas production were influenced by the concentration of the lignin substrate added to the system – the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective rate of gas production in ml/gVS.day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the 300 mg rate increased by 0.1 ml/gVS.day over that of the 200 mg. The impact of the yeast-based bioadditive on the production rate was most significant in the 400 mg and 500 mg digesters, where the rate improved by 0.1 ml/gVS.day and 0.2 ml/gVS.day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study on its application to the AD of agricultural residues with high lignin content will be the next step in this research.
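For readers who wish to reproduce the kinetic analysis, the sketch below fits the modified Gompertz model to a hypothetical cumulative biogas curve with SciPy; the data values, initial guesses, and variable names are illustrative assumptions, not figures from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas yield at time t (modified Gompertz model).
    P: production potential, Rm: maximum production rate, lam: lag phase."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Hypothetical cumulative biogas data (mL/gVS) over a 56-day batch run
t_days = np.arange(0, 57, 7)
y_obs = np.array([0.0, 5.0, 30.0, 70.0, 105.0, 128.0, 140.0, 146.0, 149.0])

# Non-linear regression; p0 is a rough initial guess for (P, Rm, lag)
popt, _ = curve_fit(modified_gompertz, t_days, y_obs, p0=[150.0, 3.0, 5.0])
P_fit, Rm_fit, lag_fit = popt
print(f"potential={P_fit:.1f} mL/gVS, rate={Rm_fit:.2f} mL/gVS.day, lag={lag_fit:.1f} d")
```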

Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas

Procedia PDF Downloads 95
3936 Handy EKG: Low-Cost ECG For Primary Care Screening In Developing Countries

Authors: Jhiamluka Zservando Solano Velasquez, Raul Palma, Alejandro Calderon, Servio Paguada, Erick Marin, Kellyn Funes, Hana Sandoval, Oscar Hernandez

Abstract:

Background: Screening for cardiac conditions in primary care in developing countries can be challenging, and Honduras is no exception. One of the main limitations is the underfunding of the healthcare system in general, causing conventional ECG acquisition to become a secondary priority. Objective: Development of a low-cost ECG to improve screening of arrhythmias in primary care and communication with a specialist in secondary and tertiary care. Methods: Design of a portable, pocket-sized, low-cost 3-lead ECG (Handy EKG). The device is autonomous and has Wi-Fi/Bluetooth connectivity options. A mobile app was designed that can access online servers running machine learning, a subset of artificial intelligence, to learn from the data and aid clinicians in their interpretation of readings. Additionally, the device would use the online servers to transfer the patient's data and readings to a specialist in secondary and tertiary care. Fifty randomized patient volunteers participated in testing the device. The patients had no previous cardiac-related conditions, and readings were taken: one reading was performed with the conventional ECG and three readings with the Handy EKG using different lead positions. This project was possible thanks to funding provided by the National Autonomous University of Honduras. Results: Preliminary results show that the Handy EKG performs readings of cardiac activity similar to those of a conventional electrocardiograph in leads I, II, and III, depending on the position of the leads, at a lower cost. The wave and segment duration, amplitude, and morphology of the readings were similar to the conventional ECG, and interpretation made it possible to conclude whether there was an arrhythmia or not. Two cases of prolonged PR segment were found in both ECG device readings. Conclusion: Using a frugal innovation approach can allow lower-income countries to develop innovative medical devices such as the Handy EKG to fulfill unmet needs at lower prices without compromising effectiveness, safety, and quality. The Handy EKG provides a solution for primary care screening at a much lower cost and allows for convenient storage of the readings in online servers, where clinical data of patients can then be accessed remotely by cardiology specialists.
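For context on how a 3-lead recording relates to the standard limb leads, the sketch below derives lead III and the augmented limb leads from leads I and II using Einthoven's and Goldberger's relations; the sample values are hypothetical and are not taken from the Handy EKG study.

```python
import numpy as np

# Hypothetical digitized limb-lead samples (millivolts) from a 3-lead recording
lead_I = np.array([0.05, 0.10, 0.85, 0.15, 0.00])   # RA -> LA
lead_II = np.array([0.10, 0.20, 1.10, 0.25, 0.05])  # RA -> LL

# Einthoven's law: lead III = lead II - lead I
lead_III = lead_II - lead_I

# Augmented limb leads derived from leads I and II (Goldberger's relations)
aVR = -(lead_I + lead_II) / 2
aVL = lead_I - lead_II / 2
aVF = lead_II - lead_I / 2
print(lead_III, aVR, aVL, aVF, sep="\n")
```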

Keywords: low-cost hardware, portable electrocardiograph, prototype, remote healthcare

Procedia PDF Downloads 182
3935 Designing Presentational Writing Assessments for the Advanced Placement World Language and Culture Exams

Authors: Mette Pedersen

Abstract:

This paper outlines the criteria that assessment specialists use when they design the 'Persuasive Essay' task for the four Advanced Placement World Language and Culture Exams (AP French, German, Italian, and Spanish). The 'Persuasive Essay' is a free-response, source-based, standardized measure of presentational writing. Each 'Persuasive Essay' item consists of three sources (an article, a chart, and an audio recording) and a prompt, which is a statement of the topic phrased as an interrogative sentence. Due to the richness of its source materials and the amount of time that test takers are given to prepare for and write their responses (a total of 55 minutes), the 'Persuasive Essay' is the free-response task on the AP World Language and Culture Exams that goes to the greatest lengths to unleash the test takers' proficiency potential. The author focuses on the work that goes into designing the 'Persuasive Essay' task, outlining best practices for the selection of topics and sources, the interplay that needs to be present among the sources, and the thinking behind the articulation of prompts for the 'Persuasive Essay' task. Using released 'Persuasive Essay' items from the AP World Language and Culture Exams and accompanying data on test taker performance, the author shows how different passages, and features of passages, have succeeded (and sometimes not succeeded) in eliciting writing proficiency among test takers over time. Data from approximately 215,000 test takers per year from 2014 to 2017 and approximately 35,000 test takers per year from 2012 to 2013 form the basis of this analysis. The conclusion of the study is that test taker performance improves significantly when the sources that test takers are presented with express directly opposing viewpoints. Test taker performance also improves when the interrogative prompt that the test takers respond to is phrased as a yes/no question. Finally, an analysis of the linguistic difficulty and complexity levels of the printed sources reveals that test taker performance does not decrease when the complexity level of the article in the 'Persuasive Essay' increases. This last text complexity analysis is performed with the help of the 'ETS TextEvaluator' tool and the 'Complexity Scale for Information Texts (Scale)', two tools which, in combination, provide a rubric and a fully automated technology for evaluating nonfiction and informational texts in English translation.

Keywords: advanced placement world language and culture exams, designing presentational writing assessments, large-scale standardized assessments of written language proficiency, source-based language testing

Procedia PDF Downloads 146
3934 Analysis of the Impact of Climate Change on Maize (Zea Mays) Yield in Central Ethiopia

Authors: Takele Nemomsa, Girma Mamo, Tesfaye Balemi

Abstract:

Climate change refers to a change in the state of the climate that can be identified (e.g., using statistical tests) by changes in the mean and/or variance of its properties and that persists for an extended period, typically decades or longer. In Ethiopia, maize production in relation to climate change at regional and sub-regional scales has not been studied in detail. Thus, this study aimed to analyse the impact of climate change on maize yield in Ambo District, Central Ethiopia. To this effect, weather data, soil data and maize experimental data for the Arganne hybrid were used. APSIM software was used to investigate the response of maize (Zea mays) yield to different agronomic management practices using current and future (2020s–2080s) climate data. The climate change projection data, which were downscaled using SDSM, were used as the climate input for the impact analysis. Compared to agronomic practices, the impact of climate change on Arganne in Central Ethiopia is small. However, within the 2020s–2080s in the Ambo area, the yield of the Arganne hybrid is projected to decrease by 1.06% to 2.02% in the 2020s and by 1.56% in the 2050s, while in the 2080s it is projected to increase by 1.03% to 2.07%. Thus, to adapt to the changing climate, farmers should consider increasing plant density and fertilizer rate per hectare.

Keywords: APSIM, downscaling, response, SDSM

Procedia PDF Downloads 388
3933 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology

Authors: Tatsuhiko Aizawa, Hiroshi Morita

Abstract:

Tool steels such as SKD11 and SKH51 have been utilized as punch and die substrates for cold stamping, forging, and fine blanking processes. Heat-treated SKD11 punches with a hardness of 700 HV have worked well in the stamping of SPCC, normal steel plates, and non-ferrous alloys such as brass sheet. However, they suffered severe damage in the fine blanking of holes smaller than 1.5 mm in diameter. Under a high aspect ratio of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line. The heat-treated punches also risked chipping at their edges. To be free from such damage, the blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual toughness structure is proposed to provide a solution to this engineering issue in production. A low-temperature plasma nitriding process was utilized to form a thick nitrogen-supersaturated layer on the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer, 50 μm thick and without nitride precipitates, was formed as a high nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. This outer high nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a nitrogen solute content plateau of 4 mass% down to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping brass sheet with a thickness of 1 mm using this dually toughened SKD11 punch, the punch life was extended from 500 K shots to 10,000 K shots, attaining a much more stable production line for brass American snaps. Furthermore, with the aid of a masking technique, the punch side-surface layer, 50 μm thick, was modified by this high nitrogen supersaturation process to have a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers were alternately aligned from the punch head to the punch bottom. This flexible structuring promoted the overall rigidity and toughness of the extremely small-diameter punch.

Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch

Procedia PDF Downloads 92
3932 Fabrication and Characterisation of Additive Manufactured Ti-6Al-4V Parts by Laser Powder Bed Fusion Technique

Authors: Norica Godja, Andreas Schindel, Luka Payrits, Zsolt Pasztor, Bálint Hegedüs, Petr Homola, Jan Horňas, Jiří Běhal, Roman Ruzek, Martin Holzleitner, Sascha Senck

Abstract:

In order to reduce fuel consumption and CO₂ emissions in the aviation sector, innovative solutions are being sought to reduce the weight of aircraft, including additive manufacturing (AM). Of particular importance are the excellent mechanical properties that are required for aircraft structures. Ti6Al4V alloys, with their high mechanical properties in relation to weight, can reduce the weight of aircraft structures compared to structures made of steel and aluminium. Currently, conventional processes such as casting and CNC machining are used to obtain the desired structures, resulting in high raw material removal, which in turn leads to higher costs and impacts the environment. Additive manufacturing (AM) offers advantages in terms of weight, lead time, design, and functionality and enables the realisation of alternative geometric shapes with high mechanical properties. However, there are currently technological shortcomings that have led to AM not being approved for structural components with high safety requirements. An assessment of damage tolerance for AM parts is required, and quality control needs to be improved. Pores and other defects cannot be completely avoided at present, but they should be kept to a minimum during manufacture. The mechanical properties of the manufactured parts can be further improved by various treatments. The influence of different treatment methods (heat treatment, CNC milling, electropolishing, chemical polishing) and operating parameters was investigated by scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM/EDX), X-ray diffraction (XRD), electron backscatter diffraction (EBSD) and measurements with a focused ion beam (FIB), taking into account surface roughness, possible anomalies in the chemical composition of the surface and possible cracks. The results of the characterisation of the as-built and treated samples are discussed and presented in this paper. These results were generated within the framework of the 3TANIUM project, which is financed by the EU under contract number 101007830.

Keywords: Ti6Al4V alloys, laser powder bed fusion, damage tolerance, heat treatment, electropolishing, potential cracking

Procedia PDF Downloads 88
3931 Explanation Conceptual Model of the Architectural Form Effect on Structures in Building Aesthetics

Authors: Fatemeh Nejati, Farah Habib, Sayeh Goudarzi

Abstract:

Architecture and structure have always been closely interrelated and should be integrated into a unified, coherent and beautiful whole, yet in the contemporary era structure and architecture often proceed separately. The purpose of architecture is the art of creating form, space and order in the service of people, while the goal of the structural engineer is the transfer of loads through the structure. This research pursues that goal by examining the relationship between architectural form and structure from its inception to the present day. Finally, by identifying the main components of structural design in interaction with architectural form, an effective step is taken toward professional training and toward offering solutions to professionals. Therefore, after reviewing the evolution of structural and architectural coordination in various historical periods, as well as how structural form has been arrived at in different times and places, the required components are identified and tested in order to present the final theory. This research indicates that architectural and structural form share an aesthetic link, which is influenced by a number of components that can be catalogued and which follows a regular order throughout history. The research methodology is analytic and comparative, using analytical and matrix diagrams, with library research and interviews as the data-collection tools.

Keywords: architecture, structural form, structural and architectural coordination, effective components, aesthetics

Procedia PDF Downloads 219
3930 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model

Authors: Sidrah Ahmed

Abstract:

River and lake flows are modeled mathematically by the shallow water equations, which are depth-averaged Reynolds-averaged Navier-Stokes equations under the Boussinesq approximation. Temperature stratification dynamics influence water quality and mixing characteristics, mainly due to atmospheric conditions including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical scales and are not sufficient to estimate the small-scale turbulence effects of temperature-variation-induced characteristics of shallow flows. Wind shear stress over the water surface influences flow patterns, heat fluxes and the thermodynamics of water bodies as well. Hence it is crucial to couple temperature gradients with the shallow water model to estimate the atmospheric effects on flow patterns. The Ripa system has been introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system's Jacobian matrix are real and distinct. The time steps of a numerical scheme are estimated from the eigenvalues of the system. The solution to the Riemann problem of the Ripa model is composed of shock, contact, and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents a comparison of four different finite difference schemes for the numerical solution of the Riemann problem for the Ripa model. These schemes include the Lax-Friedrichs, Lax-Wendroff, and MacCormack schemes and a higher-order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods. Temporal accuracy is achieved by employing the TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods. It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher-order difference schemes produce considerably better results.
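To make the setting concrete, the sketch below applies the simplest of the compared schemes, Lax-Friedrichs, to a one-dimensional flat-bottom Ripa system with Riemann (dam-break-like) initial data; the grid size, initial states, and end time are illustrative assumptions, and the boundary treatment is a simple outflow padding rather than anything taken from the paper.

```python
import numpy as np

g = 9.81

def flux(U):
    """Flux of the 1D flat-bottom Ripa system, U = [h, h*u, h*theta]."""
    h, hu, hth = U
    u = hu / h
    th = hth / h
    return np.array([hu, hu * u + 0.5 * g * h * h * th, hu * th])

# Riemann-type initial data on [0, 1]: a dam break with a temperature jump
nx, cfl = 400, 0.45
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
h = np.where(x < 0.5, 2.0, 1.0)
u = np.zeros(nx)
theta = np.where(x < 0.5, 1.5, 1.0)
U = np.array([h, h * u, h * theta])

t, t_end = 0.0, 0.05
while t < t_end:
    hh, uu, th = U[0], U[1] / U[0], U[2] / U[0]
    smax = np.max(np.abs(uu) + np.sqrt(g * hh * th))   # fastest characteristic speed
    dt = min(cfl * dx / smax, t_end - t)
    F = flux(U)
    # Lax-Friedrichs update (outflow boundaries via edge padding)
    Up = np.pad(U, ((0, 0), (1, 1)), mode="edge")
    Fp = np.pad(F, ((0, 0), (1, 1)), mode="edge")
    U = 0.5 * (Up[:, 2:] + Up[:, :-2]) - dt / (2 * dx) * (Fp[:, 2:] - Fp[:, :-2])
    t += dt

print("water depth range:", U[0].min(), U[0].max())
```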

Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients

Procedia PDF Downloads 208
3929 Students’ Level of Knowledge Construction and Pattern of Social Interaction in an Online Forum

Authors: K. Durairaj, I. N. Umar

Abstract:

The asynchronous discussion forum is one of the most widely used activities in learning management system environments. An online forum allows participants to interact and construct knowledge, and it can be used to complement face-to-face sessions in blended learning courses. However, the extent to which students perceive the benefits or advantages of the forum remains to be seen. Through content and social network analyses, instructors will be able to gauge the students' engagement and knowledge construction level. Thus, this study aims to analyze the students' level of knowledge construction and their level of participation in online discussion. It also attempts to investigate the relationship between the level of knowledge construction and their social interaction patterns. The sample involves 23 students undertaking a master's course at a public university in Malaysia. The asynchronous discussion forum was conducted for three weeks as part of the course requirements. The findings indicate that the level of knowledge construction is quite low. Also, a density value of 0.11 indicates that the overall communication among the participants in the forum is low. This study reveals strong and significant correlations between SNA measures (in-degree centrality, out-degree centrality) and the level of knowledge construction. Thus, allocating these active students to different groups helps interactive discussion take place. Finally, based upon the findings, some recommendations to increase students' level of knowledge construction and for further research are proposed.
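A minimal sketch of the social network analysis step, assuming a hypothetical reply network and using the networkx library; it computes the same kinds of measures reported here (network density, in-degree and out-degree centrality), but the edge list and participant labels are invented for illustration.

```python
import networkx as nx

# Hypothetical reply network from a discussion forum: an edge A -> B means
# participant A replied to a post by participant B.
replies = [("s1", "s2"), ("s1", "s3"), ("s2", "s1"), ("s3", "s1"),
           ("s4", "s1"), ("s2", "s5"), ("s5", "s2")]
G = nx.DiGraph(replies)

density = nx.density(G)               # proportion of possible ties actually present
in_c = nx.in_degree_centrality(G)     # who receives the most replies
out_c = nx.out_degree_centrality(G)   # who sends the most replies

print(f"network density: {density:.2f}")
print("in-degree centrality:", in_c)
print("out-degree centrality:", out_c)
```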

Keywords: asynchronous discussion forums, content analysis, knowledge construction, social network analysis

Procedia PDF Downloads 378
3928 The Role of Autophagy Modulation in Angiotensin-II Induced Hypertrophy

Authors: Kitti Szoke, Laszlo Szoke, Attila Czompa, Arpad Tosaki, Istvan Lekli

Abstract:

Autophagy plays an important role in cardiac hypertrophy, which is one of the most common causes of heart failure in the world. This self-degradative catabolic process is responsible for protein quality control, balancing sources of energy at critical times, and the elimination of damaged organelles. Autophagic activity can be triggered by starvation, oxidative stress, or pharmacological agents such as rapamycin. This induced autophagy can promote cell survival during starvation or pathological stress. In this study, the effect of the induced autophagic process on angiotensin-induced hypertrophic H9c2 cells was investigated. H9c2 cells were used as an in vitro model. To induce hypertrophy, cells were treated with 10000 nM angiotensin-II, and to activate autophagy, 100 nM rapamycin treatment was used. The following groups were formed: 1: control, 2: 10000 nM AT-II, 3: 100 nM rapamycin, 4: 100 nM rapamycin pretreatment followed by 10000 nM AT-II. Cell viability was examined via the MTT cell proliferation assay. The cells were stained with rhodamine-conjugated phalloidin and DAPI to visualize F-actin filaments and cell nuclei, and the cell size alteration was then examined under a fluorescence microscope. Furthermore, the expression levels of autophagic and apoptotic proteins such as Beclin-1, p62, LC3B-II, and cleaved caspase-3 were evaluated by Western blot. The MTT assay results suggest that the pharmaceutical agents used at the tested concentrations did not have a toxic effect; however, in group 3, a slight decrease in cell viability was detected. In response to AT-II treatment, a significant increase in cell size was detected; cells became hypertrophic. However, rapamycin pretreatment slightly reduced the cell size compared to group 2. Western blot results showed that AT-II treatment induced autophagy, as increased expression of Beclin-1, p62, and LC3B-II was observed. However, due to the incomplete autophagy, apoptotic cleaved caspase-3 expression also increased. Rapamycin pretreatment up-regulated Beclin-1 and LC3B-II and down-regulated p62 and cleaved caspase-3, indicating that rapamycin-induced autophagy can restore normal autophagic flux. Taken together, our results suggest that rapamycin-activated autophagy reduces angiotensin-II-induced hypertrophy.

Keywords: angiotensin-II, autophagy, H9c2 cell line, hypertrophy, rapamycin

Procedia PDF Downloads 150
3927 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering

Authors: R. Nandhini, Gaurab Mudbhari

Abstract:

Ranging from the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized many fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art models in domains where images are the key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, which has led to the development of models that utilize more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT serves as the other core model, reflecting a modern classification approach that uses a self-attention mechanism; this makes it more robust, as self-attention allows it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques such as k-means applied to embeddings derived from CNN models. DEC, a prominent model in the field of clustering, has gained the attention of many ML engineers because of its ability to combine feature learning and clustering into a single framework; its main goal is to improve clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, utilizing a probabilistic clustering approach.
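As a minimal illustration of the conventional clustering baseline mentioned above (k-means on CNN-derived embeddings), the sketch below clusters synthetic stand-in embeddings with scikit-learn and scores them with the silhouette coefficient; in practice the embeddings would come from the penultimate layer of a trained CNN, and the dimensions and cluster count here are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Stand-in for CNN embeddings: in practice these would be the penultimate-layer
# features of a trained CNN applied to the image dataset (one 128-d vector per image).
centers = rng.normal(size=(3, 128)) * 5.0
embeddings = np.vstack([c + rng.normal(size=(200, 128)) for c in centers])

# k-means on the embedding space, as in the conventional clustering baseline
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
labels = kmeans.labels_

# Internal quality measure (no ground-truth labels needed)
print("silhouette score:", round(silhouette_score(embeddings, labels), 3))
```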

Keywords: machine learning, deep learning, image classification, image clustering

Procedia PDF Downloads 21
3926 Functional Dimension of Reuse: Use of Antalya Kaleiçi Traditional Dwellings as Hotel

Authors: Dicle Aydın, Süheyla Büyükşahin Sıramkaya

Abstract:

The concept of conservation gained importance especially in the 19th century, finding value with the changes and developments experienced globally. The basic values at the essence of the concept are important for the continuity of historical and cultural fabrics that have a character of their own. The reuse of settlements and spaces carrying historical and cultural values, within the frame of socio-cultural and socio-economic conditions, is related to functional value. The functional dimension of reuse signifies interrogating the usage potential of a building for an aim other than the one it was designed for. If a building carrying historical and cultural values cannot be used with its own function because of environmental, economic, structural or functional reasons, maintaining its reuse is advantageous from the point of view of environmental ecology. By giving it a new function, a requirement of society is fulfilled and, at the same time, a cultural entity is conserved because of its functional value. In this study, the functional dimension of reuse is exemplified in Antalya Kaleiçi, which has a special location and importance with its natural, cultural and historical heritage characteristics. The Antalya Kaleiçi settlement preserves its liveliness as a touristic urban fabric with its almost fifty thousand years of history, its traditional urban form, its civil architectural examples of the 18th–19th century reflecting the life style of the region, and its monumental buildings. The civil architectural examples in the fabric have a special character formed according to the Mediterranean climate, with their outer sofas (open or closed), one, two or three storeys, courtyards and oriels. In the study, the reuse of five civil architectural examples as a boutique hotel, forming a whole with their environmental arrangements, is investigated, and it is analyzed how the spatial requirements of a boutique hotel are fulfilled in traditional dwellings. The usage of a cultural entity as a boutique hotel is evaluated under the headings of i. functional requirements, ii. adequacy of spatial dimensions, and iii. functional organization. The hotel, with a capacity of 70 beds and 28 rooms in total, contains closed and open restaurants, a kitchen, a pub, a lobby, and administrative offices. On the second and third floors, the hotel, which is surrounded by narrow streets in three directions, expands toward the urban space by means of oriels. This boutique hotel, formed by five different dwellings with similar plan schemes in the traditional fabric, is distinctive with its structures opening to the outside and connected to each other by means of courtyards, and with its outdoor spaces, which gain mobility because of the elevation differences among the courtyards.

Keywords: reuse, adaptive reuse, functional dimension of reuse, traditional dwellings

Procedia PDF Downloads 322
3925 Advancements in AI Training and Education for a Future-Ready Healthcare System

Authors: Shamie Kumar

Abstract:

Background: Radiologists and radiographers (RR) need to educate themselves and their colleagues to ensure that AI is integrated safely, usefully, and in a meaningful way, always in a direction that benefits patients. AI education and training are fundamental to the way RR work and interact with it, so that they feel confident using it as part of their clinical practice and understand it. Methodology: This exploratory research outlines the current educational and training gaps for radiographers and radiologists in AI radiology diagnostics. It reviews the status, skills, and challenges of educating and teaching; the use of artificial intelligence within daily clinical practice; why it is fundamental; and why learning about AI is essential for wider adoption. Results: Current knowledge among RR is very sparse and country-dependent, and with radiologists being the majority of the end-users of AI, their targeted AI training and learning opportunities surpass those available to radiographers. Many papers suggest that there is a lack of knowledge, understanding, and training in AI in radiology amongst RR, and because of this, they are unable to comprehend exactly how AI works and integrates, the benefits of using it, and its limitations. There is an indication that they wish to receive specific training; however, both professions need to actively engage in learning about it and develop the skills that enable them to use it effectively. There is expected variability among the professions in their degree of commitment to AI, as most do not understand its value; this only adds to the need to train and educate RR. Currently, there is little AI teaching in either undergraduate or postgraduate study programs, and it is not readily available. In addition, there are other training programs, courses, workshops, and seminars available; most of these are short, single sessions rather than a continuation of learning, covering a basic understanding of AI and peripheral topics such as ethics, legal issues, and the potential of AI. There appears to be an obvious gap between the content that training programs offer and what RR need and want to learn. Because of this, there is a risk of ineffective learning outcomes and of attendees feeling a lack of clarity and depth of understanding of the practicality of using AI in a clinical environment. Conclusion: Education, training, and courses need to have defined learning outcomes with relevant concepts, ensuring theory and practice are taught as a continuation of the learning process based on use cases specific to a clinical working environment. Undergraduate and postgraduate courses should be developed robustly, ensuring that they are delivered with expertise in the field; in addition, training and other programs should be delivered as continued professional development and aligned with accredited institutions for a degree of quality assurance.

Keywords: artificial intelligence, training, radiology, education, learning

Procedia PDF Downloads 92
3924 Spatio-Temporal Data Mining with Association Rules for Lake Van

Authors: Tolga Aydin, M. Fatih Alaeddinoğlu

Abstract:

Throughout history, people have made estimates and inferences about the future by using their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge at hand for strategic decisions. Therefore, different methods have been developed. Data mining by association rule learning is one such method. The Apriori algorithm, one of the well-known association rule learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make the Apriori algorithm a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake in Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease as a result of changes in the amount of water it holds. In this study, evaporation, humidity, lake altitude, amount of rainfall and temperature parameters recorded in the Lake Van region throughout the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying possible reasons for overflows and underflows may be used to alert experts to take precautions and make the necessary investments.
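A minimal sketch of the rule-mining step, assuming hypothetical discretized weather observations rather than the actual Lake Van records; for clarity it counts itemset support by brute force (the Apriori algorithm reaches the same frequent itemsets more efficiently by pruning the candidate lattice) and then derives association rules above a confidence threshold.

```python
from itertools import combinations

# Hypothetical discretized spatio-temporal records: each transaction is the set of
# conditions observed in one (station, month) cell of the Lake Van region.
transactions = [
    {"rainfall=high", "temperature=low", "lake_level=rising"},
    {"rainfall=high", "humidity=high", "lake_level=rising"},
    {"rainfall=low", "evaporation=high", "lake_level=falling"},
    {"rainfall=high", "temperature=low", "humidity=high", "lake_level=rising"},
    {"rainfall=low", "evaporation=high", "temperature=high", "lake_level=falling"},
]
min_support, min_confidence = 0.4, 0.8
n = len(transactions)

def support(itemset):
    return sum(itemset <= t for t in transactions) / n

# Frequent itemsets by brute-force support counting
items = sorted(set().union(*transactions))
frequent = [frozenset(c) for k in range(1, 4)
            for c in combinations(items, k) if support(set(c)) >= min_support]

# Association rules X -> Y with confidence = support(X ∪ Y) / support(X)
for itemset in frequent:
    for k in range(1, len(itemset)):
        for lhs in map(frozenset, combinations(itemset, k)):
            rhs = itemset - lhs
            conf = support(itemset) / support(lhs)
            if conf >= min_confidence:
                print(set(lhs), "->", set(rhs), f"(conf={conf:.2f})")
```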

Keywords: apriori algorithm, association rules, data mining, spatio-temporal data

Procedia PDF Downloads 377
3923 Study of Some Biological Profiles as Limiting Factors of Male Fertility in the Region of Batna, Algeria

Authors: Bousnane Nour El Houda, Chennaf Ali, Yahia Mouloud, Benbia Souhila

Abstract:

Male infertility, or the inability of a man to procreate, is a major public health problem and a leading cause of marital discord in several countries, such as Algeria. The objective of this work is to study some biological profiles of infertile men from the city of Batna, Algeria, and to identify the causes of infertility in a population of infertile males in order to improve its management and to establish a good therapeutic strategy. The study lasted 10 months in the Department of Urology of the University Hospital of Batna and covered a population of 140 infertile subjects. For every man, a series of assessments was performed to determine the exact causes of infertility. We found 102 cases of primary infertility against 38 cases of secondary infertility; the average age of the men was 39.7 years, with a predominance of the 46-50 year age group. 34.29% of subjects had genital infections, against 17.14% with varicocele. 132 men presented spermiological abnormalities: asthenospermia (AS) in 27.27% of cases and astheno-teratospermia (OATS) in 11.36%, while azoospermia accounted for 5.07%. Genital infections are the main cause of infertility (34.29% of cases). The spermocytogram results showed a predominance of head abnormalities (41.70%), while flagellum abnormalities represented 33.83%. The assay of seminal plasma carnitine showed no pathological cases, which makes it difficult to determine its association with infertility. By contrast, some disturbances in fructose and zinc were reported.

Keywords: male infertility, spermogram, spermocytogram, biological profiles

Procedia PDF Downloads 337
3922 Faculty Use of Geospatial Tools for Deep Learning in Science and Engineering Courses

Authors: Laura Rodriguez Amaya

Abstract:

Advances in science, technology, engineering, and mathematics (STEM) are viewed as important to countries' national economies and their capacity to be competitive in the global economy. However, many countries experience low numbers of students entering these disciplines. To strengthen the professional STEM pipelines, it is important that students are retained in these disciplines at universities. Scholars agree that to retain students in universities' STEM degrees, it is necessary that STEM course content show the relevance of these academic fields to students' daily lives. By increasing students' understanding of the importance of these degrees and careers, students' motivation to remain in these academic programs can also increase. An effective way to make STEM content relevant to students' lives is the use of geospatial technologies and geovisualization in the classroom. The Geospatial Revolution, and the science and technology associated with it, has provided scientists and engineers with an incredible amount of data about Earth and Earth systems. These data can be used in the classroom to support instruction and make content relevant to all students. The purpose of this study was to find out the prevalence of the use of geospatial technologies and geovisualization as teaching practices in a university in the USA. The Teaching Practices Inventory survey, a modified version of the Carl Wieman Science Education Initiative Teaching Practices Inventory, was selected for the study. Faculty in the STEM disciplines who participated in a summer learning institute at a 4-year university in the USA constituted the population selected for the study. One of the summer learning institute's main purposes was to have an impact on the teaching of STEM courses, particularly the teaching of gateway courses taken by many STEM majors. The sample population for the study is 97.5% of the total number of summer learning institute participants. Basic descriptive statistics were performed with the Statistical Package for the Social Sciences (SPSS) to find out: 1) the percentage of faculty using geospatial technologies and geovisualization; 2) whether the faculty member's department impacted their use of geospatial tools; and 3) whether the number of years in a teaching capacity impacted their use of geospatial tools. Findings indicate that only 10 percent of respondents had used geospatial technologies, and 18 percent had used geospatial visualization. In addition, the use of geovisualization among faculty of different disciplines was broader than the use of geospatial technologies. The use of geospatial technologies was concentrated in the engineering departments. The data seem to indicate a lack of incorporation of geospatial tools in STEM education. The use of geospatial tools is an effective way to engage students in deep STEM learning. Future research should look at the effect on student learning and retention in science and engineering programs when geospatial tools are used.

Keywords: engineering education, geospatial technology, geovisualization, STEM

Procedia PDF Downloads 255
3921 Monitoring Public Transportation in Developing Countries Using Automatic Vehicle Location System: A Case Study

Authors: Ahmed Osama, Hassan A. Mahdy, Khalid A. Kandil, Mohamed Elhabiby

Abstract:

Automatic Vehicle Location (AVL) systems have been used worldwide for more than twenty years and have shown great success in public transportation management and monitoring. The Cairo public bus service suffers from several problems, such as unscheduled stops, unscheduled route deviations, and inaccurate schedules, which have negative impacts on service reliability. This research aims to study those problems for a selected bus route in Cairo using a prototype AVL system. Experimental trips were run on the selected route, and the locations of unscheduled stops, regions of unscheduled deviations, along with other trip time and speed data, were collected. The data were analyzed to demonstrate passengers' reliance on the unscheduled stops compared to the scheduled ones. Trip time was also modeled to assess the unscheduled stops' impact on trip time and to check the accuracy of the applied scheduled trip time. Moreover, the frequency and length of the unscheduled route deviations, as well as their impact on the bus stops, were illustrated. Solutions were proposed for the bus service deficiencies using the AVL system. Finally, recommendations were proposed for further research.

Keywords: automatic vehicle location, public transportation, unscheduled stops, unscheduled route deviations, inaccurate schedule

Procedia PDF Downloads 394
3920 Rating the Importance of Customer Requirements for Green Product Using Analytic Hierarchy Process Methodology

Authors: Lara F. Horani, Shurong Tong

Abstract:

The identification of customer requirements and their preferences is the starting point in the process of product design. Most design methodologies focus on traditional requirements. But in the previous decade, green products and environmental requirements have increasingly attracted attention with the constant increase in consumer awareness of environmental problems (such as the greenhouse effect, global warming, pollution, the energy crisis, and waste management). Determining the importance weights for the customer requirements is an essential and crucial process. This paper used the analytic hierarchy process (AHP) approach to evaluate and rate the customer requirements for green products. With respect to the ultimate goal of customer satisfaction, surveys are conducted using a five-point scale analysis. With the help of this scale, one can derive the weight vectors. This approach can improve the imprecise ranking of customer requirements inherited from studies based on the conventional AHP. Furthermore, the AHP with extent analysis is simple and easy to implement for prioritizing customer requirements. The research is based on data collected through a questionnaire survey of a sample of 160 people belonging to different age, marital status, education and income groups in order to identify customer preferences for green product requirements.
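A minimal sketch of how AHP importance weights can be derived from a pairwise comparison matrix; the four requirement names, the comparison values, and the use of the principal-eigenvector method with Saaty's consistency check are illustrative assumptions rather than data from this survey.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four green-product
# customer requirements: recyclability, energy efficiency, non-toxic materials, cost.
A = np.array([
    [1.0, 3.0, 2.0, 5.0],
    [1/3, 1.0, 1/2, 3.0],
    [1/2, 2.0, 1.0, 4.0],
    [1/5, 1/3, 1/4, 1.0],
])

# Priority weights from the principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: a consistency ratio (CR) below 0.10 is conventionally acceptable
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
CR = CI / RI

print("importance weights:", np.round(weights, 3))
print("consistency ratio:", round(CR, 3))
```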

Keywords: analytic hierarchy process (AHP), green product, customer requirements for green design, importance weights for the customer requirements

Procedia PDF Downloads 247
3919 Hand Movements and the Effect of Using Smart Teaching Aids: Quality of Writing Styles Outcomes of Pupils with Dysgraphia

Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Sajedah Al Yaari, Adham Al Yaari, Ayman Al Yaari, Montaha Al Yaari, Ayah Al Yaari, Fatehi Eissa

Abstract:

Dysgraphia is a neurological disorder of written expression that impairs writing ability and fine motor skills, resulting primarily in problems relating not only to handwriting but also to writing coherence and cohesion. We investigate the properties of smart writing technology to highlight some unique features of the effects it has on the academic performance of pupils with dysgraphia. In Amis, pupils with dysgraphia experience problems expressing their ideas in writing with ordinary writing aids, the default strategy. The Amis data suggest a possible connection between the available writing aids and pupils' writing improvement, and therefore the expression and comprehension of their texts. A group of thirteen pupils with dysgraphia was placed in a regular primary school classroom, and twenty-one pupils were recruited as a control group. To ensure the validity, reliability and accountability of the research, both groups studied writing courses for two semesters, of which the first was equipped with smart writing aids while the second took place in an ordinary classroom. Two pre-tests were undertaken at the beginning of the first two semesters, and two post-tests were administered at the end of both semesters. The tests examined pupils' ability to write coherent, cohesive and expressive texts. The dysgraphia group received the writing course in the first semester in classes with smart technology and produced significantly greater increases in written expression than in an ordinary classroom, and its performance was better than that of the control group in the second semester. The current study concludes that using smart teaching aids is a 'MUST', both for teaching and for learning, in dysgraphia. Furthermore, it is demonstrated that for young pupils with dysgraphia, expressive tasks are more challenging than coherence and cohesion tasks. The study therefore supports the literature suggesting a role for smart educational aids in writing and indicates that smart writing techniques may be an efficient addition to regular educational practices, notably in special educational institutions and speech-language therapy facilities. However, further research is needed on prompting adults with dysgraphia more often than older adults without dysgraphia in order to get them to complete other productive and/or written-skills tasks.

Keywords: smart technology, writing aids, pupils with dysgraphia, hands’ movement

Procedia PDF Downloads 44
3918 Optimal Delivery of Two Similar Products to N Ordered Customers

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered in a geographical area and have placed orders for these products. A vehicle, or a fleet of vehicles, starts from the depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have a limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem which has realistic applications to the distribution of materials to shops, healthcare facilities, or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products. We name these products product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence: first customer 1 must be serviced, then customer 2, then customer 3, and so on. The vehicle has a finite capacity, and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities. The actual preference of each customer becomes known when the vehicle visits the customer. It is also assumed that the quantity that each customer demands is a random variable with a known distribution. The actual demand is revealed upon the vehicle's arrival at the customer's site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the expected total cost among all possible strategies. It is possible to find the optimal routing strategy using a suitable stochastic dynamic programming algorithm. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure, i.e. it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over those routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method may be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering, collects products from ordered customers.
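To convey the flavour of the recursion (not the authors' special-purpose algorithm, which exploits the threshold structure), the sketch below sets up a brute-force expected-cost dynamic program for a toy instance; the coordinates, demand distribution, preference probabilities, and the substitution and shortage penalties are all invented assumptions.

```python
from functools import lru_cache
import math

# --- Hypothetical problem data (all names and values are illustrative assumptions) ---
positions = [(0, 0), (2, 1), (4, 0), (5, 3), (3, 5)]   # index 0 = depot, 1..N = customers
N = len(positions) - 1
CAP1, CAP2 = 3, 3                       # full load of product 1 / product 2
pref1 = [None, 0.7, 0.4, 0.6, 0.5]      # P(customer i prefers product 1)
demand_pmf = {1: 0.5, 2: 0.3, 3: 0.2}   # same demand distribution for every customer
SUBSTITUTION_COST = 2.0                 # per unit delivered as the non-preferred product
SHORTAGE_COST = 10.0                    # per unit of unmet demand (simplifying assumption)

def dist(a, b):
    (xa, ya), (xb, yb) = positions[a], positions[b]
    return math.hypot(xa - xb, ya - yb)

def serve(stock_pref, stock_other, demand):
    """Serve one customer; returns (penalty, units_from_preferred, units_from_other)."""
    used_pref = min(stock_pref, demand)
    used_other = min(stock_other, demand - used_pref)
    short = demand - used_pref - used_other
    return SUBSTITUTION_COST * used_other + SHORTAGE_COST * short, used_pref, used_other

@lru_cache(maxsize=None)
def V(i, q1, q2):
    """Minimum expected cost from customer i (vehicle already there) with stock (q1, q2)."""
    total = 0.0
    for prefers1, p_pref in ((True, pref1[i]), (False, 1 - pref1[i])):
        for d, p_d in demand_pmf.items():
            if prefers1:
                pen, u_pref, u_oth = serve(q1, q2, d)
                r1, r2 = q1 - u_pref, q2 - u_oth
            else:
                pen, u_pref, u_oth = serve(q2, q1, d)
                r1, r2 = q1 - u_oth, q2 - u_pref
            if i == N:                      # last customer: return to the depot
                future = dist(i, 0)
            else:                           # go direct, or restock at the depot first
                direct = dist(i, i + 1) + V(i + 1, r1, r2)
                restock = dist(i, 0) + dist(0, i + 1) + V(i + 1, CAP1, CAP2)
                future = min(direct, restock)
            total += p_pref * p_d * (pen + future)
    return total

# Expected optimal cost of the whole route, starting fully loaded at the depot
print(round(dist(0, 1) + V(1, CAP1, CAP2), 3))
```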

Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 270
3917 The Non-Motor Symptoms of Filipino Patients with Parkinson’s Disease

Authors: Cherrie Mae S. Sia, Noel J. Belonguel, Jarungchai Anton S. Vatanagul

Abstract:

Background: Parkinson's disease (PD) is a chronic, progressive, neurodegenerative disorder known for its motor symptoms such as bradykinesia, resting tremor, muscle rigidity, and postural instability. Patients with PD also experience non-motor symptoms (NMS) such as depression, fatigue, and sleep disturbances that are most of the time unrecognized by clinicians. This may be due to the lack of spontaneous reports from the patients or partly because of the lack of systematic questioning from the healthcare professional. There are limited data with regard to these NMS, especially for Filipino patients with PD. Objectives: This study aims to determine the non-motor symptoms of Filipino patients with Parkinson's disease. Materials and Methods: This is a prospective cohort study involving thirty-four patients of Filipino descent diagnosed with PD in three out-patient clinics in Cebu City from April to September 2014. Each patient was interviewed using the Non-Motor Symptom Scale (NMSS). A Cebuano version of the NMSS was also provided for the non-English-speaking patients. Interview time was approximately ten to fifteen minutes for each respondent. Results: Of the thirty-four patients with Parkinson's disease, the majority were males (N=19), and the disease was more prevalent in patients with a mean age of 62 (SD ± 9) years. Hypertension (59%) and diabetes mellitus (29%) were the common co-morbidities in the study population. All patients presented more than one NMS, with insomnia (41.2%), poor memory (23.5%) and depression (14.7%) being the first non-motor symptoms to occur. Symptoms involving mood/cognition (mean = 2.21) and attention/memory (mean = 2.05) were noted to be the most frequent and of moderate severity. Based on the NMSS, the symptoms noted to be mild and to occur often were those involving the mood/cognition (score = 3.84), attention/memory (score = 3.50), and sleep/fatigue (score = 3.00) domains. Levodopa-Carbidopa, Ropinirole, and Pramipexole were the most frequently used medications in the study population. Conclusion: Non-motor symptoms (NMS) are common in patients with Parkinson's disease (PD). They appear at the time of diagnosis of PD or even before the motor symptoms manifest. The earliest non-motor symptoms to occur are insomnia, poor memory, and depression. Those pertaining to mood/cognition and attention/memory are the most frequent NMS, and they are of moderate severity. Identifying these NMS through a questionnaire-guided interview such as the Non-Motor Symptom Scale (NMSS), before they become more severe and affect the patient's quality of life, is a must for every clinician caring for a PD patient. Early treatment and control of these NMS can then be given, hence improving the patient's outcome and prognosis.

Keywords: non motor symptoms, Parkinson's Disease, insomnia, depression

Procedia PDF Downloads 450
3916 Internet of Things Edge Device Power Modelling and Optimization Simulator

Authors: Cian O'Shea, Ross O'Halloran, Peter Haigh

Abstract:

Wireless Sensor Networks (WSN) are made up of Internet of Things (IoT) edge devices. They are becoming widely adopted in many industries, including health care, building energy management, and condition monitoring. As the scale of WSN deployments increases, the cost and complexity of battery replacement and disposal become more significant and in time may become a barrier to adoption. Harvesting ambient energy provides a pathway to reducing dependence on batteries and in the future may lead to autonomously powered sensors. This work describes a simulation tool that enables the user to predict the battery life of a wireless sensor that utilizes energy harvesting to supplement the battery power. To create this simulator, all aspects of a typical WSN edge device were modelled, including sensors, transceiver, and microcontroller, as well as the energy source components (batteries, solar cells, thermoelectric generators (TEGs), supercapacitors and DC/DC converters). The tool allows the user to plug and play different pre-characterized devices as well as add user-defined devices. The goal of this simulation tool is to predict the lifetime of a device and the scope for extending it using ambient energy sources.
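A minimal sketch of the kind of energy-budget calculation such a simulator performs, assuming invented values for battery capacity, duty cycle, current draws, and average solar harvest; it is not the tool described here, only an illustration of how harvesting offsets consumption in a battery-life estimate.

```python
# Duty-cycled sensor node drawing on a battery that is partially replenished by a
# small solar harvester; the loop estimates how many days the battery lasts.
BATTERY_MAH = 2400.0          # battery capacity (mAh)
V_NOM = 3.0                   # nominal supply voltage (V)
CAPACITY_J = BATTERY_MAH / 1000.0 * V_NOM * 3600.0   # capacity in joules

SLEEP_CURRENT_A = 5e-6        # sleep-mode current draw
ACTIVE_CURRENT_A = 20e-3      # sense + transmit current draw
ACTIVE_S_PER_HOUR = 30.0      # seconds awake per hour (duty cycle)
HARVEST_W_DAYLIGHT = 1e-4     # average solar harvest power during daylight (W)
DAYLIGHT_HOURS = 8

def simulate_days(battery_j):
    day = 0
    while battery_j > 0 and day < 10 * 365:           # cap the simulation at 10 years
        for hour in range(24):
            active_j = ACTIVE_CURRENT_A * V_NOM * ACTIVE_S_PER_HOUR
            sleep_j = SLEEP_CURRENT_A * V_NOM * (3600.0 - ACTIVE_S_PER_HOUR)
            harvest_j = HARVEST_W_DAYLIGHT * 3600.0 if hour < DAYLIGHT_HOURS else 0.0
            battery_j -= active_j + sleep_j - harvest_j
            battery_j = min(battery_j, CAPACITY_J)    # harvester cannot overcharge
            if battery_j <= 0:
                break
        day += 1
    return day

print("estimated battery life:", simulate_days(CAPACITY_J), "days")
```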

Keywords: Wireless Sensor Network, IoT, edge device, simulation, solar cells, TEG, supercapacitor, energy harvesting

Procedia PDF Downloads 136
3915 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6

Authors: M. Moslehpour, S. Khorsandi

Abstract:

Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Beyond its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed for automatic protection of the auto-configuration process. It secures the neighbor discovery and address resolution process. To defend against threats to NDP's integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Besides the advantages of SEND, its disadvantages, such as the computational cost of the CGA algorithm and the sequential nature of the CGA generation algorithm, are considerable. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time in the self-computing and distributed-computing processes. We focus on the impact of malicious nodes on the CGA generation time in the network. According to the results, although malicious nodes participate in the generation process, the CGA generation time is less than when it is computed by a single node. With a Trust Management System, detecting and isolating malicious nodes is easier.
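For readers unfamiliar with why CGA generation is costly, the sketch below reproduces the brute-force core of the idea in simplified form (searching for a modifier whose hash has 16*Sec leading zero bits, loosely following the Hash2 step of RFC 3972); the key bytes are random stand-ins, the full CGA parameter encoding is omitted, and the (start, step) search pattern merely illustrates how the work could be split across nodes as proposed here.

```python
import hashlib
import os
from itertools import count

SEC = 1                          # security parameter (0..7); cost grows with 16*SEC bits
public_key = os.urandom(64)      # stand-in for an encoded public key

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def find_modifier(start: int = 0, step: int = 1) -> int:
    """Search modifiers start, start+step, ...; giving each node a disjoint
    (start, step) pair is one way to distribute the search."""
    for modifier in count(start, step):
        data = modifier.to_bytes(16, "big") + b"\x00" * 9 + public_key
        if leading_zero_bits(hashlib.sha1(data).digest()) >= 16 * SEC:
            return modifier

print("found modifier:", find_modifier())
```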

Keywords: NDP, IPsec, SEND, CGA, modifier, malicious node, self-computing, distributed-computing

Procedia PDF Downloads 280
3914 Applying of an Adaptive Neuro-Fuzzy Inference System (ANFIS) for Estimation of Flood Hydrographs

Authors: Amir Ahmad Dehghani, Morteza Nabizadeh

Abstract:

This paper presents the application of an Adaptive Neuro-Fuzzy Inference System (ANFIS) to flood hydrograph modeling of the Shahid Rajaee reservoir dam located in Iran. This was carried out using 11 flood hydrographs recorded at the Tajan river gauging station. From this dataset, 9 flood hydrographs were chosen to train the model and 2 flood hydrographs to test it. Different architectures of the neuro-fuzzy model, according to the membership function and learning algorithm, were designed and trained over different numbers of epochs. The results were evaluated in comparison with the observed hydrographs, and the best model structure was chosen according to the lowest RMSE in each run. To evaluate the efficiency of the neuro-fuzzy model, various statistical indices such as the Nash-Sutcliffe and flood peak discharge error criteria were calculated. In this simulation, the coordinates of a flood hydrograph, including peak discharge, were estimated using the discharge values that occurred in earlier time steps as input values to the neuro-fuzzy model. These results indicate the satisfactory efficiency of the neuro-fuzzy model for flood simulation. This performance of the model demonstrates the suitability of the implemented approach for flood management projects.
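A minimal sketch of the evaluation criteria named above, applied to a hypothetical observed versus simulated hydrograph; the discharge values are invented, and only the metric definitions (RMSE, Nash-Sutcliffe efficiency, flood peak discharge error) reflect the paper.

```python
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

# Hypothetical observed vs. model-simulated discharges (m^3/s) for one test hydrograph
q_obs = [12.0, 35.0, 80.0, 150.0, 210.0, 160.0, 95.0, 50.0, 25.0]
q_sim = [10.0, 40.0, 75.0, 158.0, 198.0, 170.0, 90.0, 48.0, 27.0]

peak_error = 100.0 * (max(q_sim) - max(q_obs)) / max(q_obs)   # flood peak discharge error (%)
print(f"RMSE = {rmse(q_obs, q_sim):.2f} m^3/s")
print(f"NSE  = {nash_sutcliffe(q_obs, q_sim):.3f}")
print(f"peak discharge error = {peak_error:.1f} %")
```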

Keywords: adaptive neuro-fuzzy inference system, flood hydrograph, hybrid learning algorithm, Shahid Rajaee reservoir dam

Procedia PDF Downloads 483
3913 Analysis of the Properties of Hydrophobised Heat-Insulating Mortar with Perlite

Authors: Danuta Barnat-Hunek

Abstract:

The studies are devoted to assessing the effectiveness of hydrophobic and air-entraining admixtures based on organosilicon compounds. Mortars with a lightweight aggregate (perlite) were the subject of the investigation. The following laboratory tests were performed: density, open porosity, total porosity, absorptivity, water vapour diffusion, compressive strength, flexural strength, frost resistance, sodium sulphate corrosion resistance, and the thermal conductivity coefficient. Two mortar mixtures were prepared: mortars without a hydrophobic admixture and mortars with a cementitious waterproofing material. Surface hydrophobisation was applied to the mortars without a hydrophobic admixture using a methyl silicone resin, a water-based emulsion of methyl silicone resin in potassium hydroxide, and an alkyl-alkoxy-silane in organic solvents. The results on the effectiveness of hydrophobisation are as follows. The highest absorption after 14 days of testing was shown by the mortar without an agent (57.5%), while the lowest was shown by the mortar with methyl silicone resin (52.7%); after 14 days in water, the surface treatment of the samples proved to be ineffective. The hydrophobised mortars are characterized by an insignificant mass change due to freezing and thawing: 1% for the methyl silicone resin versus 5% for the samples without hydrophobisation, so this agent efficiently protected the mortars against frost corrosion. The standard samples showed very good resistance to the pressure of sodium sulphate crystallization, whereas the organosilicon compounds had a negative influence on chemical resistance (weight loss of about 7%); the mass loss of the non-hydrophobic mortar was two times lower than that of the mortar with the hydrophobic admixture. Hydrophobic and aeration admixtures significantly affect the thermal conductivity, and the difference is mainly due to the difference in porosity of the compared materials. Hydrophobisation of the mortar mass slightly decreased the porosity of the mortar and thus increased its compressive strength by 20%. The admixture adversely affected the hydrophobic ability of the mortar and achieved the opposite effect: as a result of hydrophobising the mass, the mortar samples decreased in density and showed improved wettability. The poor protection of the mortar surface is probably due to the short saturation time of the samples during preparation. The mortars were characterized by high porosity (65%) and water absorption (57.5%), so extending the hydrophobisation time would be advisable to achieve better efficiency. The highest efficiency was obtained for the surface hydrophobised with the methyl silicone resin.

Keywords: hydrophobisation, mortars, salt crystallization, frost resistance

Procedia PDF Downloads 213
3912 Effect of Self-Lubricating Carbon Materials on the Tribological Performance of Ultra-High Molecular Weight Polyethylene

Authors: Nayeli Camacho, Fernanda Lara-Perez, Carolina Ortega-Portilla, Diego G. Espinosa-Arbelaez, Juan M. Alvarado-Orozco, Guillermo C. Mondragon-Rodriguez

Abstract:

Ultra-high molecular weight polyethylene (UHMWPE) has been the gold-standard material for total knee replacements for almost five decades. Wear damage to the UHMWPE articulating surface is inevitable due to the natural sliding and rolling movements of the knee. This generates a considerable amount of wear debris, which results in mechanical instability of the joint, reduces joint mobility, increases pain, triggers detrimental biological responses, and causes component loosening. The presence of wear particles has been closely related to adverse reactions in the tissue surrounding the knee joint, especially for particles in the range of 0.3 to 2 μm. Carbon-based materials possess excellent mechanical properties and have shown great promise in tribological applications. In this study, diamond-like carbon (DLC) coatings and carbon nanotubes (CNTs) were used to decrease the wear rate of UHMWPE. A titanium-doped DLC (Ti-DLC) coating was deposited by magnetron sputtering on stainless steel precision spheres, while CNTs were used as a second-phase reinforcement in UHMWPE at a concentration of 1.25 wt.%. A comparative tribological analysis of the wear of UHMWPE and UHMWPE-CNTs against a stainless steel counterpart with and without the Ti-DLC coating is presented. The wear testing was performed on a pin-on-disc tribometer under dry conditions, using a reciprocating movement with a load of 1 N at a frequency of 2 Hz for 100,000 and 200,000 cycles. The wear tracks were analyzed with high-resolution scanning electron microscopy to determine the wear modes and observe the size and shape of the wear debris. Furthermore, profilometry was used to measure the depth of the wear tracks and to map the wear of the articulating surface. The wear tracks at 100,000 and 200,000 cycles on all samples were relatively shallow and within the range of the average roughness. The Ti-DLC coating was observed to decrease both the mass loss of the UHMWPE and the depth of the wear track, and the combination of both carbon-based materials decreased the material loss compared to the stainless steel and UHMWPE system. Burnishing of the surface was the predominant wear mode in all systems, and it was more subtle in the systems with Ti-DLC coatings. Meanwhile, in the system composed of stainless steel and UHMWPE, the intrinsic surface roughness of the material was completely replaced by the wear tracks.
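
As a worked example of how a specific wear rate would be derived from such a reciprocating test, the sketch below applies k = volume loss / (load x sliding distance); only the load and cycle count come from the abstract, while the stroke length, mass loss, and density are illustrative assumptions.

```python
# Sketch of a specific wear-rate estimate for a reciprocating pin-on-disc test,
# k = volume loss / (load x sliding distance). The stroke length and mass-loss
# figures are illustrative assumptions; only the load and cycle count appear in the text.
LOAD_N        = 1.0        # normal load used in the test
CYCLES        = 200_000    # reciprocating cycles
STROKE_M      = 0.005      # assumed 5 mm stroke (not stated in the abstract)
MASS_LOSS_G   = 0.8e-3     # assumed mass loss of the UHMWPE pin
DENSITY_G_MM3 = 0.93e-3    # UHMWPE density, ~0.93 g/cm^3

sliding_distance_m = 2 * STROKE_M * CYCLES            # forward + return per cycle
volume_loss_mm3 = MASS_LOSS_G / DENSITY_G_MM3
k = volume_loss_mm3 / (LOAD_N * sliding_distance_m)   # mm^3 / (N*m)
print(f"Sliding distance  : {sliding_distance_m:.0f} m")
print(f"Specific wear rate: {k:.2e} mm^3/(N*m)")
```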

Keywords: CNT reinforcement, self-lubricating materials, Ti-DLC, UHMWPE tribological performance

Procedia PDF Downloads 114
3911 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes

Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi

Abstract:

Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the estimated effects of these variables on the disease's progression. The objective of the study is to determine how the biochemical and clinical variables in people with type 2 diabetes are interrelated and how they affect kidney disease progression, using advanced statistical methods. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate, which allowed a set of correlated biochemical variables to be reduced to a smaller number of uncorrelated components. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of biochemical and clinical variables on the ordinal response variable (progression of kidney function), taking the proportionality assumption into account for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort at a polyclinic hospital of the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with negative loadings of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (p < 0.001). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine have a significant effect on the progression of kidney disease.
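
A minimal sketch of this two-stage workflow (PCA on the correlated biochemical variables, then a cumulative-odds ordered logit on the staged outcome) is shown below; the dataframe, column names, and simulated values are placeholders, not the Messina cohort data.

```python
# Sketch of the pipeline described in the abstract: PCA on correlated biochemical
# variables, then a cumulative-odds (proportional odds) logit on an ordered stage.
# All data and column names below are simulated placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "hba1c":         rng.normal(7.5, 1.2, n),
    "glycemia":      rng.normal(140, 30, n),
    "creatinine":    rng.normal(1.1, 0.3, n),
    "total_chol":    rng.normal(190, 35, n),
    "ldl":           rng.normal(110, 30, n),
    "hdl":           rng.normal(50, 12, n),
    "triglycerides": rng.normal(150, 60, n),
    "age":           rng.integers(40, 85, n),
    "ckd_stage":     rng.integers(1, 5, n),   # ordered outcome, 1 = mild ... 4 = severe
})

biochem = ["hba1c", "glycemia", "creatinine", "total_chol", "ldl", "hdl", "triglycerides"]
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(df[biochem]))
df["pc1"], df["pc2"], df["pc3"] = scores[:, 0], scores[:, 1], scores[:, 2]

# Cumulative-odds ordered logit of kidney-disease stage on the components and age.
model = OrderedModel(df["ckd_stage"], df[["pc1", "pc2", "pc3", "age"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
print("Odds ratios:", np.exp(result.params[:4]).round(3))
```

Checking the proportional odds assumption (for example, by comparing stage-specific fits) is what motivates the partial models mentioned in the abstract.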

Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes

Procedia PDF Downloads 45
3910 Bystanders' Behavior during Emergencies

Authors: Alan (Avi) Kirschenbaum, Carmit Rapaport

Abstract:

The behavior of bystanders in emergencies and disasters has been examined for over 50 years. Such acts have been cited as contributing to saving lives by providing first-responder help until official emergency units can arrive. Several reasons have been suggested for this type of behavior, but most focus on individual psychological decision-making processes. Recent theoretical evidence suggests that external factors in such bystander decisions, mainly social context factors rooted in the disaster community, are also important. We aim to test these competing arguments. Specifically, we examine alternative explanatory perspectives by focusing on self-efficacy as a proxy for the accepted individual psychological account and contrasting it with potential bystander characteristics of the individual as well as factors embedded in the social context of the disaster community. To do so, we utilize a random sample of the population from a field study of an urban community in Israel that experienced five years of continuous terror attacks. The results strongly suggest that self-efficacy, together with external factors – preparedness, having skills for intervention during emergencies, and gender – best predicts potential helping behaviors. These results broaden our view of bystander behavior and open a window for enhancing this phenomenon as another element in disaster and crisis management.

Keywords: bystander behavior, disasters emergencies, psychological motivation to help, social context for helping

Procedia PDF Downloads 126
3909 Current Status of Ir-192 Brachytherapy in Bangladesh

Authors: M. Safiqul Islam, Md Arafat Hossain Sarkar

Abstract:

Brachytherapy is one of the most important cancer treatment modalities in the radiotherapy department. Brachytherapy has moved from Low Dose Rate (LDR) to High Dose Rate (HDR) afterloaders because of the radiation protection advantage. HDR brachytherapy is a highly versatile technique for enhancing cure and achieving palliation in many common cancers of developing countries. It is a type of internal radiation therapy that delivers radiation from implants placed close to, or inside, the tumor(s) in the body. This procedure is very effective at providing localized radiation to the tumor site while minimizing the patient's whole-body dose. Brachytherapy has proven to be a highly successful treatment for cancers of the prostate, cervix, endometrium, breast, skin, bronchus, esophagus, and head and neck, as well as soft tissue sarcomas and several other types of cancer. At present, our country has 10 new HDR remote afterloading brachytherapy units, of which 4 are already installed and running for patient treatment. The Ir-192 source is more convenient than Co-60, so expert personnel prefer Ir-192 sources for treating different kinds of cancer patients. Ir-192 is also more economical, more flexible, and more familiar in our country.

Keywords: Ir-192, brachytherapy, cancer treatment, prostate, cervix, endometrium, breast, skin, bronchus, esophagus, soft tissue sarcomas

Procedia PDF Downloads 436
3908 Determinants of Psychological Distress in Teenagers and Young Adults Affected by Cancer: A Systematic Review

Authors: Anna Bak-Klimek, Emily Spencer, Siew Lee, Karen Campbell, Wendy McInally

Abstract:

Background & Significance: Over half of Teenagers and Young Adults (TYAs) report experiencing psychological distress after a cancer diagnosis, and TYAs with cancer are at higher risk of developing distress compared to other age groups. Despite this, there are no age-appropriate interventions to help TYAs manage distress, and there is a lack of conceptual understanding of what causes distress in this population. This makes it difficult to design a targeted, developmentally appropriate intervention. This review aims to identify the key determinants of distress in TYAs affected by cancer and to propose an integrative model of cancer-related distress for TYAs. Method: A literature search was performed in the Cochrane Database of Systematic Reviews, MEDLINE, PsycINFO, CINAHL, EMBASE, and PsycArticles in May–June 2022. Quantitative literature was systematically reviewed on the relationship between psychological distress experienced by TYAs affected by cancer and a wide range of factors, i.e., individual (demographic, psychological, developmental, and clinical) factors and contextual (social/environmental) factors. Evidence was synthesized, and correlates were categorized using the Biopsychosocial Model. The full protocol is available from PROSPERO (CRD42022322069). Results: Thirty eligible quantitative studies met the criteria for the review: twenty-six were cross-sectional, three were longitudinal, and one was a case-control study. The evidence on the relationship between socio-demographic, illness-related, and treatment-related factors and psychological distress is inconsistent and unclear. There is, however, consistent evidence of a link between psychological factors and psychological distress. For instance, the use of cognitive and defence coping, negative meta-cognitive beliefs, less optimism, a lack of sense of meaning, and lower resilience levels were significantly associated with higher psychological distress. Furthermore, developmental factors such as poor self-image, identity issues, and perceived conflict were strongly associated with higher distress levels. Conclusions: The current review suggests that psychological and developmental factors such as ineffective coping strategies, poor self-image, and identity issues may play a key role in the development of psychological distress in TYAs affected by cancer. The review proposes a Positive Developmental Psychology Model of Distress for teenagers and young adults affected by cancer and highlights that implementing psychological interventions that foster optimism, improve resilience, and address self-image may reduce distress in TYAs with cancer.

Keywords: cancer, determinant, psychological distress, teenager and young adult, theoretical model

Procedia PDF Downloads 101