Search results for: error indicators
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3450

120 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
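
The abstract does not spell out the WIGR formula, so the following is only a minimal sketch of the underlying idea of an ordinal, error-severity-aware split criterion: class confusions are weighted by their ordinal distance, so a split separating distant grade bands scores higher than one confusing adjacent bands. All function names and the toy data are hypothetical.

```python
# Sketch of an ordinal-aware split criterion in the spirit of WIGR.
# Confusing classes i and j is penalized in proportion to |i - j|,
# so the criterion accounts for error severity between target classes.
import numpy as np

def weighted_impurity(y, n_classes):
    """Gini-style impurity where confusing classes i and j costs |i - j|."""
    p = np.bincount(y, minlength=n_classes) / max(len(y), 1)
    return sum(abs(i - j) * p[i] * p[j]
               for i in range(n_classes) for j in range(n_classes))

def ordinal_split_gain(x, y, threshold, n_classes):
    """Impurity reduction from splitting feature x at a threshold."""
    left, right = y[x <= threshold], y[x > threshold]
    n = len(y)
    parent = weighted_impurity(y, n_classes)
    child = (len(left) / n) * weighted_impurity(left, n_classes) \
          + (len(right) / n) * weighted_impurity(right, n_classes)
    return parent - child

# Toy data: exam grade bands 0 (fail) .. 3 (high) vs. a time-used feature.
x = np.array([0.2, 0.3, 0.5, 0.6, 0.8, 0.9])
y = np.array([0, 0, 1, 2, 3, 3])
print(ordinal_split_gain(x, y, threshold=0.55, n_classes=4))
```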

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 74
119 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test

Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi

Abstract:

Supported by neuroscientific findings, the so-called Hub-and-Spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined on the basis of the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose the one that best fits the target word above from the two lower options, based on the semantic relation defined. We measured 5 types of semantic knowledge representations: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics in the subsequent analysis. Using descriptive statistics, we could determine the frequency of correct and incorrect responses, and with this knowledge, we could later adjust and remove items of questionable reliability. Reliability was tested using Cronbach’s α, and it can be safely said that all the results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two groups (p > 0.05). Likewise, age had no effect on the results in a one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were observed with the nonparametric Spearman’s rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a monotonic relationship between the measured semantic functions. A significance level of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised therapeutic design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in the differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase.
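
As a rough illustration of the statistical pipeline described above (normality check, Cronbach's α, Mann-Whitney U, Spearman correlation), here is a minimal Python sketch using scipy. The data are simulated stand-ins, not the study's data, and all variable names are hypothetical.

```python
# Minimal sketch of the reported analysis steps on simulated item scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(480, 10))      # simulated 0/1 item scores
subtest_a = items[:, :5].sum(axis=1)
subtest_b = items[:, 5:].sum(axis=1)
gender = rng.integers(0, 2, size=480)

# Normality: a significant result (p < 0.05) motivates nonparametric tests.
print("Shapiro p =", stats.shapiro(subtest_a).pvalue)

# Cronbach's alpha for internal consistency (standard formula).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print("Cronbach's alpha =", round(alpha, 3))

# Mann-Whitney U for group differences; Spearman rho between subtests.
print(stats.mannwhitneyu(subtest_a[gender == 0], subtest_a[gender == 1]))
print(stats.spearmanr(subtest_a, subtest_b))
```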

Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge

Procedia PDF Downloads 74
118 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland-forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists into high-, medium-, low-differentiation and normal tissue, corresponding to tumor with clear, unclear, or no gland structure and non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas colon adenocarcinoma (TCGA-COAD) project was enrolled into this study. For the gland segmentation model, the area under the receiver operating characteristic curve (ROC AUC) reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC AUC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cutoff point of 51% was found by the “maxstat” method, which was almost the same as the WHO system’s cutoff point of 50%. Both the WHO system’s cutoff point and the optimized cutoff point performed impressively in Kaplan-Meier curves, and both log-rank test p-values were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, with an optimized cutoff point almost the same as that of the WHO tumor grading system. A tool for automatically calculating differentiation grade may show potential in the fields of therapy decision making and personalized treatment.
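
A hedged sketch of the survival evaluation step described above: normalized grades are dichotomized at the reported 51% cutoff and the two groups are compared with a log-rank test. It uses simulated data and the lifelines package; the authors' actual pipeline (including the maxstat cutoff search) is not reproduced.

```python
# Minimal sketch: split patients at a grade cutoff, compare survival.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
grade = rng.uniform(0, 1, 401)                    # normalized grade in [0, 1]
time = rng.exponential(60, 401) * (1.5 - grade)   # survival loosely tied to grade
event = rng.integers(0, 2, 401)                   # 1 = event observed

cutoff = 0.51                                     # optimized cutoff reported above
high, low = grade > cutoff, grade <= cutoff
res = logrank_test(time[high], time[low],
                   event_observed_A=event[high], event_observed_B=event[low])
print("log-rank p =", res.p_value)

kmf = KaplanMeierFitter()
kmf.fit(time[low], event[low], label="grade <= 0.51")
print(kmf.median_survival_time_)
```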

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 112
117 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environment modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and land surface emissivity (LSE) measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part, six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance have a relationship with water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for surface-leaving radiance (Tg) is acquired. Subsequently, a group of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least squares algorithm (denoted as DTC model 1). Although the retrieval error of WVC and the approximate relationships between WVC and atmospheric parameters induce some uncertainties, these do not significantly affect the determination of the three parameters td, ts and β (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation) in the DTC model. Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With the knowledge of td, ts and β, a new DTC model (denoted as DTC model 2) is fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six parameters of the DTC model is thereby generated, and subsequently the Tg at any given time are acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without additional assumptions, and that the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
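
Since the abstract names a two-part, six-parameter DTC model fitted with Levenberg-Marquardt but not its exact functional form, the sketch below uses a widely cited Göttsche-Olesen-style shape (cosine daytime rise, exponential nighttime decay) with scipy's LM solver on simulated 15-minute Tg values. Parameter names follow the abstract (td, ts, β); everything else is an assumption.

```python
# Fit a two-part DTC model to simulated 15-min surface-leaving brightness
# temperatures with a Levenberg-Marquardt least-squares solver.
import numpy as np
from scipy.optimize import least_squares

def dtc(t, T0, Ta, beta, td, ts, k):
    """Cosine until the attenuation start ts, exponential decay afterwards."""
    day = T0 + Ta * np.cos((np.pi / beta) * (t - td))
    Ts = T0 + Ta * np.cos((np.pi / beta) * (ts - td))   # temperature at t = ts
    night = T0 + (Ts - T0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

rng = np.random.default_rng(0)
t = np.arange(0.0, 24.0, 0.25)                          # UTC hours, 15-min steps
true_params = [290.0, 8.0, 12.0, 13.0, 17.0, 4.0]       # T0, Ta, beta, td, ts, k
Tg = dtc(t, *true_params) + rng.normal(0.0, 0.3, t.size)

fit = least_squares(lambda p: dtc(t, *p) - Tg,
                    x0=[285.0, 5.0, 10.0, 12.0, 16.0, 3.0], method="lm")
print(dict(zip(["T0", "Ta", "beta", "td", "ts", "k"], np.round(fit.x, 2))))
```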

Keywords: atmospheric correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 248
116 Supporting Students with Autism Spectrum Disorder: A Model of Partnership and Capacity Building in Hong Kong

Authors: Irene T. Ho

Abstract:

Students with Autism Spectrum Disorder (ASD) studying in mainstream schools often face difficulties adjusting to school life, and teachers often find it challenging to meet the needs of these students. The Hong Kong Jockey Club Autism Support Network (JC A-Connect) is an initiative launched in 2015 to enhance support for students with ASD as well as their families and schools. The School Support Programme of the Project aims at building the capacity of schools to provide quality education for these students. The present report provides a summary of the main features of the support model and the related evaluation results. The school support model was conceptualized in response to four observed needs: (1) inadequate teacher expertise in dealing with the related challenges, (2) the need to promote evidence-based practices in schools, (3) less than satisfactory home-school collaboration and whole-school participation, and (4) lack of concerted effort by the different parties involved in providing support to schools. The resulting model had partnership and capacity building as the two guiding tenets of the School Support Programme. There were two levels of partnership promoted in the project. At the programme support level, a platform that enables effective collaboration among major stakeholders was established, including the funding body that provides the necessary resources, the Education Bureau that helps to engage schools, university experts who provide professional leadership and research support, as well as non-governmental organization (NGO) professionals who provide services to the schools. At the programme implementation level, tripartite collaboration among teachers, parents and professionals was emphasized. This notion of partnership permeated efforts at capacity building targeting students with ASD, school personnel, parents and peers. From 2015 to 2018, school-based programmes were implemented in over 400 primary and secondary schools with the following features: (1) spiral Tier 2 (group) training for students with ASD to enhance their adaptive skills, led by professionals but with strong teacher involvement to promote transfer of knowledge and skills; (2) supplementary programmes for teachers, parents and peers to enhance their capability to support students with ASD; and (3) efforts at promoting continuation or transfer of learning, on the part of both students and teachers, to Tier 1 (classroom practice) and Tier 3 (individual training) contexts. Over 5,000 students participated in the Programme, representing about 50% of students diagnosed with ASD in mainstream public sector schools in Hong Kong. Results showed that the Programme was effective in helping students improve, to various extents, at three levels: achievement of specific training goals, improvement in adaptive skills in school, and change in ASD symptoms. The sense of competence of teachers and parents in dealing with ASD-related issues, measured by self-report rating scales, was also significantly enhanced. Moreover, effects on enhancing the school system's capacity to support students with ASD, assessed according to indicators of inclusive education, were observed. The process and results of this Programme illustrate how obstacles to inclusive education for students with ASD can be overcome by strengthening the necessary partnerships and building the required capabilities of all parties concerned.

Keywords: autism, school support, skills training, teacher development, three-tier model

Procedia PDF Downloads 76
115 Effect of Fertilization and Combined Inoculation with Azospirillum brasilense and Pseudomonas fluorescens on Rhizosphere Microbial Communities of Avena sativa (Oats) and Secale cereale (Rye) Grown as Cover Crops

Authors: Jhovana Silvia Escobar Ortega, Ines Eugenia Garcia De Salamone

Abstract:

Cover crops are an agri-technological alternative for improving soil properties. Cover crops such as oats and rye could be used to reduce erosion and favor system sustainability when they are grown in the same agricultural cycle as the soybean crop. The soybean crop is very profitable, but its low contribution of easily decomposable residues, due to their low C/N ratio, leaves the soil exposed to erosive action and raises the need to reduce its monoculture. Furthermore, inoculation with plant growth promoting rhizobacteria contributes to the establishment, development and production of several cereal crops. However, there is little information on its effects on forage crops, which are often used as cover crops to improve soil quality. In order to evaluate the effect of combined inoculation with Azospirillum brasilense and Pseudomonas fluorescens on rhizosphere microbial communities, field experiments were conducted in the west of Buenos Aires province, Argentina, with a split-split plot randomized complete block factorial design with three replicates. The factors were: type of cover crop, inoculation and fertilization. In the main plot, two levels of fertilization, 0 and 7 40-0-5 (NPKS), were established at sowing. Rye (Secale cereale cultivar Quehué) and oats (Avena sativa var. Aurora) were sown in the subplots. In the sub-subplots, two inoculation treatments were applied: without and with application of a combined inoculant containing A. brasilense and P. fluorescens. Because the growth of cover crops usually has to be stopped with the herbicide glyphosate, rhizosphere soil of the 0-20 and 20-40 cm layers was sampled at three sampling times: before glyphosate application (BG), a month after glyphosate application (AG) and at soybean harvest (SH). Community-level physiological profiles (CLPP) and the Shannon index of microbial diversity (H) were obtained by multivariate analysis of principal components. Also, the most probable number (MPN) of nitrifiers and cellulolytics was determined using selective liquid media for each functional group. The CLPP of rhizosphere microbial communities showed significant differences between sampling times. There was no interaction between sampling time and either type of cover crop or inoculation. Rhizosphere microbial communities of samples obtained BG had different CLPP with respect to the samples obtained at the sampling times AG and SH. Fertilizer and depth of sampling also caused changes in the CLPP. The H diversity index of the rhizosphere microbial communities of rye at the sampling time BG was higher than that associated with oats. The MPN of both microbial functional types was lower in the deeper layer, since these microorganisms are mostly aerobic. The MPN of nitrifiers decreased in the rhizosphere of both cover crops only AG. At the sampling time BG, the MPN of both microbial types was larger than that obtained for AG and SH. This may mean that the glyphosate application could cause fairly permanent changes in these microbial communities, which can be considered bio-indicators of soil quality. Inoculation and fertilizer inputs could be included to improve management of these cover crops because they can have a significant positive effect on the sustainability of the agro-ecosystem.

Keywords: community level of physiological profiles, microbial diversity, plant growth promoting rhizobacteria, rhizosphere microbial communities, soil quality, system sustainability

Procedia PDF Downloads 374
114 Strengthening Service Delivery to Improve Cervical Cancer Screening in Southwestern Nigeria: A Pilot Project

Authors: Afolabi K. Esther, Kuye Tolulope, Babafemi L. Olayemi, Omikunle Yemisi

Abstract:

Background: Cervical cancer is a potentially preventable disease of public health significance. All sexually active women are at risk of cervical cancer; however, uptake and coverage are low in low- and middle-resource countries. Hence, the programme explored the feasibility of demonstrating an innovative and low-cost systems approach to cervical cancer screening service delivery among reproductive-aged women in low-resource settings in Southwestern Nigeria. This was to promote the uptake and quality improvement of cervical cancer screening services. Methods: This study was an intervention project in three senatorial districts in Osun State that have primary, secondary and tertiary health facilities. The project was in three phases: pre-intervention, intervention, and post-intervention. The study utilised the existing infrastructure, facilities and staff in the project settings. The study population was nurse-midwives, community health workers and reproductive-aged women (30-49 years). The intervention phase entailed using innovative, culturally appropriate strategies to create awareness of cervical cancer and preventive health-seeking behaviour among women in the reproductive-aged group (30-49 years). Also, the service providers (community health workers, nurses, and midwives) were trained on screening methods and treatment of pre-cancerous lesions, and essential equipment and supplies for cervical cancer screening services were provided at health facilities. Besides, advocacy and engagement were undertaken with relevant stakeholders to integrate the cervical cancer screening services into related reproductive health services and to secure a greater allocation of resources. The expected results compared the pre- and post-intervention phases using baseline and process indicators, and assessed the effect of the intervention phase on screening coverage using a plausibility assessment design. The project lasted 12 months: visual inspection with acetic acid (VIA) screening for the women for six months, and follow-up over 6 months for women receiving treatment. Results: The pre-intervention phase assessed baseline service delivery statistics for the previous 12 months, drawn from retrospective data collected as part of the routine monitoring and reporting systems. The uptake of cervical cancer screening services was low, as the number of women screened in the previous 12 months was 156. The service personnel's competency level was fair (54%), and there was limited availability of essential equipment and supplies for cervical cancer screening services. In the post-intervention phase, uptake had increased: the number of women screened was 1,586 within six months in the study settings, roughly a ten-fold increase over the baseline assessment. Also, the post-intervention level of competency of service delivery personnel had increased to 86.3%, which indicates a quality improvement in cervical cancer screening service delivery. Conclusion: The findings from the study have shown an effective approach to strengthening and improving cervical cancer screening service delivery in Southwestern Nigeria. Hence, the intervention promoted a positive attitude and health-seeking behaviour among the target population, significantly influencing the uptake of cervical cancer screening services.

Keywords: cervical cancer, screening, Nigeria, health system strengthening

Procedia PDF Downloads 74
113 Gastro-Protective Actions of Melatonin and Murraya koenigii Leaf Extract Combination in Piroxicam Treated Male Wistar Rats

Authors: Syed Benazir Firdaus, Debosree Ghosh, Aindrila Chattyopadhyay, Kuladip Jana, Debasish Bandyopadhyay

Abstract:

The gastro-toxic effect of piroxicam, a classical non-steroidal anti-inflammatory drug (NSAID), has restricted its use in arthritis and similar diseases. The present study aims to find out whether a combination of melatonin and Murraya koenigii leaf extract can protect against piroxicam induced ulcerative damage in rats. For this study, rats were divided into four groups, namely a control group in which rats were orally administered distilled water, an only combination treated group, a piroxicam treated group and a combination pre-administered piroxicam treated group. Each group consisted of six animals. Melatonin at a dose of 20 mg/kg body weight and antioxidant-rich Murraya koenigii leaf extract at a dose of 50 mg/kg body weight were successively administered at a 30-minute interval, one hour before oral administration of piroxicam at a dose of 30 mg/kg body weight, to Wistar rats in the combination pre-administered piroxicam treated group. Rats in the only combination treated group were administered both drugs without piroxicam treatment, whereas the piroxicam treated group received only piroxicam at 30 mg/kg body weight without any pre-treatment with the combination. Macroscopic examination along with histo-pathological study of gastric tissue using haematoxylin-eosin staining and alcian blue staining showed protection of the gastric mucosa in the combination pre-administered piroxicam treated group. Determination of adherent mucus content biochemically, and of collagen content through ImageJ analysis of picro-sirius stained sections of rat gastric tissue, also revealed protective effects of the combination against piroxicam mediated toxicity. The gelatinolytic activity of piroxicam was significantly reduced by pre-administration of the drugs, as clearly shown by the gelatin zymography study of the rat gastric tissue. The mean ulcer index determined from macroscopic study of the rat stomach was reduced to a minimum (0 ± 0.00; mean ± standard error of mean, n = 6), indicating the absence of ulcer spots on pre-treatment of rats with the combination. The gastro-friendly prostaglandin PGE2, which otherwise gets depleted on piroxicam treatment, was also well protected when the combination was pre-administered prior to piroxicam treatment. The requirement of the individual drugs in low doses in this combinatorial therapeutic approach will possibly minimize the cost of therapy, and it will eliminate the possibility of pro-oxidant side effects from the use of high doses of antioxidants. The beneficial activity of this combination therapy in the rat model raises the possibility that similar protective actions might also be observed if it is adopted by patients consuming NSAIDs like piroxicam. However, the introduction of any such therapeutic approach is subject to future studies in humans.

Keywords: gastro-protective action, melatonin, Murraya koenigii leaf extract, piroxicam

Procedia PDF Downloads 283
112 An Exploration of the Emergency Staff’s Perceptions and Experiences of Teamwork and the Skills Required in the Emergency Department in Saudi Arabia

Authors: Sami Alanazi

Abstract:

Teamwork practices have been recognized as a significant strategy to improve patient safety, quality of care, and staff and patient satisfaction in healthcare settings, particularly within the emergency department (ED). EDs depend heavily on teams of interdisciplinary healthcare staff to carry out their operational goals and core business of providing care to the seriously ill and injured. The ED is also recognized as a high-risk area in relation to service demand and the potential for human error. Few studies have considered the perceptions and experiences of ED staff (physicians, nurses, allied health professionals, and administrative staff) about the practice of teamwork, especially in Saudi Arabia (SA), and no studies have been conducted to explore the practices of teamwork in its EDs. Aim: To explore the practices of teamwork from the perspectives and experiences of staff (physicians, nurses, allied health professionals, and administrative staff) when interacting with each other in the admission areas of the ED of a public hospital in the Northern Border region of SA. Method: A qualitative case study design was utilized, drawing on two methods for data collection. The first comprised semi-structured interviews (n=22) with physicians (6), nurses (10), allied health professionals (3), and administrative members (3) working in the ED of a hospital in the Northern Border region of SA. The second method was non-participant direct observation. All data were analyzed using thematic analysis. Findings: The main themes that emerged from the analysis were: the meaning of teamwork, reasons for teamwork, ED environmental factors, organizational factors, the value of communication, leadership, teamwork skills in the ED, team members' behaviors, multicultural teamwork, and patient and family behaviors. Discussion: Working in the ED environment played a major role in affecting work performance as well as team dynamics. Communication, time management, fast-paced performance, multitasking, motivation, leadership, and stress management were highlighted by the participants as fundamental skills that have a major impact on team members and patients in the ED. It was found that the behaviors of team members impacted team dynamics as well as ED health services; these included disputes among team members, conflict, cooperation, uncooperative members, neglect, and members' emotions. Besides that, the behaviors of patients and those accompanying them had a direct impact on the team and the quality of the services. In addition, cultural differences separated team members and created undesirable gaps, such as gender segregation, discrimination based on national origin, and similarities and differences in interests. Conclusion: Effective teamwork, in the context of the emergency department, was recognized as an essential element in achieving quality of care as well as improving staff satisfaction.

Keywords: teamwork, barrier, facilitator, emergency department

Procedia PDF Downloads 104
111 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population. Some 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the imaging technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded in MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within an ROI were normalized according to three normalization schemes: N1: default or original gray levels; N2: +/-3 sigma, i.e., dynamic intensity limited to µ +/- 3σ; and N3: intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI were computed for each normalization scheme from the well-known statistical methods implemented in the MaZda software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis. Therefore, the features were reduced to the 10 best and most effective per normalization scheme, based on the maximum Fisher coefficient and on the minimum probability of classification error combined with the average correlation coefficient (POE+ACC). We analyzed these features under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. Confusion matrix and receiver operating characteristic (ROC) curve analyses were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors, in terms of discrimination power and classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
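
As a rough, hypothetical analogue of the pipeline above (normalization, reduction to 10 features, linear projection, 1-NN classification), here is a scikit-learn sketch on synthetic data. It substitutes an ANOVA F-score for the Fisher/POE+ACC criteria and LDA for the reported discriminant analyses, so it is illustrative only, not the study's implementation.

```python
# Minimal sketch: scale texture features, keep the 10 most discriminative,
# project with LDA, classify with 1-NN, and score with cross-validation.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for 270 texture features per ROI, 140 ROIs (2 per patient).
X, y = make_classification(n_samples=140, n_features=270, n_informative=15,
                           random_state=0)

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=10),
                     LinearDiscriminantAnalysis(),
                     KNeighborsClassifier(n_neighbors=1))
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```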

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 258
110 Balloon Analogue Risk Task (BART) Performance Indicators Help Predict Outcomes of Matched Savings Program

Authors: Carlos M. Parra, Matthew Sutherland, Ranjita Poudel

Abstract:

Reduced mental-bandwidth related to low socioeconomic status (low-SES) might lead to impulsivity and risk-taking behavior, which poses as a major hurdle towards asset building (savings) behavior. Understanding the relationship between risk-related personality metrics as well as laboratory risk behavior and real-life savings behavior can help facilitate the development of effective asset building programs, which are vital for mitigating financial vulnerability and income inequality. As such, this study explored the relationship between personality metrics, laboratory behavior in a risky decision-making task and real-life asset building (savings) behaviors among individuals with low-SES from Miami, Florida (FL). Study participants (12 male, 15 female) included racially and ethnically diverse adults (mean age 41.22 ± 12.65 years), with incomplete higher education (18% had High School Diploma, 30% Associates, and 52% Some College), and low annual income (mean $13,872 ± $8020.43). Participants completed eight self-report surveys and played a widely used risky decision-making paradigm called the Balloon Analogue Risk Task (BART). Specifically, participants played three runs of BART (20 trials in each run; total 60 trials). In addition, asset building behavior data was collected for 24 participants who opened and used savings accounts and completed a 6-month savings program that involved monthly matches, and a final reward for completing the savings program without any interim withdrawals. Each participant’s total savings at the end of this program was the main asset building indicator considered. In addition, a new effective use of average pump bet (EUAPB) indicator was developed to characterize each participant’s ability to place winning bets. This indicator takes the ratio of each participant’s total BART earnings to average pump bet (APB) in all 60 trials. Our findings indicated that EUAPB explained more than a third of the variation in total savings among participants. Moreover, participants who managed to obtain BART earnings of at least 30 cents out of their APB, also tended to exhibit better asset building (savings) behavior. In particular, using this criterion to separate participants into high and low EUAPB groups, the nine participants with high EUAPB (mean BART earnings of 35.64 cents per APB) ended up with higher mean total savings ($255.11), while the 15 participants with low EUAPB (mean BART earnings of 22.50 cents per APB) obtained lower mean total savings ($40.01). All mean differences are statistically significant (2-tailed p  .0001) indicating that the relation between higher EUAPB and higher total savings is robust. Overall, these findings can help refine asset building interventions implemented by policy makers and practitioners interested in reducing financial vulnerability among low-SES population. Specifically, by helping identify individuals who are likely to readily take advantage of savings opportunities (such as matched savings programs) and avoiding the stipulation of unnecessary and expensive financial coaching programs to these individuals. This study was funded by J.P. Morgan Chase (JPMC) and carried out by scientists from Florida International University (FIU) in partnership with Catalyst Miami.
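
A minimal sketch of the EUAPB indicator as defined above (total BART earnings divided by the participant's average pump bet over the 60 trials), with the reported 30-cents-per-APB threshold splitting high and low groups. The trial data, payout scheme, and units are invented for illustration; only the ratio-and-threshold logic mirrors the text.

```python
# Compute the EUAPB indicator for one simulated participant.
import numpy as np

def euapb(earnings_per_trial, pumps_per_trial):
    """Effective use of average pump bet = total earnings / average pump bet."""
    apb = np.mean(pumps_per_trial)
    return np.sum(earnings_per_trial) / apb

rng = np.random.default_rng(7)
pumps = rng.integers(1, 20, size=60)                     # pumps per trial
earnings = np.where(rng.random(60) < 0.8, pumps * 5, 0)  # popped balloons earn 0

score = euapb(earnings, pumps)
group = "high" if score >= 30 else "low"
print(f"EUAPB = {score:.1f} cents per APB -> {group}-EUAPB group")
```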

Keywords: balloon analogue risk task (BART), matched savings programs, asset building capability, low-SES participants

Procedia PDF Downloads 120
109 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population’s fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is viewed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system’s current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
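
The following is a heavily simplified, single-cluster sketch of the corrector construction described above (center, Kaiser-rule eigenvalue selection, whitening, then a separating hyperplane around the error set). The full method builds one hyperplane per pairwise positively correlated cluster; here one hyperplane stands in for brevity, and all data and names are synthetic.

```python
# Single-hyperplane corrector sketch: center, whiten (Kaiser rule), flag errors.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, (500, 20))       # measurements from correct predictions
Y = rng.normal(2.5, 1.0, (40, 20))        # measurements from known errors

S = np.vstack([M, Y])
mean = S.mean(axis=0)
cov = np.cov(S - mean, rowvar=False)
vals, vecs = np.linalg.eigh(cov)

keep = vals > vals.mean()                 # Kaiser rule: keep above-average eigenvalues
W = vecs[:, keep] / np.sqrt(vals[keep])   # whitening projection

def whiten(x):
    return (x - mean) @ W

# Hyperplane normal along the error-cluster mean in whitened coordinates.
w = whiten(Y).mean(axis=0)
theta = 0.5 * w @ w                       # midpoint threshold along w

def flag_error(x):
    """Report True if x falls on the error side of the hyperplane."""
    return whiten(x) @ w > theta

print(flag_error(rng.normal(2.5, 1.0, 20)))   # error-like sample -> likely True
print(flag_error(rng.normal(0.0, 1.0, 20)))   # normal sample -> likely False
```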

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 77
108 A Clustering-Based Approach for Weblog Data Cleaning

Authors: Amine Ganibardi, Cherif Arab Ali

Abstract:

This paper addresses the data cleaning issue as a part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers within log files reflect usage activity, i.e., end-users’ clicks and underlying user-agents’ hits. As Web Usage Mining is interested in end-users’ behavior, user-agents’ hits are regarded as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: (i) a server records requests interlaced in sequential order regardless of their source or type; (ii) website resources may be set up as requestable interchangeably by end-users and user-agents. Current methods are content-centric, based on filtering heuristics of relevant/irrelevant items in terms of some cleaning attributes, i.e., the website’s resource filetype extensions, the website’s resources pointed to by hyperlinks/URIs, HTTP methods, user-agents, etc. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: (i) resources may be set up as clickable by end-users regardless of their type; (ii) a website’s resources may be indexed by frame names without filetype extensions; (iii) web contents are generated and cancelled differently from one end-user to another. In order to overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the level of the requested and referring resources attributes. It is insensitive to logging content and does not need extra-weblog data. The statistical property used captures the structure of the logging generated by webpage requests in terms of clicks and hits. Since a webpage consists of a single URI and several components, this structure results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. Thus, the clustering-based method is meant to identify two clusters by applying an appropriate distance to the frequency matrix at the requested and referring resource levels. As the clicks-to-hits ratio is one-to-many, the clicks cluster is the smaller one in number of requests. Hierarchical agglomerative clustering based on a pairwise distance (Gower) and average linkage was applied to four logfiles of dynamic/responsive websites whose click-to-hits ratios range from 1/2 to 1/15. The optimal clustering, selected on the basis of average linkage and maximum inter-cluster inertia, always results in two clusters. The evaluation of the smaller cluster, referred to as the clicks cluster, in terms of confusion matrix indicators results in a 97% true positive rate. The content-centric cleaning methods, i.e., conventional and advanced cleaning, resulted in a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web design without the need for any extra-weblog data. Such an improvement in cleaning quality is likely to refine dependent analyses.
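
A toy sketch of the clustering step described above: rows are resources with (requested, referring) frequencies, the distance is range-normalized Manhattan (the numeric special case of Gower), linkage is average, and the smaller of the two clusters is labeled as clicks. Real logfile parsing is omitted and the counts are invented.

```python
# Cluster a toy requested/referring frequency matrix into clicks vs. hits.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Rows = resources; columns = (times requested, times appearing as referrer).
# Rows 0-2 mimic page URIs (referrer of many component hits);
# rows 3-6 mimic embedded components (requested often, never a referrer).
freq = np.array([[3.0, 40.0], [2.0, 35.0], [4.0, 50.0],
                 [30.0, 1.0], [45.0, 0.0], [38.0, 2.0], [52.0, 1.0]])

# Gower distance for purely numeric features: mean range-normalized |diff|.
ranges = freq.max(axis=0) - freq.min(axis=0)
dist = pdist(freq / ranges, metric="cityblock") / freq.shape[1]

# Average-linkage HAC cut into two clusters; the smaller one is the clicks.
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
sizes = np.bincount(labels)[1:]
clicks = np.where(labels == np.argmin(sizes) + 1)[0]
print("rows labeled as end-user clicks:", clicks)
```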

Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data

Procedia PDF Downloads 154
107 Using Low-Calorie Gas to Generate Heat and Electricity

Authors: Andrey Marchenko, Oleg Linkov, Alexander Osetrov, Sergiy Kravchenko

Abstract:

Low-calorie gases include biogas, coal gas, coke oven gas, associated petroleum gas, sewage gases, etc. These gases are usually released into the atmosphere or burned in flares, causing substantial damage to the environment. However, with the right approach, low-calorie gas fuel can become a valuable source of energy. This determines the relevance of developing technologies for low-calorific gas utilization. As an example, this work considers one way of utilizing coal mine gas, because Ukraine ranks fourth in the world in terms of coal mine gas emission (4.7% of total global emissions, or 1.2 billion m³ per year). Experts estimate that coal mine gas is actively released in 70-80 percent of existing mines in Ukraine. The main component of coal mine gas is methane (25-60%). Methane has a 21 times greater impact on the greenhouse effect than carbon dioxide, so the disposal problem has become increasingly important in the context of the growing need to address problems of climate, ecology and environmental protection. These emissions thus cause negative effects of both a local and a global nature. The efforts of the United Nations and the World Bank led to the adoption of the 'Zero Routine Flaring by 2030' programme, dedicated to ending the burning of these gases in flares and to utilizing them to generate heat and electricity. This study proposes using coal gas as a fuel for gas engines to generate heat and electricity. Analysis of the physical-chemical properties of low-calorie gas fuels made it possible to choose a suitable engine and to estimate the influence of fuel composition on its technical and economic indicators. The most suitable engine for low-calorie gas is one with pre-combustion chamber jet ignition. Ukraine has accumulated extensive experience in the production and operation of 1100 kW gas engines of type GD100 (10GDN 207/2*254) fueled by natural gas. The use of pre-combustion chamber jet ignition and quality (mixture) control in GD100-type engines introduces the concept of burning lean fuel mixtures, which in turn decreases the concentration of harmful substances in the exhaust gases. The main problems of coal mine gas as a fuel for internal combustion engines (ICE) are its low calorific value, the presence of components that adversely affect combustion processes and the operation of the ICE, the instability of its composition, and weak ignition. In some cases, these problems can be solved by adapting the engine design to coal mine gas as fuel (changing the compression ratio, increasing the fuel injection quantity, changing the ignition timing, increasing spark plug energy, etc.). It is shown that the use of coal mine gas in engines with a prechamber did not lead to significant changes in the indicated parameters (ηi = 0.43-0.45). However, it significantly increases the volumetric fuel consumption, which requires an increased fuel injection quantity to ensure constant nominal engine power. Thus, the utilization of low-calorie gas fuels in stationary gas engines based on the GD100 type will significantly reduce emissions of harmful substances into the atmosphere while generating cheap electricity and heat.

Keywords: gas engine, low-calorie gas, methane, pre-combustion chamber, utilization

Procedia PDF Downloads 240
106 The Importance of Dialogue, Self-Respect, and Cultural Etiquette in Multicultural Society: An Islamic and Secular Perspective

Authors: Julia A. Ermakova

Abstract:

In today's multicultural societies, dialogue, self-respect, and cultural etiquette play a vital role in fostering mutual respect and understanding. Whether viewed from an Islamic or secular perspective, the importance of these values cannot be overstated. Firstly, dialogue is essential in multicultural societies as it allows individuals from different cultural backgrounds to exchange ideas, opinions, and experiences. To engage in dialogue, one must be open and willing to listen, understand, and respect the views of others. This requires a level of self-awareness, where individuals must know themselves and their interlocutors to create a productive and respectful conversation. Secondly, self-respect is crucial for individuals living in multicultural societies (McLarney). One must have adequately high self-esteem and self-confidence to interact with others positively. By valuing oneself, individuals can create healthy relationships and foster mutual respect, which is essential in diverse communities. Thirdly, cultural etiquette is a way of demonstrating the beauty of one's culture by exhibiting good temperament (Al-Ghazali). Adab, a concept that encompasses good manners, praiseworthy words and deeds, and the pursuit of what is considered good, is highly valued in Islamic teachings. By adhering to Adab, individuals can guard against making mistakes and demonstrate respect for others. Islamic teachings provide etiquette for every situation in life, making up the way of life for Muslims. In the Islamic view, an elegant Muslim woman has several essential qualities, including cultural speech and erudition, speaking style, awareness of how to greet, the ability to receive compliments, lack of desire to argue, polite behavior, avoiding personal insults, and having good intentions (Al-Ghazali). The Quran highlights the inclination of people towards arguing, bickering, and disputes (Qur'an, 4:114). Therefore, it is imperative to avoid useless arguments and disputes, for they are poison that poisons our lives. The Prophet Muhammad, peace and blessings be upon him, warned that the most hateful person to Allah is an irreconcilable disputant (Al-Ghazali). By refraining from such behavior, individuals can foster respect and understanding in multicultural societies. From a secular perspective, respecting the views of others is crucial to engage in productive dialogue. The rule of argument emphasizes the importance of showing respect for the other person's views, allowing for the possibility of error on one's part, and avoiding telling someone they are wrong (Atamali). By exhibiting polite behavior and having respect for everyone, individuals can create a welcoming environment and avoid conflict. In conclusion, the importance of dialogue, self-respect, and cultural etiquette in multicultural societies cannot be overstated. By engaging in dialogue, respecting oneself and others, and adhering to cultural etiquette, individuals can foster mutual respect and understanding in diverse communities. Whether viewed from an Islamic or secular perspective, these values are essential for creating harmonious societies.

Keywords: multiculturalism, self-respect, cultural etiquette, adab, ethics, secular perspective

Procedia PDF Downloads 62
105 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms

Authors: Hai L. Tran

Abstract:

When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the process of using numbers and narratives in the newsroom mostly operates through trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system processes logical evidence through effortful, analytical cognition, which is affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts impact audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects of anecdotal and/or statistical evidence and the conditions under which their use enhances or undermines information processing. In addition to its theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.

Keywords: data, narrative, number, anecdote, storytelling, news

Procedia PDF Downloads 57
104 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland

Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski

Abstract:

Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometer-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as this issue was never the major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area as representative of many lowland dam reservoirs and due to the availability of a large database of the ecological, hydrological and morphological parameters of the lake. Method: 3D 2-phase and 1-phase CFD models were analysed to determine hydrodynamics in the Sulejow Reservoir. Development of a 3D, 2-phase CFD model of flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. Compared with the 2-phase CFD model, the 1-phase CFD model excludes only the dynamics of waves from the simulations, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the 1-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of one-phase (water) flow only. As the wind affects the velocity of flow, to take the effect of the wind on hydrodynamics into account in the 1-phase CFD model, the plate must move with speed and direction equal to those of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the 1-phase CFD model, a 2D, 2-phase model was elaborated. Result: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented: (1) the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake; recirculating zones, up to half a kilometer in size, may increase water retention time in this region; (2) the simulations confirm the pronounced effect of the wind on the development of water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algal blooms. Conclusion: The resulting model is accurate, and the methodology developed in the frame of this work can be applied to all types of storage reservoir configurations, characteristics, and hydrodynamic conditions. Large recirculating zones, which increase water retention time and might affect the accumulation of nutrients, were detected in the lake. An accurate CFD model of hydrodynamics in a large water body could help in developing water quality forecasts, especially in terms of eutrophication, and in the water management of big water bodies.

Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics

Procedia PDF Downloads 381
103 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, with early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
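As a rough illustration of the modeling-plus-explanation pipeline described above, the sketch below trains a LightGBM regressor on early patent indicators and applies SHAP to quantify feature contributions. The file name and feature columns are hypothetical placeholders, not the study's actual variables.

```python
# Hypothetical sketch: LightGBM regression on early patent indicators,
# explained with SHAP. "ai_patents.csv" and the column names are illustrative
# placeholders, not the study's dataset.
import lightgbm as lgb
import pandas as pd
import shap
from sklearn.model_selection import train_test_split

df = pd.read_csv("ai_patents.csv")  # assumed table of early patent indicators
features = ["novelty", "num_owners", "num_backward_citations",
            "num_independent_claims", "num_applicants"]
X, y = df[features], df["forward_citations"]  # proxy for technical impact

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr)

# SHAP quantifies each feature's contribution to individual predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)  # global view of feature effects
```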

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 19
102 Medication Reconciliation upon Admission in the Cardiovascular Department of the Imam Reza (AS) Mashhad Hospital

Authors: Maryamsadat Habibi

Abstract:

Objective: Pharmaceutical errors are avoidable occurrences that can result in inappropriate pharmaceutical use, patient harm, treatment failure, increased hospital costs and length of stay, and other outcomes that affect both the individual receiving treatment and the healthcare provider. This study aimed to perform a reconciliation of medications in the cardiovascular ward of Imam Reza Hospital in Mashhad, Iran, and to evaluate the prevalence of medication discrepancies between the best medication list created for the patient by the pharmacist and the medication order of the treating physician. Materials & Methods: A cross-sectional study of 97 patients in the cardiovascular ward of the Imam Reza Hospital in Mashhad was conducted from June to September 2021. After giving their informed consent and being admitted to the ward, all patients with at least one underlying condition and at least two medications taken at home were included in the study. A medication reconciliation form was used to record patient demographics and medical histories during the first 24 hours of admission, and the information was compared with the doctors' orders. The physician then identified medication discrepancies between the two lists and double-checked them to separate the intentional from the unintentional discrepancies. Finally, using SPSS software version 22, the prevalence of medication discrepancies and the relationships between different types of discrepancies and various variables were determined. Results: The average age of the participants in this study was 57.69±15.84 years, with 57.7% men and 42.3% women. Among these patients, 95.9% encountered at least one medication discrepancy, and 58.9% experienced at least one unintentional drug cessation. Out of the 659 medications registered in the study, 399 cases (60.54%) involved discrepancies, of which 161 cases (40.35%) involved the intentional stopping of a medication, 123 cases (30.82%) involved the unintentional stopping of a medication, and 115 cases (28.82%) involved the continued use of a medication with an adjusted dose. Additionally, the categories of cardiovascular and gastrointestinal medications were found to have the highest medication discrepancies in the current study. Furthermore, there was no correlation between the frequency of medication discrepancies and age, ward, date of visit, or type and number of underlying diseases (P=0.13, P=0.61, P=0.72, P=0.82, and P=0.44, respectively). On the other hand, there was a statistically significant correlation between the prevalence of medication discrepancies and both the number of medications taken at home (P=0.037) and gender (P=0.029). The results of this study revealed that 96% of patients admitted to the cardiovascular unit at Imam Reza Hospital had at least one medication discrepancy, typically an intentional drug discontinuation. According to the study's findings, when the medication reconciliation method is used, there is great potential for identifying and correcting various medication discrepancies and for avoiding prescription errors among patients admitted to Imam Reza Hospital's cardiovascular ward. As a result, it is essential to carry out a precise assessment to achieve the best treatment outcomes and avoid unintended medication discontinuation, unwanted drug-related events, and drug interactions between the patient's home medications and those prescribed in the hospital.

Keywords: drug combination, drug side effects, drug incompatibility, cardiovascular department

Procedia PDF Downloads 55
101 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of changes, and mapping of irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This review surveys about 100 recent research studies to analyze varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral bands to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) used to monitor changes. The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the sources and/or magnitudes of error in employing different approaches within these three parts, as reported by recent studies. Additionally, the concluding overview decomposes the different approaches into optimized indices, sensor calibration methods, thresholding and prediction models prone to error, and improvements in classification accuracy for mapping changes.
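To make the index-based approach concrete, here is a minimal sketch of two classical indicators that recur in this literature: NDVI from spectral reflectance and the Crop Water Stress Index (CWSI) from canopy temperature. These are standard textbook forms, not the specific retrieval algorithms of any surveyed study, and the input values are illustrative.

```python
# Minimal sketch of two classical crop water stress indicators: NDVI from
# spectral reflectance and CWSI from canopy temperature. Standard textbook
# forms; the input values below are illustrative.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards against 0/0

def cwsi(t_canopy: float, t_wet: float, t_dry: float) -> float:
    """CWSI in [0, 1]: 0 = fully transpiring (no stress), 1 = maximal stress."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

print(ndvi(np.array([0.45]), np.array([0.08])))     # dense canopy -> ~0.70
print(cwsi(t_canopy=28.0, t_wet=24.0, t_dry=34.0))  # mildly stressed -> 0.4
```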

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 43
100 Challenges and Lessons of Mentoring Processes for Novice Principals: An Exploratory Case Study of Induction Programs in Chile

Authors: Carolina Cuéllar, Paz González

Abstract:

Research has shown that school leadership has a significant indirect effect on students' achievements. In Chile, evidence has also revealed that this impact is stronger in vulnerable schools. With the aim of strengthening school leadership, public policy has taken up the challenge of enhancing the capabilities of novice principals through the implementation of induction programs, which include a mentoring component, entrusting the task of delivering these programs to universities. The importance of using mentoring or coaching models in the preparation of novice school leaders has been emphasized in the international literature. Thus, it can be affirmed that building leadership capacity through partnership is crucial for facilitating the cognitive and affective support required in the initial phase of a principal's career, gaining role clarity and socialization in context, and stimulating reflective leadership practice, among other benefits. In Chile, mentoring is a recent phenomenon in the field of school leadership, and it is even newer in the preparation of new principals who work in public schools. This study, funded by the Chilean Ministry of Education, sought to explore the challenges and lessons arising from the design and implementation of the mentoring processes that are part of the induction programs, according to the perceptions of the different actors involved: ministerial agents, university coordinators, mentors and novice principals. The investigation used a qualitative design, based on a study of three cases (three induction programs). The sources of information were 46 semi-structured interviews, applied at two moments (at the beginning and end of mentoring). The content analysis technique was employed. Data focused on the uniqueness of each case and the commonalities within the cases. Five main challenges and lessons emerged in the design and implementation of mentoring within the induction programs for new principals from Chilean public schools. They comprise the need for (i) developing a shared conceptual framework on mentoring among the institutions and actors involved, which helps align expectations for the mentoring component within the induction programs and assists in establishing a theory of action for mentoring that is relevant to the public school context; (ii) recognizing, through actions and decisions at different levels, that the role of a mentor differs from the role of a principal, which challenges the idea that an effective principal will always be an effective mentor; (iii) improving mentors' selection and preparation processes through the definition of common guiding criteria to ensure that a mentor takes responsibility for developing the critical judgment of novice principals, which implies not limiting the mentor's actions to assisting in compliance with prescriptive practices and standards; (iv) generating common evaluative models with goals, instruments and indicators consistent with the characteristics of mentoring processes, which helps to assess expected results and impact; and (v) including the design of a mentoring structure as an outcome of the induction programs, which helps sustain mentoring within schools as a collective professional development practice. The results showcased interwoven elements that entail continuous negotiations at different levels. Taking action on them will contribute to policy efforts aimed at professionalizing the leadership role in public schools.

Keywords: induction programs, mentoring, novice principals, school leadership preparation

Procedia PDF Downloads 102
99 Application of NBR 14861: 2011 for the Design of Prestress Hollow Core Slabs Subjected to Shear

Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho

Abstract:

The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. To achieve this goal, shear tests were performed using hollow core slabs 26.5 cm thick, with and without a concrete cover of 5 cm, with no cores filled, with two cores filled, and with three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical shear resistance calculated in accordance with the codes in use in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that this provision is much more accurate for calculating the shear strength of hollow core slabs than the NBR 6118 code. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found that there is an error in the text of the standard: the suggested coefficient is actually double the required value. ABNT later issued an amendment of NBR 14861:2011 with the necessary corrections. During the tests for the present study, it was confirmed that the concrete filling the cores contributes to increasing the shear strength of hollow core slabs. However, for slabs 26.5 cm thick, the number of filled cores should be limited to a maximum of two, because most of the results for slabs with three filled cores were smaller. This confirmed the recommendation of NBR 14861:2011, which is consistent with standard practice. After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut-and-tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models, the authors were able to calculate the tensile stress acting on the concrete ties (ribs) and to scale the geometry of these ties. The conclusions of the research are as follows: the experimental results have shown that the failure mechanism of the hollow core slabs can be predicted using the strut-and-tie procedure within a good range of accuracy; the Brazilian standard needed correction to revise the duplicated factor σcp (in NBR 14861:2011); and the number of cores (holes) filled with concrete to increase the shear strength of the slab should be limited. It is also suggested to increase the number of test results for slabs 26.5 cm thick, and for a larger range of slab thicknesses, in order to obtain results of shear tests with cores concreted after the release of the prestressing force. Another set of shear tests must be performed on slabs with filled cores and a concrete cover reinforced with welded steel mesh, for comparison with the theoretical values calculated by the new revision of the standard NBR 14861:2011.
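For orientation, the sketch below shows the classical shear-tension capacity expression on which the EN 1168 and NBR 14861 checks are based, with the coefficient on the prestress term σcp exposed as a parameter; this is the factor the abstract reports as erroneously doubled in the original NBR 14861:2011 text. The general form of the formula is standard, but all numeric inputs here are illustrative example values, not design values.

```python
# Rough sketch of the shear-tension capacity expression underlying the EN 1168
# and NBR 14861 checks for hollow core slabs. `k_cp` exposes the coefficient
# applied to the prestress term sigma_cp, i.e., the factor reported above as
# erroneously doubled. All example values are illustrative, not design values.
import math

def shear_tension_capacity(I, S, b_w, f_ctd, sigma_cp, k_cp=1.0):
    """V_Rd,c = (I * b_w / S) * sqrt(f_ctd**2 + k_cp * sigma_cp * f_ctd)
    I [mm^4], S [mm^3], b_w [mm], f_ctd and sigma_cp [MPa]; result in N."""
    return (I * b_w / S) * math.sqrt(f_ctd**2 + k_cp * sigma_cp * f_ctd)

# Doubling the coefficient inflates the computed resistance (unsafe side):
v_correct = shear_tension_capacity(1.2e9, 6.5e6, 300, 1.6, 3.0, k_cp=1.0)
v_doubled = shear_tension_capacity(1.2e9, 6.5e6, 300, 1.6, 3.0, k_cp=2.0)
print(f"{v_correct/1e3:.0f} kN vs {v_doubled/1e3:.0f} kN")  # ~150 vs ~193
```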

Keywords: prestressed hollow core slabs, shear, strut-and-tie models

Procedia PDF Downloads 303
98 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty

Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)

Abstract:

In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has introduced new laws and regulations in the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as façades, windows, and doors involves both the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology utilizing the data from LCA and EPDs to rate circularity, with a value between 0 and 1, where higher values indicate higher circularity. Expanding on the MCI with additional indicators such as the Water Circularity Index (WCI), the Energy Circularity Index (ECI), the Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and calculating biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically based on product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills due to damage in the disassembly process. The low MCI can be combated by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field only on small scales, as project-based exercises that do not address the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
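As an illustration of how the MCI rates circularity, here is a simplified sketch following the publicly documented Ellen MacArthur Foundation methodology, under the simplifying assumption that the recycling processes themselves are lossless (their waste terms set to zero). It is a didactic reduction, not the full EMF calculation or the tool proposed in this paper.

```python
# Simplified sketch of the Ellen MacArthur Foundation's Material Circularity
# Indicator, assuming lossless recycling processes. Didactic reduction of the
# public EMF methodology, not the full model.
def mci(mass, frac_recycled_in, frac_recovered_out, utility=1.0):
    """MCI in [0, 1]; higher values indicate higher circularity.
    frac_recycled_in:   share of feedstock from recycled/reused sources
    frac_recovered_out: share of the product collected for recycling/reuse
    utility: X = (lifetime / industry avg) * (use intensity / industry avg)"""
    V = mass * (1.0 - frac_recycled_in)    # virgin feedstock
    W = mass * (1.0 - frac_recovered_out)  # unrecoverable waste
    lfi = (V + W) / (2.0 * mass)           # Linear Flow Index
    return max(0.0, 1.0 - lfi * (0.9 / utility))  # utility factor F(X) = 0.9/X

# A window with 30% recycled content and 20% end-of-life recovery scores low,
# consistent with the glass-driven example above:
print(round(mci(1.0, 0.30, 0.20), 2))  # -> 0.33
```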

Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator

Procedia PDF Downloads 49
97 Invasion of Scaevola sericea (Goodeniaceae) in Cuba: Invasive Dynamic and Density-Dependent Relationship with the Native Species Tournefortia gnaphalodes (Boraginaceae)

Authors: Jorge Ferro-Diaz, Lazaro Marquez-Llauger, Jose Alberto Camejo-Lamas, Lazaro Marquez-Govea

Abstract:

The invasion of Scaevola sericea Vahl (Goodeniaceae) in Cuba is a recent process; this exotic invasive species was first reported in the national territory in 2008. S. sericea is native to the coasts around the Indian Ocean and western Pacific and is common on sandy beaches; it has expanded rapidly around the planet through both natural and anthropic causes, mainly due to its use in hotel gardening. Cuba is highly vulnerable to colonization by such species, mainly due to tropical hurricanes, which have increased in recent decades; the invasion also affects native species such as Tournefortia gnaphalodes (L.) R. Br. (Boraginaceae), which shows invasive manifestations because of the unbalanced demographic processes of littoral vegetation, studied by the authors over the last 10 years. The fast development of Cuban tourism has encouraged the use of exotic species in gardening, which invade large sectors of sandy coasts. Taking into account the importance of assessing the dimensions of these impacts and adopting effective control measures, a monitoring program for the invasion of S. sericea in Cuba was undertaken. The program has been implemented since 2013, with the main objective of identifying invasive patterns and interactions with other native species of coastal vegetation. This experience also aimed to validate the design and propose a standardized monitoring protocol to be applied throughout the country. In Cuban territory, 12 sites were chosen, in which 24 permanent plots of 100 m2 were established; measurements were taken twice a year for variables such as abundance, plant height, soil cover, flora and companion vegetation, density, and frequency; other physical variables of the beaches were also measured. Similarly, the same variables were measured for associated individuals of T. gnaphalodes. The results of these first four years allowed us to document patterns of S. sericea invasion, highlighting the use of adventitious roots to enhance its colonization, and to characterize demographic indicators, effects on the ecosystem, and interactions with native plants. A density-dependent relationship with T. gnaphalodes was documented, revealing a controlling effect on S. sericea; a manipulation experiment was therefore applied to evaluate possible management actions to be incorporated into the plans of the protected areas involved. From these results, it was concluded, for the evaluated sites, that the invasion dynamics of S. sericea are ruled by coastal dynamics, being more intense on beaches where the native vegetation is degraded and more controlled on beaches with better-preserved vegetation. It was found that once S. sericea is established, the mechanism that most reinforces its invasion is the use of adventitious roots, used to expand the patches and colonize beach sectors. It was also found that as the density of T. gnaphalodes increases, it halts the expansion of S. sericea and reduces its possibilities of colonization, behaving as a natural controller of this biological invasion. The results include a proposal for a new Monitoring Protocol for Scaevola sericea in Cuba, with the possibility of extending its implementation to other countries in the region.

Keywords: biological invasion, exotic invasive species, plant interactions, Scaevola sericea

Procedia PDF Downloads 195
96 Influence of Temperature and Immersion on the Behavior of a Polymer Composite

Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli

Abstract:

This study presents experimental and theoretical work conducted on a PolyPhenylene Sulfide reinforced with 40 wt% short glass fibers (PPS GF40) and on its matrix. Thermoplastics are widely used in the automotive industry to reduce the weight of automotive parts. The replacement of metallic parts by thermoplastics now extends to under-the-hood parts, near the engine. In this area, the parts are subjected to high temperatures and are immersed in cooling liquid. This liquid is composed of water and glycol and can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid aging effects, in order to develop a reliable design of parts. An experimental campaign in the tensile mode was carried out at different temperatures and for various glycol proportions in the cooling liquid, for monotonic and cyclic loadings, on both neat and reinforced PPS. The results of these tests highlighted some of the main physical phenomena occurring during these loadings under severe hydrothermal conditions. Indeed, the tests showed that temperature and cooling liquid aging can affect the mechanical behavior of the material in several ways. The more water the cooling liquid contains, the more the mechanical behavior is affected. It was observed that PPS shows a higher sensitivity to absorption than to the chemical aggressiveness of the cooling liquid, explaining this dominant sensitivity. Two kinds of behavior were noted: an elasto-plastic type below the glass transition temperature and a visco-pseudo-plastic one above it. It was also shown that viscosity is the leading phenomenon above the glass transition temperature for PPS and can also be important below this temperature, mostly under cyclic conditions and when the stress rate is low. Finally, it was observed that loading this composite at high temperatures diminishes the benefit conferred by the fibers. A new phenomenological model was then built to take these experimental observations into account. This new model allows the prediction of the evolution of mechanical properties as a function of the loading environment, with a reduced number of parameters compared to earlier studies. It was also shown that the presented approach enables the description and prediction of the mechanical response with very good accuracy (2% average error at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was established for PPS, allowing aging effects to be considered within the proposed model. Finally, a limit on the achievable accuracy was determined for all models using this data set by applying an artificial-intelligence-based model, allowing a comparison between AI-based and phenomenology-based models.

Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical

Procedia PDF Downloads 92
95 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are not strong enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
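To give a feel for the t-SNE feature-extraction step described above, here is a minimal, self-contained sketch that embeds synthetic received-signal-strength fingerprints into a low-dimensional space; the array shapes and signal values are assumptions for illustration, not the paper's dataset.

```python
# Illustrative sketch of the t-SNE feature-extraction step on hybrid WLAN/LTE
# received-signal-strength (RSS) fingerprints. Shapes and dBm values are
# assumptions for demonstration only.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(seed=0)
# 500 reference points x 40 access-point/cell readings, noisy RSS in dBm
fingerprints = rng.normal(loc=-70.0, scale=8.0, size=(500, 40))

# Embed the high-dimensional fingerprints into a compact space that preserves
# local neighborhood structure, suppressing noise before radio-map construction
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(fingerprints)
print(embedding.shape)  # (500, 2)
```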

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 9
94 Bio-Hub Ecosystems: Expansion of Traditional Life Cycle Analysis Metrics to Include Zero-Waste Circularity Measures

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging bio-economy, a new set of metrics and a new measurement system are needed to better quantify the environmental, social and economic impacts of circular zero-waste design. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. The lack of an economically viable business model for bioenergy facilities has resulted in a continuing stock of idled and decommissioned plants, in particular the forestry-based plants, which have been an invaluable outlet for woody biomass surplus, forest health improvement, timber production enhancement, and especially the reduction of wildfire risk. This study looked at repurposing existing biomass-energy plants into circular zero-waste Bio-Hub Ecosystems. The Bio-Hub model first targets a 'whole-tree' approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. It not only proposes models for the integration of forestry, aquaculture, and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but also allows for the early measurement of the circularity and impact of resource use, and for investment risk mitigation, for these systems. Typically, life cycle analyses measure the environmental impacts of different industrial production stages and are not integrated with indicators of material use circularity. This concept paper proposes the further development of a new set of metrics that would illustrate not only the typical life cycle analysis (LCA), which shows the reduction in greenhouse gas (GHG) emissions, but also zero-waste circularity measures: the mass balance of the full value chain of the raw material and its energy content/caloric value. These new measures quantify key impacts in making hyper-efficient use of natural resources and eliminating waste to landfills. The project utilized traditional LCA using the GREET model, in which the standalone biomass energy plant case was contrasted with the integration of a jet-fuel biorefinery. The methodology was then expanded to include combinations of co-hosts that optimize the life cycle of woody biomass from tree to energy, CO₂, heat and wood ash, both for energy/caloric value and for a mass balance that includes the reuse of waste streams which are typically landfilled. The major findings of the formal LCA study resulted in the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. If proven as a model, the expedited roll-out of these innovative scenarios can set a new standard for circular zero-waste projects that advance the critical transition from the current 'take-make-dispose' paradigm inherent in the energy, forestry and food industries to a more sustainable bio-economy paradigm where waste streams become valuable inputs, supporting local and rural communities in simple, sustainable ways.

Keywords: bio-economy, biomass energy, financing, metrics

Procedia PDF Downloads 135
93 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule appointments and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. In addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance, with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). This research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights were obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
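As a rough illustration of the prediction setup, the sketch below trains a gradient boosting regressor on appointment features and scores it with MAPE, the metric reported above. The file name and column names are illustrative placeholders, not the clinics' actual EMR fields.

```python
# Minimal sketch of the prediction setup: a gradient boosting regressor on
# appointment features, scored with mean absolute percentage error (MAPE).
# "appointments.csv" and the column names are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("appointments.csv")  # assumed de-identified EMR extract
X = pd.get_dummies(df[["patient_type", "doctor_experience", "zip_code",
                       "appointment_day", "specialty"]])
y = df["consultation_minutes"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(f"MAPE: {mean_absolute_percentage_error(y_te, model.predict(X_te)):.2%}")
```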

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 90
92 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling

Authors: Alastair Hales, Xi Jiang

Abstract:

Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as the pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low-powered actuator. This drives the blade tip's oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low-powered hot spot cooling in an electronic device, but it is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies of multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be enhanced, relative to that generated by a single blade, by up to 17.7% at P = 8A. Flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch yields 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to those generated by a single blade. This optimal pitch, equivalent to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, the counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area within a channel flow application. Quantitative values are found to deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.
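As background for the blade dynamics just described, the sketch below estimates the first natural frequency of a uniform cantilever blade from Euler-Bernoulli beam theory, the frequency at which a piezoelectric fan is driven. The formula is standard, but the blade dimensions and material properties are illustrative assumptions, not those of the modeled fan.

```python
# Background sketch: first natural frequency of a uniform cantilever blade from
# Euler-Bernoulli beam theory. The formula is standard; the blade dimensions
# and material properties below are illustrative assumptions.
import math

def first_natural_frequency(E, I, rho, A, L):
    """f1 = (lambda1**2 / (2*pi)) * sqrt(E*I / (rho*A*L**4)), lambda1 = 1.8751.
    E [Pa], I [m^4], rho [kg/m^3], A [m^2], L [m]; returns Hz."""
    return (1.8751**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

# Example: 60 mm x 12 mm x 0.2 mm polymer blade (E = 4 GPa, rho = 1400 kg/m^3)
b, h, L = 12e-3, 0.2e-3, 60e-3
f1 = first_natural_frequency(E=4e9, I=b * h**3 / 12, rho=1400.0, A=b * h, L=L)
print(f"{f1:.1f} Hz")  # ~15 Hz for this illustrative blade
```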

Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics

Procedia PDF Downloads 195
91 Upflow Anaerobic Sludge Blanket Reactor Followed by Dissolved Air Flotation Treating Municipal Sewage

Authors: Priscila Ribeiro dos Santos, Luiz Antonio Daniel

Abstract:

Inadequate access to clean water and sanitation has become one of the most widespread problems affecting people throughout the developing world, leading to an unceasing need for low-cost and sustainable wastewater treatment systems. UASB technology has been widely employed as a suitable and economical option for the treatment of sewage in developing countries, as it involves low initial investment, low energy requirements, low operation and maintenance costs, high loading capacity, short hydraulic retention times, long solids retention times and low sludge production. Dissolved air flotation, in turn, is a good option for the post-treatment of anaerobic effluents, being capable of producing high-quality effluents in terms of total suspended solids, chemical oxygen demand, phosphorus, and even pathogens. This work presents the evaluation and monitoring, over a period of 6 months, of one compact full-scale system with this configuration, UASB reactors followed by dissolved air flotation units (DAF), operating in Brazil. The system proved successful, a finding of particular relevance since dissolved air flotation treating UASB reactor effluents is not widely covered in the literature. The study covered the removal and behavior of several variables, such as turbidity, total suspended solids (TSS), chemical oxygen demand (COD), Escherichia coli, total coliforms and Clostridium perfringens. The physicochemical variables were analyzed according to the protocols established by the Standard Methods for the Examination of Water and Wastewater. For microbiological variables, such as Escherichia coli and total coliforms, the "pour plate" technique was used, with Chromocult Coliform Agar (Merck Cat. No. 1.10426) serving as the culture medium, while Clostridium perfringens was analyzed through the membrane filtration technique, with m-CP Agar (Oxoid Ltd, England) serving as the culture medium. Approximately 74% of total COD was removed in the UASB reactor, and the complementary removal achieved during the flotation process resulted in 88% COD removal from the raw sewage; thus, the initial COD concentration of 729 mg.L-1 decreased to 87 mg.L-1. In terms of particulate COD, the overall removal efficiency for the whole system was about 94%, decreasing from 375 mg.L-1 in raw sewage to 29 mg.L-1 in the final effluent. The UASB reactor removed on average 77% of the TSS from raw sewage. The dissolved air flotation process, however, did not work as expected, removing only 30% of the TSS from the anaerobic effluent. The final effluent presented an average TSS concentration of 38 mg.L-1. Turbidity was significantly reduced, with an overall removal efficiency of 80% and a final turbidity of 28 NTU. The treated effluent still presented a high concentration of fecal pollution indicators (E. coli, total coliforms, and Clostridium perfringens), showing that the system did not perform well in removing pathogens. Clostridium perfringens was the organism most effectively removed by the treatment system. The results can be considered satisfactory for the physicochemical variables, taking into account the simplicity of the system; nevertheless, post-treatment is necessary to improve the microbiological quality of the final effluent.
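As a quick check on how the staged efficiencies reported above combine, the short sketch below recovers the DAF stage's COD removal on the residual load from the raw and final concentrations; it simply restates the abstract's figures, with intermediate values rounded.

```python
# Worked check of the staged COD removal figures reported above: overall
# efficiency compounds as 1 - (1 - e_uasb) * (1 - e_daf).
cod_raw, cod_final = 729.0, 87.0   # mg/L, raw sewage and final effluent
e_uasb = 0.74                      # COD removal reported for the UASB stage

cod_after_uasb = cod_raw * (1 - e_uasb)   # ~190 mg/L leaving the UASB reactor
e_daf = 1 - cod_final / cod_after_uasb    # DAF removal on the residual COD
e_overall = 1 - (1 - e_uasb) * (1 - e_daf)

print(f"DAF stage: {e_daf:.0%}, overall: {e_overall:.0%}")  # ~54% and ~88%
```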

Keywords: dissolved air flotation, municipal sewage, UASB reactor, treatment

Procedia PDF Downloads 305