Search results for: Thermal Performance
2215 Predictive Modeling of Bridge Conditions Using Random Forest
Authors: Miral Selim, May Haggag, Ibrahim Abotaleb
Abstract:
The aging of transportation infrastructure presents significant challenges, particularly concerning the monitoring and maintenance of bridges. This study investigates the application of Random Forest algorithms for predictive modeling of bridge conditions, utilizing data from the US National Bridge Inventory (NBI). The research is significant as it aims to improve bridge management through data-driven insights that can enhance maintenance strategies and contribute to overall safety. Random Forest is chosen for its robustness, ability to handle complex, non-linear relationships among variables, and its effectiveness in feature importance evaluation. The study begins with comprehensive data collection and cleaning, followed by the identification of key variables influencing bridge condition ratings, including age, construction materials, environmental factors, and maintenance history. Random Forest is utilized to examine the relationships between these variables and the predicted bridge conditions. The dataset is divided into training and testing subsets to evaluate the model's performance. The findings demonstrate that the Random Forest model effectively enhances the understanding of factors affecting bridge conditions. By identifying bridges at greater risk of deterioration, the model facilitates proactive maintenance strategies, which can help avoid costly repairs and minimize service disruptions. Additionally, this research underscores the value of data-driven decision-making, enabling better resource allocation to prioritize maintenance efforts where they are most necessary. In summary, this study highlights the efficiency and applicability of Random Forest in predictive modeling for bridge management. Ultimately, these findings pave the way for more resilient and proactive management of bridge systems, ensuring their longevity and reliability for future use.
Keywords: data analysis, random forest, predictive modeling, bridge management
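The workflow described (clean data, split into training and testing subsets, fit a Random Forest, read off feature importances) can be sketched as below with scikit-learn. The synthetic records and the condition rule stand in for NBI fields, which are not reproduced in the abstract; every variable name here is an illustrative assumption, not the study's actual feature set.

```python
# Sketch of the described workflow on synthetic stand-ins for NBI fields.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(0, 100, n)                 # bridge age (years)
material = rng.integers(1, 5, n)             # coded construction material
exposure = rng.uniform(0, 1, n)              # environmental severity index
since_maint = rng.uniform(0, 30, n)          # years since last maintenance
X = np.column_stack([age, material, exposure, since_maint])
# Synthetic rule: condition degrades with age and deferred maintenance
y = (age / 25 + since_maint / 10 + exposure > 5).astype(int)  # 1 = poor

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
print("feature importances:", dict(zip(
    ["age", "material", "exposure", "since_maint"],
    model.feature_importances_)))
```

On data generated by a rule like this, the importance ranking recovers the drivers (age, deferred maintenance) over the irrelevant material code, which is the kind of insight the abstract attributes to the method.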
Procedia PDF Downloads 242
2214 Design and Optimisation of 2-Oxoglutarate Dioxygenase Expression in Escherichia coli Strains for Production of Bioethylene from Crude Glycerol
Authors: Idan Chiyanzu, Maruping Mangena
Abstract:
Crude glycerol, a major by-product of the transesterification of triacylglycerides with alcohol to biodiesel, is known to have a broad range of applications. For example, its bioconversion can afford a wide range of chemicals including alcohols, organic acids, hydrogen, solvents and intermediate compounds. In bacteria, the 2-oxoglutarate dioxygenase (2-OGD) enzymes are widely found among the Pseudomonas syringae species and have been recognized as having an emerging importance in ethylene formation. However, the use of optimized enzyme function in recombinant systems for crude glycerol conversion to ethylene has still not been reported. The present study investigated the production of ethylene from crude glycerol using engineered E. coli MG1655 and JM109 strains. Ethylene production with an optimized expression system for 2-OGD in E. coli, using a codon-optimized construct of the ethylene-forming gene, was studied. The codon optimization resulted in a 20-fold increase in protein production and thus an enhanced production of ethylene gas. For reliable bioreactor performance, the effects of temperature, fermentation time, pH, substrate concentration, methanol concentration, potassium hydroxide concentration and media supplements on ethylene yield were investigated. The results demonstrate that the recombinant enzyme can be used in future studies to exploit the conversion of low-priced crude glycerol into higher-value products such as light olefins, and that tools including DNA recombineering, molecular biology and bioengineering techniques can be used to enable the production of ethylene directly from the fermentation of crude glycerol. It can be concluded that recombinant E. coli production systems represent a significantly more secure, renewable and environmentally safe alternative to the thermochemical approach to ethylene production.
Keywords: crude glycerol, bioethylene, recombinant E. coli, optimization
Procedia PDF Downloads 280
2213 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial uses. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner's consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to provide efficient access to this knowledge via concise and transparent summaries. However, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to perform remarkably well when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents that uses artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting and difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
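For contrast, the extraction-oriented baselines the abstract argues fall short on patent prose can be sketched in a few lines as a naive frequency-based sentence scorer (Luhn-style). This is only the baseline being contrasted, not the proposed deep-learning system, and the sample text and stopword list are invented for illustration.

```python
# Naive frequency-based extractive summarizer: score each sentence by the
# average corpus frequency of its content words, then keep the top ones.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "or", "is", "for", "said"}

def extractive_summary(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    def score(s):
        toks = [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOPWORDS]
        return sum(freq[w] for w in toks) / (len(toks) or 1)
    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:n_sentences])

sample = ("The invention relates to a fastener. The fastener comprises a head "
          "and a shank. In some embodiments the fastener head is coated.")
print(extractive_summary(sample))
```

On patent claims, this kind of scorer simply echoes the most repetitive sentence, which illustrates why the abstract argues for abstractive, content-oriented generation instead.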
Procedia PDF Downloads 123
2212 Effectiveness of Management Transfer Programs for Managing Irrigation Resources in Developing Countries: A Case Study of Farmer- and Agency-Managed Schemes from Nepal
Authors: Tirtha Raj Dhakal, Brian Davidson, Bob Farquharson
Abstract:
Irrigation management transfer has been taken as an important policy instrument for effective irrigation resource management in many developing countries. A change in the governance of irrigation schemes for their day-to-day operation and maintenance has also been central to recent Nepalese irrigation policies. However, both farmer- and agency-managed irrigation schemes in Nepal are performing well below expectations. This study tries to link the present concerns about the poor performance of both forms of schemes with the institutions for their operation and management. Two types of surveys, management and farm surveys, were conducted as a case study in the command areas of the Narayani Lift Irrigation Project (agency-managed) and the Khageri Irrigation System (farmer-managed) of Chitwan District. The farm survey from the head, middle and tail regions of both schemes revealed that unequal water distribution exists across these regions in both schemes, with a greater percentage of farmers experiencing this situation in the agency-managed scheme. In both schemes, the cost recovery rate was very low, even below five percent in the Lift System, indicating poor operation and maintenance of the schemes. Also, the institutions in practice in both schemes are unable to create any incentives for farmers' willingness to pay or for the economical use of water on the farm. Thus, the outcomes of the study show that management transfer programs alone may not achieve the goal of efficient irrigation resource management. This suggests that water professionals should rethink irrigation policies, refining the institutional framework irrespective of the governance of schemes, for improved cost recovery and better water distribution throughout the irrigation schemes.
Keywords: cost recovery, governance, institution, irrigation management transfer, willingness to pay
Procedia PDF Downloads 293
2211 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment process. The artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern, given the numerous parameters collected from the treatment process, whose number keeps increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, to classify the most related and effective input variables; even so, the selection of dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality, has often been overlooked. In the present study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city's wastewater treatment plant. Biochemical oxygen demand (BOD) was used as the target water-quality parameter. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. The optimal ANN structure of model B showed up to a 15% increase in Determination Coefficient (DC) compared with model A. Thus, this study highlights the advantage of the MI measure in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant
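The two input-selection routes compared in the abstract can be sketched numerically: PCA ranks directions by explained variance, while mutual information ranks each candidate input by its dependence on the BOD target. The sensor series below are synthetic stand-ins, not the Tabriz plant's records, and the histogram MI estimator is one simple choice among many.

```python
# PCA (variance-based) vs. mutual-information input ranking on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
flow = rng.normal(0, 1, n)                       # hypothetical influent flow
temp = rng.normal(0, 1, n)                       # hypothetical temperature
noise_var = rng.normal(0, 1, n)                  # irrelevant sensor
bod = 2.0 * flow + 0.5 * temp + rng.normal(0, 0.3, n)  # target (BOD-like)
X = np.column_stack([flow, temp, noise_var])

# PCA route: rank principal components by explained variance
cov = np.cov(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
print("explained variance ratio:", eigvals / eigvals.sum())

# MI route: rank each input by a histogram estimate of MI with the target
def mutual_info(x, y, bins=20):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

for name, col in zip(["flow", "temp", "noise"], X.T):
    print(f"MI({name}; BOD) = {mutual_info(col, bod):.3f}")
```

The key contrast: PCA looks only at input variance and would happily keep the high-variance noise channel, while MI scores it near zero, which is the behavior underlying the abstract's comparison.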
Procedia PDF Downloads 130
2210 Ground Short Circuit Contributions of a MV Distribution Line Equipped with PWMSC
Authors: Mohamed Zellagui, Heba Ahmed Hassan
Abstract:
This paper proposes a new approach for the calculation of short-circuit parameters in the presence of a Pulse Width Modulated based Series Compensator (PWMSC). PWMSC is a new Flexible Alternating Current Transmission System (FACTS) device that can modulate the impedance of a transmission line by varying the duty cycle (D) of a train of pulses with fixed frequency. This improves system performance, as it provides virtual compensation of the distribution line impedance by injecting a controllable apparent reactance in series with the line. This controllable reactance can operate in both capacitive and inductive modes, which makes PWMSC highly effective in controlling power flow and increasing system stability. The purpose of this work is to study the impact of fault resistance (RF), varied from 0 to 30 Ω, on fault current calculations for a ground fault at a fixed fault location. The case study is a medium voltage (MV) Algerian distribution line compensated by PWMSC in the 30 kV Algerian distribution power network. The analysis is based on the symmetrical components method, which involves the calculation of the symmetrical components of currents and voltages, without and with PWMSC, in both cases of maximum and minimum duty cycle value for the capacitive and inductive modes. The paper presents simulation results which are verified by the theoretical analysis.
Keywords: pulse width modulated series compensator (PWMSC), duty cycle, distribution line, short-circuit calculations, ground fault, symmetrical components method
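The core of the symmetrical components calculation for a single-line-to-ground fault is that the three sequence networks appear in series with three times the fault resistance, so the fault current falls as RF rises from 0 to 30 Ω. A minimal numerical sketch follows; the source voltage and sequence impedances are invented placeholders, not the Algerian line's parameters, and the PWMSC reactance injection is not modeled here.

```python
# Single-line-to-ground fault current by the symmetrical components method.
import cmath

# Assumed per-phase source voltage and sequence impedances for a 30 kV feeder
E = 30e3 / cmath.sqrt(3)          # phase-to-ground source voltage (V)
Z1 = complex(0.5, 2.0)            # positive-sequence impedance (ohm)
Z2 = complex(0.5, 2.0)            # negative-sequence impedance (ohm)
Z0 = complex(1.2, 5.0)            # zero-sequence impedance (ohm)

def slg_fault_current(rf):
    """Fault current: the three sequence networks in series with 3*Rf."""
    return 3 * E / (Z1 + Z2 + Z0 + 3 * rf)

# Fault resistance sweep over the range studied (0 to 30 ohm)
for rf in (0, 10, 20, 30):
    print(f"Rf = {rf:2d} ohm -> |If| = {abs(slg_fault_current(rf)):8.1f} A")
```

With these placeholder impedances the fault current drops by roughly an order of magnitude across the sweep, which is the qualitative effect of fault resistance the paper quantifies.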
Procedia PDF Downloads 502
2209 Transforming Healthcare Delivery: Technological Infrastructure for Decentralized Patient-Centric Ecosystems Through Comprehensive Digital Platform Analysis and Strategic Intervention
Authors: Munachiso A. Muoneke
Abstract:
The global healthcare system faces unprecedented challenges of fragmented information systems, inefficient data management, and limited service accessibility. With the rapid evolution of digital technologies, there is a dire need to develop integrated technological infrastructures that can reinvent healthcare delivery from its core. This research addresses the significant gap between existing healthcare technologies and the growing demand for more responsive, secure, and patient-centered medical services. The study uses a mixed-method research approach, combining quantitative system performance analysis and qualitative healthcare provider interviews to comprehensively evaluate the limitations of the current digital health infrastructure. Across multiple healthcare settings, technological barriers are mapped to help develop a robust framework for assessing and redesigning digital health platforms. The research reveals significant potential for technology to transform healthcare delivery: strategic technological interventions can reduce administrative inefficiencies by up to 40%, improving patient data security and creating more responsive healthcare ecosystems. The research shows how integrated digital infrastructure can bridge existing gaps in healthcare service delivery, improving patient access and healthcare provider coordination. By providing a practical framework for the transformation of digital health infrastructure, this research offers actionable insights for healthcare providers, technology developers and policymakers seeking to modernize healthcare service delivery in an increasingly digital world.
Keywords: decentralized healthcare, digital health, healthcare systems transformation, patient-centric technology
Procedia PDF Downloads 6
2208 The Use of Microbiological Methods to Reduce Aflatoxin M1 in Cheese
Authors: Bruna Goncalves, Jennifer Henck, Romulo Uliana, Eliana Kamimura, Carlos Oliveira, Carlos Corassin
Abstract:
Studies have shown evidence of human exposure to aflatoxin M1 due to the consumption of contaminated milk and dairy products (mainly cheeses). This poses a great risk to public health, since milk and milk products are frequently consumed by portions of the population considered immunosuppressed, such as children and the elderly. Knowledge of the negative impacts of aflatoxins on health and economics has led to investigations of strategies to prevent their formation in food, as well as to eliminate, inactivate or reduce the bioavailability of these toxins in contaminated products. This study evaluated the effect of microbiological methods using lactic acid bacteria on aflatoxin M1 (AFM1) reduction in Minas Frescal cheese (a typical Brazilian product, among the most consumed cheeses in Brazil) spiked with 1 µg/L AFM1. Inactivated lactic acid bacteria (0.5%, v/v, of L. rhamnosus and L. lactis) were added during the cheese production process. Nine cheeses were produced, divided into three treatments: negative controls (without AFM1 or lactic acid bacteria), positive controls (AFM1 only), and lactic acid bacteria + AFM1. Samples of cheese were collected on days 2, 10, 20 and 30 after the date of production and submitted to composition analyses and determination of AFM1 by high-performance liquid chromatography. The reductions of AFM1 in cheese by lactic acid bacteria at the end of the trial indicate a potential application of inactivated lactic acid bacteria in reducing the bioavailability of AFM1 in Minas Frescal cheese without physical-chemical and microbiological modifications during the 30-day experimental period. The authors would like to thank the São Paulo Research Foundation (FAPESP), grants #2017/20081-6 and #2017/19683-1.
Keywords: aflatoxin, milk, minas frescal cheese, decontamination
Procedia PDF Downloads 196
2207 Assessing Female Students' Understanding of the Solar System Concepts by Implementing I-Cube Technology
Authors: Elham Ghazi Mohammad
Abstract:
This study examined female students' understanding of solar system concepts through the use of I-Cube technology, a virtual reality technology. The study was conducted at Qatar University with samples of eighth and ninth preparatory grade students in the State of Qatar. The research framework comprises designated quantitative research designs and methods of data collection and analysis, including pre- and post-conceptual exams. The research is based on an experimental design that focuses on students' performance on conceptual questions. A group of 120 students from the eighth and ninth grades was divided into two pools of 60 students each, representing the designated control and treatment groups; the students were selected randomly from the eighth and ninth grades. The solar system lesson of interest was taught by teacher candidates (senior students at the College of Education at QU), who taught both the experimental group (integrating the I-Cube) in the virtual lab at Qatar University and the control group (without integrating this technology) in one of the independent schools in the State of Qatar. It is noteworthy that students usually face difficulties learning by imagining real situations, such as in solar system and inner planet lessons. Collected data were statistically analyzed using one-way ANOVA and one-way ANCOVA in SPSS Statistics. The obtained results revealed that integrating I-Cube technology significantly enhanced female students' conceptual understanding of the solar system. Interestingly, our findings demonstrated the applicability of I-Cube technology toward enhancing students' understanding of subjects of interest within the basic sciences.
Keywords: virtual lab, integrating technology, I-Cube, solar system
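The one-way ANOVA applied to the control and treatment groups' exam scores reduces to comparing between-group variance against within-group variance. A minimal sketch follows; the score values are invented for illustration and are not the study's data.

```python
# One-way ANOVA F statistic computed from first principles.
def one_way_anova_f(*groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical post-test conceptual exam scores for the two groups
control = [62, 58, 65, 60, 64]
treatment = [74, 70, 77, 72, 69]
print(f"F = {one_way_anova_f(control, treatment):.2f}")
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) is what licenses the "significantly enhanced" conclusion; SPSS performs the same computation plus the p-value lookup.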
Procedia PDF Downloads 241
2206 A Predictive Model of Supply and Demand in the State of Jalisco, Mexico
Authors: M. Gil, R. Montalvo
Abstract:
Business Intelligence (BI) has become a major source of competitive advantage for firms around the world. BI has been defined as the process of data visualization and reporting for understanding what happened and what is happening. Moreover, BI has been studied for its predictive capabilities in the context of trade and financial transactions. The current literature has identified that BI permits managers to identify market trends, understand customer relations, and predict demand for their products and services. This last capability of BI has been of special interest to academics, specifically due to its power to build predictive models adaptable to specific time horizons and geographical regions. However, the current BI literature focuses on predicting specific markets and industries, because the impact of such predictive models was relevant to specific industries or organizations. The existing literature has not developed a predictive BI model that takes into consideration the whole economy of a geographical area. This paper seeks to create a predictive BI model that shows the bigger picture of a geographical area. It uses a data set from the Secretary of Economic Development of the state of Jalisco, Mexico, which includes data from all the commercial transactions that occurred in the state in recent years. By analyzing this data set, it is possible to generate a BI model that predicts supply and demand for specific industries across the state of Jalisco. This research makes at least three contributions. Firstly, a methodological contribution to the BI literature by generating the predictive supply and demand model. Secondly, a theoretical contribution to the current understanding of BI: the model presented in this paper incorporates the whole picture of the economic field instead of focusing on a specific industry.
Lastly, a practical contribution relevant to local governments that seek to improve their economic performance by implementing BI in their policy planning.
Keywords: business intelligence, predictive model, supply and demand, Mexico
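The simplest form of the predictive model described, forecasting next-period demand for one industry from its transaction history, is an ordinary least squares trend fit. The monthly figures below are invented for illustration; the Jalisco data set itself is not reproduced in the abstract.

```python
# OLS linear trend fit and one-step-ahead demand forecast.
def fit_linear_trend(y):
    """Fit y against time index 0..n-1; return (slope, intercept)."""
    n = len(y)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(y) / n
    slope = (sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return slope, intercept

# Hypothetical monthly transaction volume for one industry in Jalisco
monthly_demand = [120, 132, 129, 141, 150, 158, 155, 167]
slope, intercept = fit_linear_trend(monthly_demand)
forecast = intercept + slope * len(monthly_demand)  # next-month prediction
print(f"trend: {slope:.2f}/month, next-month forecast: {forecast:.1f}")
```

A whole-economy model of the kind the paper proposes would repeat a fit like this (or a richer one) per industry and aggregate, rather than modeling a single market in isolation.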
Procedia PDF Downloads 123
2205 The Connection between Body Composition and Blood Samples Results in Aesthetic Sports
Authors: Réka Kovács, György Téglásy, Szilvia Boros
Abstract:
Introduction and Aim of the Study: Low body fat percentage frequently occurs in aesthetic sports. Because of unrealistic expectations, the quantity and quality of these athletes' nutritional intake are often inadequate. This can be linked to several health issues that appear in blood test results (iron, ferritin, creatine kinase, etc.). Our retrospective study aimed to investigate the connection between body composition (InBody 770 monitor) and blood test results among elite adolescent (14-18 years) and adult (19-28 years) aesthetic athletes. Methods: Data collection took place between 01.08.2022 and 15.08.2022 at the National Institute for Sports Medicine, Budapest. The final group consisted of 111 athletes (n=111; adolescents: n=68, adults: n=43). We used descriptive statistics, a two-sample t-test, and correlation analysis with Microsoft Office Excel 2007 software. Results were considered significant if p<0.05. Results: In 33.3% (37/111) we found low body fat percentage (girls and women: <12%, boys and men: <8%) and in 64% (71/111) high creatine kinase levels. Differences were found mainly in the adolescent group. We found correlations between body weight and total cholesterol, visceral fat and triglycerides, and hematocrit and iron-binding capacity, as well as between body fat percentage and ferritin levels. Discussion: It is important to start education about sports nutrition at an early age. The connection between low body fat percentage and serum iron, triglyceride, and ferritin levels indicates that the athletes' nutritional intake is inadequate. High blood concentrations of creatine kinase may indicate a lack of proper recovery, which is essential to improve health and performance.
Keywords: body fat percentage, creatine kinase, recovery, sports nutrition
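The correlation analysis described (e.g. body fat percentage against serum ferritin) is a Pearson r computation, which can be sketched as below. The paired values are invented stand-ins for athlete measurements, not the study's data.

```python
# Pearson correlation coefficient computed from first principles.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical athlete data: body fat percentage vs. serum ferritin (ng/mL)
body_fat = [8.2, 11.5, 14.1, 9.8, 16.3, 12.7]
ferritin = [18.0, 26.5, 35.2, 22.1, 41.0, 30.3]
print(f"r = {pearson_r(body_fat, ferritin):.3f}")
```

A positive r of this kind (lower body fat going with lower ferritin) is the pattern the study reads as evidence of inadequate nutritional intake; Excel's CORREL function computes the same quantity.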
Procedia PDF Downloads 129
2204 Agronomic Evaluation of Flax Cultivars (Linum Usitatissimum L.) in Response to Irrigation Intervals
Authors: Emad Rashwan, M. Mousa, Ayman EL Sabagh, Celaleddin Barutcular
Abstract:
Flax is a potential winter crop for Egypt that can be grown for both seed and fiber. The study was conducted during the two successive winter seasons of 2013/2014 and 2014/2015 at the experimental farm of El-Gemmeiza Agricultural Research Station, Agricultural Research Centre, Egypt. The objective of this work was to evaluate the effect of irrigation intervals (25, 35 and 45 days) on the seed yield and quality of flax cultivars (Sakha1, Giza9 and Giza10). The results indicate highly significant differences among irrigation intervals for all studied traits, except oil percentage, which was not significant in both seasons. Irrigating flax plants every 35 days gave the maximum values for all characters. In contrast, irrigation every 45 days gave the minimum values for all characters studied. With respect to cultivars, significant differences in most yield and quality characters were found. Furthermore, the performance of the Sakha1 cultivar was superior in total plant height, main stem diameter, seed index, seed, oil, biological and straw yield/ha, as well as fiber length and fiber fineness. Meanwhile, the Giza9 and Giza10 cultivars were superior in fiber yield/ha and fiber percentage, respectively. The interactions between irrigation intervals and flax cultivars were highly significant for total plant height, main stem diameter, and seed, oil, biological and straw yields/ha. Based on the results, all flax cultivars recorded the maximum values for the major traits measured when irrigated every 35 days.
Keywords: flax, fiber, irrigation intervals, oil, seed yield
Procedia PDF Downloads 255
2203 Unfolding the Affective Atmospheres during the COVID-19 Pandemic Crisis: The Constitution and Performance of Affective Governance in Taiwan
Authors: Sang-Ju Yu
Abstract:
This paper examines the changing essences and effects of 'affective atmosphere' during the COVID-19 pandemic crisis, which facilitated and shaped 'affective governance' in Taiwan. Due to long-term uncertainty and unpredictability, the COVID-19 pandemic not only caused an unprecedented global crisis but also triggered the public's negative emotional responses. This paper unravels how the shortage of Personal Protective Equipment and proliferating fake news heightened people's fear and anxiety, and how specific affective atmospheres were provoked and manipulated to harness the emotional appeals of citizens strategically in Taiwan. Through in-depth interviews with the diverse stakeholders involved, it unfolds the dynamics and strategies of affective governance, wherein public emotions and concerns are now given significant consideration in policy measures as well as in the affective expression of leadership, spatial arrangement, service delivery, and interaction with citizens. Addressing psychosocial and emotional needs has become the core of crisis response mechanisms suited to dynamic affective atmospheres and the pandemic situation. This paper also demonstrates that epidemic prevention and control is not merely the product of neutral or rational policy-making processes, as it is dominated by multiple emotions resulting from unexpected and salient events at different moments. It provides explicit insight into how different prevention scenarios operated effectively through political and affective mobilisation to strengthen the emotional bonding and collective identity which energise collective action. Fundamentally, successful affective governance calls for both negative and positive emotions, for both scientific and political decision-making, for both community and bureaucracy, and for both quality and efficiency of private-public collaboration.
Keywords: affective atmospheres, affective governance, COVID-19 pandemic, private-public collaboration
Procedia PDF Downloads 98
2202 Comparison of Radiation Dosage and Image Quality: Digital Breast Tomosynthesis vs. Full-Field Digital Mammography
Authors: Okhee Woo
Abstract:
Purpose: With increasing concern over individual radiation exposure doses, studies analyzing radiation dosage in breast imaging modalities are required. The aim of this study is to compare radiation dosage and image quality between digital breast tomosynthesis (DBT) and full-field digital mammography (FFDM). Methods and Materials: 303 patients (mean age 52.1 years) who underwent both DBT and FFDM were retrospectively reviewed. Radiation dosage data were obtained using a radiation dose scoring and monitoring program, Radimetrics (Bayer HealthCare, Whippany, NJ). Entrance dose and mean glandular dose for each breast were obtained for both imaging modalities. To compare the image quality of DBT with two-dimensional synthesized mammogram (2DSM) and FFDM, lesion clarity was assessed on a 5-point scale and the better of the two modalities was selected. Interobserver performance was compared with kappa values, and diagnostic accuracy was compared using the McNemar test. The radiation dosage parameters (entrance dose, mean glandular dose) and image quality were compared between the two modalities using the paired t-test and the Wilcoxon rank sum test. Results: For entrance dose and mean glandular dose for each breast, DBT had lower values than FFDM (p-value < 0.0001). Diagnostic accuracy did not differ statistically, but the lesion clarity score was higher for DBT with 2DSM, and DBT was chosen as the better modality compared with FFDM. Conclusion: DBT showed a lower radiation entrance dose and lower mean glandular doses to both breasts compared with FFDM. Also, DBT with 2DSM had better image quality than FFDM with similar diagnostic accuracy, suggesting that DBT may have the potential to be performed as an alternative to FFDM.
Keywords: radiation dose, DBT, digital mammography, image quality
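The paired t-test used for the dose comparison works on per-patient differences, since each patient received both examinations. A minimal sketch follows; the mean glandular dose values (mGy) are invented for illustration, not the study's measurements.

```python
# Paired t statistic: mean of per-patient differences over its standard error.
def paired_t(xs, ys):
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / (var_d / n) ** 0.5

ffdm_mgd = [1.9, 2.1, 1.8, 2.4, 2.0, 2.2]   # hypothetical FFDM doses (mGy)
dbt_mgd  = [1.5, 1.7, 1.4, 1.9, 1.6, 1.8]   # hypothetical DBT doses (mGy)
t = paired_t(ffdm_mgd, dbt_mgd)
print(f"t = {t:.2f} (positive: FFDM dose higher than DBT)")
```

Pairing is the right design choice here: it removes between-patient variation (breast thickness, composition), so even a modest per-patient dose difference yields a large t, consistent with the p < 0.0001 reported.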
Procedia PDF Downloads 351
2201 Beyond Diagnosis: Innovative Instructional Methods for Children with Multiple Disabilities
Authors: Patricia Kopetz
Abstract:
Too often our youngest children with disabilities receive diagnostic labels and accompanying treatment plans based upon perceptions that the children are of limited aptitude and/or ambition. However, children of varied ability levels who are diagnosed with 'multiple disabilities' can participate and excel in school-based instruction that aligns with their desires, interests, and fortitude, criteria not foretold by scores on standardized assessments. The paper represents theoretical work in innovative special education instruction and presents research materials, some developed by the author herself. The majority of students with disabilities are now served in general education settings in the United States, embracing inclusive practices in our schools. 'There is now a stronger call for special education to step up and improve efficiency, implement evidence-based practices, and provide greater accountability on key performance indicators that support successful academic and post-school outcomes for students with disabilities.' For example, in the United States, the Office of Special Education Programs (OSEP) is focusing on results-driven indicators to improve outcomes for students with disabilities. School personnel appreciate the implications of research-driven approaches for students diagnosed with multiple disabilities and aim to align their practices with this focus. The paper presented will provide updates on current theoretical principles and perspectives, and explore advancements in the latest evidence-based and results-driven instructional practices that can motivate children with multiple disabilities to advance their skills and engage in learning activities that are nonconventional, innovative, and proven successful.
Keywords: childhood special education, educational technology, innovative instruction, multiple disabilities
Procedia PDF Downloads 250
2200 Mobile Network Users Amidst Ultra-Dense Networks in 5G Using an Improved Coordinated Multipoint (CoMP) Technology
Authors: Johnson O. Adeogo, Ayodele S. Oluwole, O. Akinsanmi, Olawale J. Olaluyi
Abstract:
In 5G networks, supporting very high traffic density, most especially in densely populated areas, is one of the key requirements. Radiation reduction becomes one of the major concerns for securing the future life of mobile network users in ultra-dense network areas using an improved coordinated multipoint technology. Coordinated Multi-Point (CoMP) is based on transmission and/or reception at multiple separated points, with improved coordination among them, to actively manage interference for the users. Small cells have two major objectives. First, they provide good coverage and/or performance: network users can maintain a good-quality signal by connecting directly to the cell. Second, with CoMP, multiple base stations (MBS) cooperate by transmitting and/or receiving at the same time in order to reduce the possibility of increased electromagnetic radiation. Therefore, the influence of a screen guard and rubber condom on mobile transceivers, as one major source of electromagnetic radiation, was investigated for mobile network users amidst ultra-dense networks in 5G. The results were compared with the same mobile transceivers without screen guards and rubber condoms under the same network conditions. A 5 cm distance from the mobile transceivers was measured with the help of a ruler, and the intensity of Radio Frequency (RF) radiation was measured using an RF meter. The results show that the intensity of radiation from the various mobile transceivers without screen guards and condoms was higher than from the mobile transceivers with screen guards and condoms while a call was in progress at both ends.
Keywords: ultra-dense networks, mobile network users, 5G, coordinated multi-point
Procedia PDF Downloads 107
2199 A Comparative Analysis of (De)legitimation Strategies in Selected African Inaugural Speeches
Authors: Lily Chimuanya, Ehioghae Esther
Abstract:
Language, a versatile and sophisticated tool, is fundamental to humankind, especially within the realm of politics. In this dynamic world, political leaders adroitly use language in a strategic performance aimed at shaping the opinions of discerning audiences. This nuanced interplay is marked by different rhetorical strategies, meticulously attuned to contextual factors (cultural, ideological, and political) to achieve multifaceted persuasive objectives. This study investigates the (de)legitimation strategies inherent in African presidential inaugural speeches: African leaders not only state their policy agenda through inaugural speeches but also subtly engage in a dance of legitimation and delegitimation, pursuing the twofold objective of strengthening the credibility of their own administration and, at times, undermining the record of the past administration. Drawing insights from two different legitimation models and a dataset of four African presidential inaugural speeches obtained from authentic websites, the study describes the roles of authorisation, rationalisation, moral evaluation, altruism, and mythopoesis in unmasking the structure of political discourse. The analysis takes a mixed-method approach to unpack the (de)legitimation strategies embedded in the carefully chosen speeches. The focus extends beyond a superficial exploration and delves into the linguistic elements that form the basis of presidential discourse. In conclusion, the study maps the nuanced landscape of language as a potent tool in politics, with each strategy contributing to the overall rhetorical impact and shaping the narrative. From this perspective, the study argues that presidential inaugural speeches are not merely linguistic exercises but viable instruments that influence perceptions and legitimise authority.Keywords: CDA, legitimation, inaugural speeches, delegitimation
Procedia PDF Downloads 70
2198 The Role of Structured Input in PI in the Acquisition of English Relative Clauses by L1 Saudi Arabic Speakers
Authors: Faraj Alhamami
Abstract:
Research on the effects of classroom input through structured input activities has addressed two main lines of inquiry: (1) measuring the effects of structured input activities as a possible causative factor in processing instruction (PI) and (2) comparing structured input practice with other types of instruction or with no-training controls. Within this line of research, the main purpose of this classroom-based study was to establish which type of activity is most effective in processing instruction: the explicit information component with referential activities only, the explicit information component with affective activities only, or a combination of the two. The instruments were (a) a grammaticality judgment task, (b) a picture-cued task, and (c) a translation task, administered as pre-tests, post-tests, and delayed post-tests seven weeks after the intervention. While testing is ongoing, preliminary results show that, at pre-test, all five groups, namely the processing instruction group with both activity types (RA), the traditional instruction group (TI), the referential group (R), the affective group (A), and the control group, performed at a comparable chance or baseline level across the three outcome measures. At the post-test stage, however, the RA, TI, R, and A groups demonstrated significant improvement compared with the control group in all tasks. Furthermore, a significant difference was observed between the PI groups (RA, R, and A) and the traditional group at post-test and delayed post-test on some of the tasks. The findings therefore suggest that the sole application and/or the combination of structured input activities succeeded in helping Saudi learners of English make initial form-meaning connections and acquire restrictive relative clauses (RRCs) in both the short and the long term.Keywords: input processing, processing instruction, MOGUL, structured input activities
Procedia PDF Downloads 80
2197 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, automating the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits operational efficiency and production capacity. In addition, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, so substantial time is needed to reconfigure production lines when switching products, which degrades overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that respect field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. This method enhances the configuration efficiency of CNC production lines and establishes an adaptive capability that allows the production line to respond promptly to changes in demand. It minimizes the production losses caused by layout reconfiguration, ensuring that the CNC production line maintains optimal efficiency even when adjustments are required by fluctuating demand.Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, layout optimization, operations sequence
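The layout-ranking step described above rests on NSGA-II's fast non-dominated sorting. A minimal Python sketch of that sorting step (not the authors' implementation; the example objective values are invented for illustration) treats completion time and negated average utilization as two minimization objectives:

```python
def dominates(a, b):
    """True if a dominates b under minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Fast non-dominated sorting: return fronts as lists of indices,
    with front 0 being the Pareto-optimal set."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # indices each point dominates
    dom_count = [0] * n                    # how many points dominate index i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated_by[i].append(j)
                dom_count[j] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Each candidate layout scored by (completion time, -average utilization).
layouts = [(10, -0.8), (12, -0.9), (11, -0.7), (15, -0.6)]
print(non_dominated_sort(layouts))  # → [[0, 1], [2], [3]]
```

In full NSGA-II, these fronts are combined with crowding-distance selection, crossover, and mutation; the sorting above is only the ranking kernel.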
Procedia PDF Downloads 24
2196 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations
Authors: Kuniyoshi Abe
Abstract:
Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior can exhibit long stagnation phases. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction. The resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants in numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose the stabilization strategy for the parallel variants.Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant
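As a point of reference, the standard (unpreconditioned) Bi-CGSTAB iteration can be sketched in pure Python. This is the textbook algorithm, not the authors' code or the parallel variants, and the small dense test system is purely illustrative; real use cases involve large sparse matrices and a library solver:

```python
def bicgstab(A, b, x0=None, tol=1e-10, maxiter=100):
    """Textbook (unpreconditioned) Bi-CGSTAB for Ax = b, with A a dense
    list of rows; small problems only -- use a sparse library in practice."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = list(x0) if x0 is not None else [0.0] * n
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
    rhat = list(r)                          # fixed shadow residual
    rho = alpha = omega = 1.0
    p = [0.0] * n
    v = [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(rhat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(rhat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:          # converged at the half step
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        rho = rho_new
        if dot(r, r) ** 0.5 < tol:
            break
    return x

# Small symmetric positive definite test system; exact solution is [1/11, 7/11].
x = bicgstab([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print(x)  # ≈ [0.0909, 0.6364]
```

The two inner products per half step (for alpha and omega) are exactly the global reductions that the parallel variants discussed in the paper restructure or overlap with communication.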
Procedia PDF Downloads 165
2195 Identification of Risks Associated with Process Automation Systems
Authors: J. K. Visser, H. T. Malan
Abstract:
A need exists to identify the sources of risk associated with process automation systems within petrochemical companies and similar energy-related industries. These companies use many different process automation technologies in their value chains. A crucial part of a process automation system is the information technology component in the supervisory control layer. The ever-changing technology within the process automation layers and the rate at which it advances pose a risk to safe and predictable automation system performance. The age of the automation equipment also presents challenges to the operations and maintenance managers of a plant, owing to obsolescence and the unavailability of spare parts. The main objective of this research was to determine the risk sources associated with the equipment that forms part of the process automation systems. A secondary objective was to establish whether technology managers and technicians were aware of the risks and shared the same viewpoint on the importance of the risks associated with automation systems. A conceptual model of the risk sources of automation systems was formulated from models and frameworks in the literature. This model comprised six categories of risk, which form the basis for identifying specific risks. It was used to develop a questionnaire that was sent to 172 instrument technicians and technology managers in the company to obtain primary data. Seventy-five completed and useful responses were received. These responses were analyzed statistically to determine the highest-ranked risk sources and to establish whether there was a difference in opinion between technology managers and technicians. The most important risks revealed by this study are: (1) the lack of skilled technicians, (2) the integration capability of third-party system software, (3) the reliability of the process automation hardware, (4) the excessive costs of performing maintenance and migrations on process automation systems, and (5) the requirement for third-party communication interfacing compatibility as well as real-time communication networks.Keywords: distributed control system, identification of risks, information technology, process automation system
Procedia PDF Downloads 140
2194 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation
Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon
Abstract:
This paper presents a home-based robot-rehabilitation instrument, called "MAGNI Dynamics", that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, correlating the user's performance to different stiffness factors. The vision module uses the Kinect's skeletal tracking to monitor the user's effort in an unobtrusive and safe way, by estimating the torque that affects the user's arm. The system's torque estimations are validated against electromyographic data captured from primitive arm motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user improve his or her motor skills. Finally, potential areas for future research are discussed that address how a rehabilitation robotic framework may incorporate multi-sensing data to improve the user's recovery process.Keywords: human-robot interaction, Kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence
Procedia PDF Downloads 332
2193 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods, showcasing significant improvements in accuracy and computational efficiency compared with conventional methods and indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications to real-world problems, offering a promising alternative to existing numerical techniques.Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods
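For contrast with the hybrid block approach, the classical finite-difference treatment of a linear BVP can be sketched as follows. The test problem y'' = -y, y(0)=0, y(1)=1 (exact solution sin x / sin 1) is an illustrative assumption, not taken from the paper:

```python
import math

def solve_bvp_fd(n=99):
    """Solve y'' = -y, y(0)=0, y(1)=1 with central differences on n interior
    points; the resulting tridiagonal system is solved by the Thomas algorithm.
    Discretization: y_{i-1} + (h^2 - 2) y_i + y_{i+1} = 0."""
    h = 1.0 / (n + 1)
    a = [1.0] * n            # sub-diagonal (a[0] unused)
    d = [h * h - 2.0] * n    # main diagonal
    c = [1.0] * n            # super-diagonal
    rhs = [0.0] * n
    rhs[-1] = -1.0           # boundary value y(1) = 1 moved to the right side
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        rhs[i] -= m * rhs[i - 1]
    y = [0.0] * n
    y[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        y[i] = (rhs[i] - c[i] * y[i + 1]) / d[i]
    return y

y = solve_bvp_fd(99)                   # grid points x_i = i / 100
exact = math.sin(0.5) / math.sin(1.0)  # analytic value at x = 0.5
print(abs(y[49] - exact))              # O(h^2) discretization error
```

The hybrid block method of the paper targets exactly the accuracy and stability limitations of this second-order baseline, particularly for nonlinear problems where the linear-system structure above no longer applies directly.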
Procedia PDF Downloads 37
2192 Another Beautiful Sounds: Building the Memory of Sound of Peddling in Beijing with Digital Technology
Authors: Dan Wang, Qing Ma, Xiaodan Wang, Tianjiao Qi
Abstract:
The sound of peddling in Beijing, also called "yo-heave-ho" or the "cry of one's wares", is a unique folk culture usually found in Beijing's hutongs. For the civilians of Beijing, the sound of peddling is part of their childhood; for those who love the traditional culture of Beijing, it is an old song singing of the local conditions and customs of the ancient city. For example, out of his great appreciation, the British poet Osbert Stewart once described the sounds of peddling he had heard in Beijing as a street orchestra performance in the article "Beijing's sound and color". This research aims to collect and integrate voice and photo resources and historical materials concerning the sound of peddling in Beijing by digital technology, in order to protect this intangible cultural heritage and pass on the city's memory. With this goal in mind, the first stage is to collect and record all the materials and resources on the basis of historical document study and interviews with civilians and performers. A metadata scheme (which refers to domestic and international standards such as the "Audio Data Processing Standards in the National Library", DC, VRA, and CDWA) is then set up to describe, process, and organize the sounds of peddling into a database. To fully present the traditional culture of the sound of peddling in Beijing, web design and GIS technology are utilized to establish a website, and offline exhibitions and events are planned at which people can simulate and learn the sound of peddling using VR/AR technology. All resources are open to the public, and civilians can share the digital memory through both the offline experiential activities and online interaction. With all these attempts, a multimedia narrative platform has been established to record the sound of peddling in old Beijing multi-dimensionally, with text, images, audio, video, and so on.Keywords: sound of peddling, GIS, metadata scheme, VR/AR technology
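A metadata record in such a scheme might look like the following minimal sketch; the Dublin Core field choice and all sample values are illustrative assumptions, not the project's actual scheme:

```python
# A minimal Dublin Core-style record for one peddling-sound recording.
# Field names follow the DC element set; the values are invented examples.
record = {
    "dc:title": "Sweet-potato seller's cry, Dongsi hutong",
    "dc:creator": "Unknown street vendor",
    "dc:date": "1998",
    "dc:type": "Sound",
    "dc:format": "audio/wav",
    "dc:coverage": "Beijing, Dongcheng District",
    "dc:language": "zh",
}

# A simple completeness check a cataloguing pipeline might apply.
REQUIRED = {"dc:title", "dc:type", "dc:format"}
missing = REQUIRED - record.keys()
assert not missing, f"incomplete record: {missing}"
print(sorted(record))
```

Records in this shape serialize naturally to XML or JSON for the database and website layers mentioned above.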
Procedia PDF Downloads 306
2191 Biogenic Amines Production during RAS Cheese Ripening
Authors: Amr Amer
Abstract:
Cheeses are among the high-protein foodstuffs in which enzymatic and microbial activities cause the formation of biogenic amines from amino acid decarboxylation. The amount of biogenic amines in cheese may act as a useful indicator of the hygienic quality of the product; in other words, their presence in cheese is related to its spoilage and safety. The formation of biogenic amines during the ripening of Ras cheese (an Egyptian hard cheese) was investigated over 4 months. Three batches of Ras cheese were manufactured using the traditional Egyptian method. From each batch, samples were collected 1, 7, 15, 30, 60, 90, and 120 days after cheese manufacture. The concentrations of the biogenic amines tyramine, histamine, cadaverine, and tryptamine were analyzed by high-performance liquid chromatography (HPLC). There was a significant increase (P<0.05) in tyramine levels, from 4.34±0.07 mg/100 g on the first day of storage to 88.77±0.14 mg/100 g at day 120 of storage. Histamine and cadaverine levels followed the same increasing pattern as tyramine, reaching 64.94±0.10 and 28.28±0.08 mg/100 g, respectively, at day 120 of storage. In contrast, the tryptamine level fluctuated during the ripening period, decreasing from 3.24±0.06 to 2.66±0.11 mg/100 g at day 60 of storage and then reaching 5.38±0.08 mg/100 g at day 120. Biogenic amines can be formed in cheese during production and storage: many variables, such as pH, salt concentration, and bacterial activity, as well as moisture, storage temperature, and ripening time, play a relevant role in their formation. Compared with the standard recommended by the Food and Drug Administration (FDA, 2001), the high levels of biogenic amines in the various Ras cheeses consumed in Egypt exceeded the permissible value (10 mg%), which appears to pose a threat to public health. In this study, the presence of high concentrations of the biogenic amines tyramine, histamine, cadaverine, and tryptamine in Egyptian Ras cheeses reflects the poor hygienic conditions under which they were produced and stored. Accordingly, the levels of biogenic amines in different cheeses should be brought into accordance with the safe permissible limit recommended by the FDA to ensure human safety.Keywords: Ras cheese, biogenic amines, tyramine, histamine, cadaverine
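The comparison against the FDA limit reduces to a simple threshold check; a sketch using the day-120 means reported above (reading 10 mg% as 10 mg/100 g):

```python
# Day-120 mean concentrations reported in the abstract (mg/100 g)
amines = {"tyramine": 88.77, "histamine": 64.94,
          "cadaverine": 28.28, "tryptamine": 5.38}
LIMIT = 10.0  # FDA-recommended permissible level, mg/100 g

# For each amine over the limit, report how many times over it is.
exceeding = {name: round(conc / LIMIT, 1)
             for name, conc in amines.items() if conc > LIMIT}
print(exceeding)  # → {'tyramine': 8.9, 'histamine': 6.5, 'cadaverine': 2.8}
```

Only tryptamine stays below the limit, which matches the abstract's conclusion that the ripened cheeses exceed the permissible value.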
Procedia PDF Downloads 437
2190 Innovation of a New Plant Tissue Culture Medium for Large Scale Plantlet Production in Potato (Solanum tuberosum L.)
Authors: Ekramul Hoque, Zinat Ara Eakut Zarin, Ershad Ali
Abstract:
The growth and development of explants are governed by the nutrient medium. Ammonium nitrate (NH4NO3) is a major salt of stock solution 1 in the preparation of tissue culture media. However, it has serious demerits: it can be used for the preparation of bombs and for other destructive activities, and hence it is totally banned in Bangladesh. A new chemical was identified as a substitute for ammonium nitrate, and the concentrations of the other major and minor salt ingredients were modified from those of the MS medium, so the formulation of the new medium differs entirely from the MS nutrient composition. The widely used MS medium composition served as the first check treatment, and MS powder (Duchefa Biochemie, The Netherlands) was used as the second check treatment. The experiments were carried out at the Department of Biotechnology, Sher-e-Bangla Agricultural University, Dhaka, Bangladesh. Two potato varieties, Diamant and Asterix, were used as experimental materials. The regeneration potential of potato on the new medium was the best compared with the two check treatments: node number, leaf number, shoot length, and root length were all highest on the new medium, and the plantlets were healthy, robust, and strong compared with the plantlets regenerated from the check treatments. Three subsequent sub-cultures were made on the new medium to observe the growth pattern of the plantlets, and the new medium again showed the best performance in all the parameters under study. The regenerated plantlets produced good-quality minitubers under field conditions. Hence, it is concluded that a new plant tissue culture medium has been developed by the Department of Biotechnology, Sher-e-Bangla Agricultural University, Dhaka, Bangladesh, under the leadership of Professor Dr. Md. Ekramul Hoque.Keywords: new medium, potato, regeneration, ammonium nitrate
Procedia PDF Downloads 97
2189 Urine Neutrophil Gelatinase-Associated Lipocalin as an Early Marker of Acute Kidney Injury in Hematopoietic Stem Cell Transplantation Patients
Authors: Sara Ataei, Maryam Taghizadeh-Ghehi, Amir Sarayani, Asieh Ashouri, Amirhossein Moslehi, Molouk Hadjibabaie, Kheirollah Gholami
Abstract:
Background: Acute kidney injury (AKI) is common in hematopoietic stem cell transplantation (HSCT) patients, with an incidence of 21-73%. Prevention and early diagnosis reduce the frequency and severity of this complication, so predictive biomarkers are of major importance for timely diagnosis. Neutrophil gelatinase-associated lipocalin (NGAL) is a widely investigated novel biomarker for the early diagnosis of AKI. However, no study has assessed NGAL for AKI diagnosis in HSCT patients. Methods: We performed further analyses on data gathered in our recent trial to evaluate the performance of urine NGAL (uNGAL) as an indicator of AKI in 72 allogeneic HSCT patients. AKI diagnosis and severity were assessed using the Risk-Injury-Failure-Loss-End-stage renal disease (RIFLE) and AKI Network criteria. We assessed uNGAL on days -6, -3, +3, +9, and +15. Results: Time-dependent Cox regression analysis revealed a statistically significant relationship between uNGAL and AKI occurrence (HR=1.04 (1.008-1.07), P=0.01). There was a relationship between the uNGAL day +9 to baseline ratio and the incidence of AKI (unadjusted HR=1.047 (1.012-1.083), P<0.01). The area under the receiver-operating characteristic curve for the day +9 to baseline ratio was 0.86 (0.74-0.99, P<0.01), and a cut-off value of 2.62 was 85% sensitive and 83% specific in predicting AKI. Conclusions: Our results indicated that an increase in uNGAL augmented the risk of AKI and that the change in day +9 uNGAL concentration from baseline could be of value for predicting AKI in HSCT patients. Additionally, uNGAL changes preceded serum creatinine rises by nearly 2 days.Keywords: acute kidney injury, hematopoietic stem cell transplantation, neutrophil gelatinase-associated lipocalin, receiver-operating characteristic curve
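The reported sensitivity and specificity at the 2.62 cutoff come from a standard confusion-matrix computation, sketched here on synthetic ratios (invented for illustration, not the trial's data):

```python
def sens_spec(ratios, labels, cutoff):
    """Sensitivity and specificity of the rule 'ratio >= cutoff predicts AKI',
    where label 1 marks patients who actually developed AKI."""
    tp = sum(1 for r, y in zip(ratios, labels) if y == 1 and r >= cutoff)
    fn = sum(1 for r, y in zip(ratios, labels) if y == 1 and r < cutoff)
    tn = sum(1 for r, y in zip(ratios, labels) if y == 0 and r < cutoff)
    fp = sum(1 for r, y in zip(ratios, labels) if y == 0 and r >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic day +9 / baseline uNGAL ratios and AKI outcome labels.
ratios = [0.9, 1.4, 3.1, 2.7, 0.8, 4.2, 2.9, 2.0]
labels = [0, 0, 1, 1, 0, 1, 0, 1]
print(sens_spec(ratios, labels, 2.62))  # → (0.75, 0.75)
```

Sweeping the cutoff over all observed ratios and plotting sensitivity against (1 - specificity) yields the ROC curve whose area the abstract reports as 0.86.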
Procedia PDF Downloads 410
2188 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between their characters. Exhaustive datasets are required to recognize the characters, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varied lighting. A dataset of 45,150 images containing printed Kannada characters was created: the Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a convolutional neural network (CNN) and a visual attention network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for the Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with the Canny edge features applied than with a model that used only the original grayscale images. When tested per language, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy in identifying and categorizing characters from these scripts.Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
Procedia PDF Downloads 54
2187 Primary School Teachers’ Conceptual and Procedural Knowledge of Rational Number and Its Effects on Pupils’ Achievement in Rational Numbers
Authors: R. M. Kashim
Abstract:
The study investigated primary school teachers’ conceptual and procedural knowledge of rational numbers and its effects on pupils’ achievement in rational numbers. Specifically, it investigated primary school teachers’ level of conceptual knowledge about rational numbers, primary school teachers’ level of procedural knowledge about rational numbers, and the effects of teachers’ conceptual and procedural knowledge on their pupils’ understanding of rational numbers in primary schools. The study was carried out in the Bauchi metropolis in Bauchi State, Nigeria. The design of the study was a multi-stage design: the first stage was a descriptive design, and the second stage involved a pre-test, post-test quasi-experimental design. Two instruments were used for data collection: the Conceptual and Procedural Knowledge Test (CPKT) and the Rational Number Achievement Test (RAT). The population of the study comprised three mathematics teachers holding the Nigerian Certificate in Education (NCE) and teaching primary six, together with the 210 pupils in their intact classes. The data collected were analyzed using means, standard deviations, analysis of variance, analysis of covariance, and t-tests. The findings indicated that pupils taught rational numbers by a teacher with high conceptual and procedural knowledge understand and perform better than pupils taught by a teacher with low conceptual and procedural knowledge of rational numbers. It is, therefore, recommended that teachers in primary schools be encouraged to enrich their conceptual knowledge of rational numbers. Also, teachers’ superior performance in procedural knowledge of rational numbers should not be allowed to become an obstacle to understanding: teachers’ conceptual and procedural knowledge of rational numbers should be balanced so that primary school pupils experience better teaching and learning of rational numbers in our contemporary schools.Keywords: conceptual, procedural knowledge, rational number, pupils
Procedia PDF Downloads 453
2186 Simultaneous Removal of Phosphate and Ammonium from Eutrophic Water Using Dolochar Based Media Filter
Authors: Prangya Ranjan Rout, Rajesh Roshan Dash, Puspendu Bhunia
Abstract:
With the aim of enhancing nutrient (ammonium and phosphate) removal from eutrophic wastewater at reduced cost, a novel media-based multi-stage bio-filter with a drop aeration facility was developed in this work. The bio-filter was packed with a discarded sponge iron industry by-product, dolochar, primarily to remove phosphate via a physicochemical approach. In the multi-stage bio-filter, drop aeration was achieved as the gravity-fed wastewater percolated through the filter media and dropped down from stage to stage. Ammonium present in the wastewater was adsorbed by the filter media and the biomass grown on it and was subsequently converted to nitrate through biological nitrification under the aerobic conditions realized by drop aeration. The performance of the bio-filter in treating real eutrophic wastewater was monitored for a period of about 2 months. The influent phosphate concentration was in the range of 16-19 mg/L, and the ammonium concentration was in the range of 65-78 mg/L. The average nutrient removal efficiencies observed during the study period were 95.2% for phosphate and 88.7% for ammonium, with mean final effluent concentrations of 0.91 and 8.74 mg/L, respectively. Furthermore, the subsequent release of nutrients from the saturated filter media after completion of the treatment process was examined in this study, and thin-layer funnel analytical test results reveal the slow nutrient-release nature of spent dolochar, thereby recommending its potential agricultural application. Thus, the bio-filter displays immense prospects for treating real eutrophic wastewater, significantly decreasing the level of nutrients, keeping the effluent nutrient concentrations within the permissible limits and, more importantly, facilitating the conversion of waste materials into usable ones.Keywords: ammonium removal, phosphate removal, multi-stage bio-filter, dolochar
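The reported efficiencies follow from the standard percent-removal formula; in the sketch below, the mean influent concentrations are assumed values chosen within the reported ranges so as to reproduce the stated efficiencies:

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a nutrient across the bio-filter,
    from influent (c_in) and effluent (c_out) concentrations in mg/L."""
    return 100.0 * (c_in - c_out) / c_in

# Assumed mean influent values (mg/L) within the reported 16-19 and 65-78 ranges.
print(round(removal_efficiency(18.96, 0.91), 1))  # phosphate → 95.2
print(round(removal_efficiency(77.35, 8.74), 1))  # ammonium → 88.7
```

The same formula applied per sampling event, rather than to means, would give the per-day efficiencies averaged over the 2-month monitoring period.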
Procedia PDF Downloads 194