Search results for: gender specific data
26959 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System
Authors: Abdul-Rahman Al-Ali
Abstract:
As the number of mobile devices grows exponentially, about 50 billion devices are estimated to be connected to the Internet by the year 2020, which amounts to an average of eight connected devices per person worldwide by the end of this decade. These 50 billion devices are not only mobile phones and data-browsing gadgets but also machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) has become one of the key emerging technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IED) and Distributed Energy Resources (DER) are major IoT objects that can be addressed using IPv6. These objects are called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires high-speed computing infrastructure, widespread computer networks, big data storage, and software and platform services. A utility company's control and data centers cannot handle such a large number of devices, high-speed processing, and massive data storage. Building large data center infrastructure takes a long time; it also requires widespread communication networks and huge capital investment. Maintaining and upgrading control and data centers' infrastructure and communication networks, as well as updating and renewing software licenses, collectively requires additional cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can serve as a smart grid enabler to replace utilities' legacy data centers. The talk will highlight the role of IoT, cloud computing services and their development models within smart grid technologies. Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances
Procedia PDF Downloads 329
26958 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint and because the Master of Engineering Program is actively involved in research of energy efficient buildings, the decision was made to request the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with an average low between -8 and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since the airflow through all leakages of the building will, in reality, happen simultaneously in both directions. A specific detail or situation such as overlapping but unsealed membranes might be airtight in one direction, due to the valve effect, but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio. The building had to be very airtight, and the details for the window and door installations as well as all transitions from walls to roof and floor, the connections of the prefabricated wall panels and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building, the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A) as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details. Keywords: air changes, airtightness, envelope design, industrial building, passive house
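As a quick editorial cross-check (not part of the abstract), the reported result can be restated with the usual definition of air changes per hour at 50 Pa, i.e. the test airflow divided by the net internal volume, using the figures quoted above:

```latex
n_{50} = \frac{\dot{V}_{50}}{V_{\mathrm{net}}}
\quad\Rightarrow\quad
\dot{V}_{50} = n_{50}\, V_{\mathrm{net}} = 0.07\ \mathrm{h^{-1}} \times 7383\ \mathrm{m^{3}}
\approx 517\ \mathrm{m^{3}/h} \approx 0.14\ \mathrm{m^{3}/s}.
```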
Procedia PDF Downloads 149
26957 Statistical Analysis of Interferon-γ for the Effectiveness of an Anti-Tuberculous Treatment
Authors: Shishen Xie, Yingda L. Xie
Abstract:
Tuberculosis (TB) is a potentially serious infectious disease that remains a health concern. The Interferon Gamma Release Assay (IGRA) is a blood test used to determine whether an individual is positive or negative for tuberculosis infection. This study applies statistical analysis to the clinical interferon-gamma levels of seventy-three subjects diagnosed with pulmonary TB and undergoing anti-tuberculous treatment. Data analysis is performed to determine whether there is a significant decline in interferon-gamma levels for the subjects over a period of six months, and to infer whether the anti-tuberculous treatment is effective. Keywords: data analysis, interferon gamma release assay, statistical methods, tuberculosis infection
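To make the kind of comparison described above concrete, a minimal sketch is given below (Python with SciPy); the interferon-gamma values are invented placeholders, not the study's clinical data, and the paired design is an assumption made for illustration.

```python
# Minimal sketch (not the authors' analysis): paired comparison of interferon-gamma
# levels at baseline and after six months of anti-tuberculous treatment.
import numpy as np
from scipy import stats

baseline = np.array([4.2, 3.8, 5.1, 2.9, 6.0, 4.7])     # IU/mL, hypothetical values
six_months = np.array([2.1, 3.0, 2.8, 1.5, 4.2, 2.6])   # IU/mL, hypothetical values

# Paired t-test: has the mean interferon-gamma level declined significantly?
t_stat, p_value = stats.ttest_rel(baseline, six_months)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# One-sided Wilcoxon signed-rank test as a non-parametric alternative when
# normality of the paired differences cannot be assumed.
w_stat, p_wilcoxon = stats.wilcoxon(baseline, six_months, alternative="greater")
print(f"Wilcoxon p = {p_wilcoxon:.4f}")
```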
Procedia PDF Downloads 308
26956 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components
Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea
Abstract:
Students' textual feedback can hold unique patterns and useful information about the learning process; it can hold information about advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using the part of speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment related) and the other containing the non-assessment related data; second, to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part of speech feature with the performance of the same algorithms using the n-grams feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The experimental results show that models using either feature performed very well on the first task, but on the second task, models that used the part of speech feature underperformed in comparison with models that used unigrams and bigrams. Keywords: assessment, part of speech, sentiment analysis, student feedback
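A minimal sketch of the part-of-speech feature idea is given below, assuming NLTK's Penn Treebank tagger and a scikit-learn classifier; the two feedback comments, their labels, and the choice of tagger are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: represent each comment by its PoS-tag frequencies and classify it.
import nltk
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

feedback = ["The exam questions were far too long for the time given.",
            "Great labs, but the lecture room is freezing."]
labels = ["assessment", "non-assessment"]        # hypothetical labels

def pos_features(text):
    # Frequency of each part-of-speech tag in the comment.
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    return Counter(tags)

X = DictVectorizer().fit_transform([pos_features(t) for t in feedback])
clf = LinearSVC().fit(X, labels)   # swap in decision tree, random forest or naive Bayes
```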
Procedia PDF Downloads 148
26955 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media
Authors: Jinghui Peng, Shanyu Tang, Jia Li
Abstract:
Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is an imminent demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data is then transferred from the time domain to the frequency domain through FFT. The distributions of power spectra in the frequency domain between original VoIP streams and stego VoIP streams are compared in turn using t-test, achieving the p-value of 7.5686E-176 which is below the threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting the secret data embedded in VoIP streaming media.Keywords: steganalysis, security, Fast Fourier Transform, streaming media
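The core frequency-domain comparison can be sketched as follows (Python with NumPy/SciPy); the "cover" and "stego" frames are synthetic placeholders, and the sketch omits the network sniffer and machine-parameter steps of the proposed algorithm.

```python
# Minimal sketch (not the authors' pipeline): compare the power spectra of two sets
# of frames with a two-sample t-test, as a toy FFT-based steganalysis example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
frame_len = 512
cover = rng.normal(size=(100, frame_len))                  # placeholder VoIP frames
stego = cover + rng.normal(scale=0.05, size=cover.shape)   # placeholder embedding noise

def mean_power(frames):
    spectrum = np.fft.rfft(frames, axis=1)        # time domain -> frequency domain
    return (np.abs(spectrum) ** 2).mean(axis=1)   # mean spectral power per frame

# A very small p-value suggests the two streams differ in their spectral statistics.
t_stat, p_value = stats.ttest_ind(mean_power(cover), mean_power(stego))
print(p_value)
```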
Procedia PDF Downloads 151
26954 Fuzzy-Genetic Algorithm Multi-Objective Optimization Methodology for Cylindrical Stiffened Tanks Conceptual Design
Authors: H. Naseh, M. Mirshams, M. Mirdamadian, H. R. Fazeley
Abstract:
This paper presents an extension of a fuzzy-genetic algorithm multi-objective optimization methodology that can effectively be used to find the overall satisfaction of objective functions (by selecting the design variables) in the early stages of the design process. The coupling of objective functions through design variables in an engineering design process results in difficulties in design optimization problems. In many cases, decision making on design variables conflicts with more than one discipline in system design. In space launch system conceptual design, decision making on some design variables (e.g. oxidizer-to-fuel mass flow rate, O/F) in the early stages of the design process is related both to the objective of the liquid propellant engine (specific impulse) and to that of the tanks (structural weight). Hence, the primary application of this methodology is the design of a liquid propellant engine with the maximum specific impulse and a cylindrical stiffened tank with the minimum weight. To this end, the design problem is formulated as a fuzzy rule set based on the designer's expert knowledge with a holistic approach. The independent design variables in this model are the oxidizer-to-fuel mass flow rate, thickness of stringers, thickness of rings, and shell thickness. To handle the mentioned problems, a fuzzy-genetic algorithm multi-objective optimization methodology is developed based on the Pareto optimal set. Consequently, the methodology is applied to one stage of a space launch system to illustrate the accuracy and efficiency of the proposed approach. Keywords: cylindrical stiffened tanks, multi-objective, genetic algorithm, fuzzy approach
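For readers unfamiliar with Pareto optimality, a minimal sketch of Pareto filtering for the two competing objectives named above (maximize specific impulse, minimize tank weight) is given below; the candidate values are invented, and the fuzzy rules and genetic operators of the actual methodology are omitted.

```python
# Minimal sketch (illustrative only): keep the non-dominated candidate designs,
# i.e. the Pareto optimal set for (specific impulse up, tank weight down).
def dominates(a, b):
    # a and b are (specific_impulse, weight) pairs; higher impulse and lower weight are better.
    return a[0] >= b[0] and a[1] <= b[1] and a != b

candidates = [(310.0, 950.0), (320.0, 980.0), (305.0, 900.0), (318.0, 940.0)]  # hypothetical
pareto_set = [c for c in candidates
              if not any(dominates(other, c) for other in candidates)]
print(pareto_set)   # the non-dominated designs form the Pareto optimal set
```

In the full methodology, a genetic algorithm would generate and evolve such candidates while fuzzy rules score the designer's overall satisfaction with each one.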
Procedia PDF Downloads 658
26953 Fabrication and Characterization of Ceramic Matrix Composite
Authors: Yahya Asanoglu, Celaletdin Ergun
Abstract:
Ceramic-matrix composites (CMC) have significant prominence in various engineering applications because of their heat resistance associated with an ability to withstand the brittle type of catastrophic failure. In this study, specific raw materials have been chosen for the purpose of having suitable CMC material for high-temperature dielectric applications. CMC material will be manufactured through the polymer infiltration and pyrolysis (PIP) method. During the manufacturing process, vacuum infiltration and autoclave will be applied so as to decrease porosity and obtain higher mechanical properties, although this advantage leads to a decrease in the electrical performance of the material. Time and temperature adjustment in pyrolysis parameters provide a significant difference in the properties of the resulting material. The mechanical and thermal properties will be investigated in addition to the measurement of dielectric constant and tangent loss values within the spectrum of Ku-band (12 to 18 GHz). Also, XRD, TGA/PTA analyses will be employed to prove the transition of precursor to ceramic phases and to detect critical transition temperatures. Additionally, SEM analysis on the fracture surfaces will be performed to see failure mechanism whether there is fiber pull-out, crack deflection and others which lead to ductility and toughness in the material. In this research, the cost-effectiveness and applicability of the PIP method will be proven in the manufacture of CMC materials while optimization of pyrolysis time, temperature and cycle for specific materials is detected by experiment. Also, several resins will be shown to be a potential raw material for CMC radome and antenna applications. This research will be distinguished from previous related papers due to the fact that in this research, the combination of different precursors and fabrics will be experimented with to specify the unique cons and pros of each combination. In this way, this is an experimental sum of previous works with unique PIP parameters and a guide to the manufacture of CMC radome and antenna.Keywords: CMC, PIP, precursor, quartz
Procedia PDF Downloads 163
26952 Privacy-Preserving Model for Social Network Sites to Prevent Unwanted Information Diffusion
Authors: Sanaz Kavianpour, Zuraini Ismail, Bharanidharan Shanmugam
Abstract:
Social Network Sites (SNSs) can serve as an invaluable platform to transfer information across a large number of individuals. A substantial component of communicating and managing information is to identify which individuals will influence others in propagating information, and also whether dissemination of information will occur in the absence of social signals about that information. Classifying the final audience of social data is difficult, as fully controlling the social contexts that transfer among individuals is not possible. Hence, undesirable information diffusion to an unauthorized individual on SNSs can threaten individuals' privacy. This paper highlights information diffusion in SNSs and, moreover, emphasizes the most significant privacy issues for individuals on SNSs. The goal of this paper is to propose a privacy-preserving model that treats individuals' data with due care in order to control the availability of data and improve privacy by providing access to the data for appropriate third parties without compromising the advantages of information sharing through SNSs. Keywords: anonymization algorithm, classification algorithm, information diffusion, privacy, social network sites
Procedia PDF Downloads 323
26951 Perception of Secondary Schools’ Students on Computer Education in Federal Capital Territory (FCT-Abuja), Nigeria
Authors: Salako Emmanuel Adekunle
Abstract:
Computer education is referred to as the knowledge and ability to use computers and related technology efficiently, with a range of skills covering levels from basic use to advance. Computer continues to make an ever-increasing impact on all aspect of human endeavours such as education. With numerous benefits of computer education, what are the insights of students on computer education? This study investigated the perception of senior secondary school students on computer education in Federal Capital Territory (FCT), Abuja, Nigeria. A sample of 7500 senior secondary schools students was involved in the study, one hundred (100) private and fifty (50) public schools within FCT. They were selected by using simple random sampling technique. A questionnaire [PSSSCEQ] was developed and validated through expert judgement and reliability co-efficient of 0.84 was obtained. It was used to gather relevant data on computer education. Findings confirmed that the students in the FCT had positive perception on computer education. Some factors were identified that affect students’ perception on computer education. The null hypotheses were tested using t-test and ANOVA statistical analyses at 0.05 level of significance. Based on these findings, some recommendations were made which include competent teachers should be employed into all secondary schools; this will help students to acquire relevant knowledge in computer education, technological supports should be provided to all secondary schools; this will help the users (students) to solve specific problems in computer education and financial supports should be provided to procure computer facilities that will enhance the teaching and the learning of computer education.Keywords: computer education, perception, secondary school, students
Procedia PDF Downloads 472
26950 The Effect of Nanotechnology Structured Water on Lower Urinary Tract Symptoms in Men with Benign Prostatic Hyperplasia: A Double-Blinded Randomized Study
Authors: Ali Kamal M. Sami, Safa Almukhtar, Alaa Al-Krush, Ismael Hama-Amin Akha Weas, Ruqaya Ahmed Alqais
Abstract:
Introduction and Objectives: Lower urinary tract symptoms (LUTS) are common among men with benign prostatic hyperplasia (BPH). The combination of 5 alpha-reductase inhibitors and alpha-blockers has been used as a conservative treatment of male LUTS secondary to BPH. Nanotechnology structured water (magnalife) is a type of water produced by modulators and specific frequency and energy fields that transform ordinary water into this nano-water. In this study, we evaluated the use of nano-water alongside the conservative treatment to see whether it improves the outcome and gives better results in patients with LUTS/BPH.
Material and Methods: For a period of 3 months, 200 men with International Prostate Symptom Score (IPSS) ≥ 13, maximum flow rate (Qmax) ≤ 15 ml/s, and prostate volume > 30 and < 80 cc were randomly divided into two groups. In group A, 100 men were given nano-water with tamsulosin-dutasteride, and in group B, 100 men were given ordinary bottled water with tamsulosin-dutasteride. The water bottles were unlabeled and were given in a daily dose of 20 ml/kg body weight, with dutasteride 0.5 mg and tamsulosin 0.4 mg in daily doses. Both groups were evaluated for IPSS, Qmax, residual urine (RU), and the International Index of Erectile Function–Erectile Function (IIEF-EF) domain at the beginning (baseline data) and at the end of the 3 months.
Results: Of the 200 men with LUTS who were included in this study, 193 men were followed, and 7 men dropped out of the study for different reasons. In group A, which included 97 men with LUTS, IPSS decreased by 16.82 (from 20.47 to 6.65) (P<0.00001), Qmax increased by 5.73 ml/s (from 11.71 to 17.44) (P<0.00001), RU was <50 ml in 88% of patients (P<0.00001), and IIEF-EF increased to 26.65 (from 16.85) (P<0.00001). In group B, with 96 men with LUTS, IPSS decreased by 8.74 (from 19.59 to 10.85) (P<0.00001), Qmax increased by 4.67 ml/s (from 10.74 to 15.41) (P<0.00001), RU was <50 ml in 75% of patients (P<0.00001), and IIEF-EF increased to 21 (from 15.87) (P<0.00001). Group A had better results than group B: IPSS in group A decreased to 6.65 vs 10.85 in group B (P<0.00001), Qmax increased to 17.44 in group A vs 15.41 in group B (P<0.00001), group A had RU <50 ml in 88% of patients vs 75% of patients in group B (P<0.00001), and group A had better IIEF-EF, which increased to 26.65 vs 21 in group B (P<0.00001). The differences between the baseline data of both groups were not statistically significant.
Conclusion: The use of nanotechnology structured water (magnalife) gives better results in terms of LUTS and scores in patients with BPH. This combination shows improvements in IPSS and even in erectile function in these men after 3 months. Keywords: nano water, lower urinary tract symptoms, benign prostatic hypertrophy, erectile dysfunction
Procedia PDF Downloads 75
26949 The Changes in Consumer Behavior and the Decision-making Process After Covid-19 in Greece
Authors: Markou Vasiliki, Serdaris Panagiotis
Abstract:
Consumer behavior and the decision-making process are affected by the factor of uncertainty. The onslaught of the Covid-19 pandemic has changed the consumer decision-making process in many ways. This change can be seen both in the buying process (how and where consumers shop) and in the types of goods and services they are looking for. In addition, due mainly to the economic uncertainty that came from this event, but also to its effects on both society and the economy in general, new consumer behaviors were created. Traditional forms of shopping are no longer a primary choice; consumers have turned to digital channels such as e-commerce and social media to fulfill their needs. The purpose of this article is to examine how much the consumer decision-making process has been affected after the pandemic and whether consumer behavior has changed. An online survey was conducted to examine the change in decision making. Essentially, the demographic factors that influence the decision-making process were examined, as well as social and economic factors. The research is divided into two parts. The first part comprised a literature review of the research that has been carried out to identify the factors, and in the second part the empirical investigation was carried out electronically using a questionnaire built with Google Forms. The questionnaire was divided into several sections. They included questions about consumer behavior, but mainly about how consumers make decisions today, whether those decisions have changed due to the pandemic, and whether those changes are permanent. Also, for decision-making, goods were divided into essential products, high-tech products, transactions with the state, and others. About 500 consumers aged between 18 and 75 participated in the research. The data was processed with both descriptive statistics and econometric models. The results showed that consumer behavior and the decision-making process have changed. Consumers now widely use the internet for shopping, and consumer behaviors and consumption patterns have changed. Social and economic factors play an important role; income, gender and other factors were found to be statistically significant. In addition, it is worth noting that the percentage of respondents who made purchases through the internet for the first time during the pandemic was remarkable and related to age. Essentially, the arrival of the pandemic caused uncertainty for individuals, mainly financial, and this affected the decision-making process. In addition, shopping through the internet is now the first choice, especially among young people, and it seems about to become established. Keywords: consumer behavior, decision making, COVID-19, Greece, behavior change
Procedia PDF Downloads 52
26948 Application Difference between Cox and Logistic Regression Models
Authors: Idrissa Kayijuka
Abstract:
The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event where censored cases exist, whereas the logistic regression model is mostly applicable in cases where the independent variables consist of numerical as well as nominal values while the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data, sourced from an SPSS exercise data set on breast cancer with a sample size of 1121 women, where the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually, i.e. on lymph node status, and SPSS software was used to analyze the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e. a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. The two models can be applied in many other studies since they are suitable methods for analyzing data, but the Cox regression model is the more recommended of the two. Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio
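The application difference can be illustrated with a small sketch, assuming the lifelines and statsmodels Python packages; the miniature data frame below is invented and is not the SPSS breast-cancer data set analysed in the paper.

```python
# Minimal sketch: Cox regression uses follow-up time (hazard ratio), logistic regression
# ignores it (odds ratio). The values below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":  [12, 30, 45, 8, 60, 24, 18, 50],   # follow-up time in months
    "event": [1,  0,  1,  0, 1,  1,  0,  1],    # 1 = died of breast cancer, 0 = censored
    "nodes": [4,  0,  2,  7, 1,  3,  0,  5],    # positive lymph nodes (hypothetical)
})

# Cox regression models the hazard over the follow-up time; exp(coef) is a hazard ratio.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary["exp(coef)"])

# Logistic regression drops the follow-up time; exp(coef) is an odds ratio.
logit = sm.Logit(df["event"], sm.add_constant(df[["nodes"]])).fit(disp=False)
print(np.exp(logit.params))
```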
Procedia PDF Downloads 462
26947 Text Mining of Twitter Data Using a Latent Dirichlet Allocation Topic Model and Sentiment Analysis
Authors: Sidi Yang, Haiyi Zhang
Abstract:
Twitter is a microblogging platform, where millions of users daily share their attitudes, views, and opinions. Using a probabilistic Latent Dirichlet Allocation (LDA) topic model to discern the most popular topics in the Twitter data is an effective way to analyze a large set of tweets to find a set of topics in a computationally efficient manner. Sentiment analysis provides an effective method to show the emotions and sentiments found in each tweet and an efficient way to summarize the results in a manner that is clearly understood. The primary goal of this paper is to explore text mining, extract and analyze useful information from unstructured text using two approaches: LDA topic modelling and sentiment analysis by examining Twitter plain text data in English. These two methods allow people to dig data more effectively and efficiently. LDA topic model and sentiment analysis can also be applied to provide insight views in business and scientific fields.Keywords: text mining, Twitter, topic model, sentiment analysis
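A minimal sketch of the two approaches is given below using scikit-learn's LDA implementation and a toy lexicon score; the four example tweets are invented, and a real sentiment step would use a proper lexicon or trained model rather than the two hand-picked word sets shown here.

```python
# Minimal sketch (not the authors' code): LDA topic extraction plus a toy polarity score.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["traffic downtown is terrible again",
          "loving the new coffee shop on main street",
          "terrible customer service at the airport today",
          "great weather for a run along the river"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]   # top words per topic
    print(f"topic {k}: {top}")

# Toy sentiment: positive minus negative lexicon hits per tweet.
positive, negative = {"loving", "great"}, {"terrible"}
for t in tweets:
    words = set(t.split())
    print(t, "->", len(words & positive) - len(words & negative))
```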
Procedia PDF Downloads 181
26946 Explaining Motivation in Language Learning: A Framework for Evaluation and Research
Authors: Kim Bower
Abstract:
Evaluating and researching motivation in language learning is a complex and multi-faceted activity. Various models for investigating learner motivation have been proposed in the literature, but no one model supplies a complex and coherent model for investigating a range of motivational characteristics. Here, such a methodological framework, which includes exemplification of sources of evidence and potential methods of investigation, is proposed. The process model for the investigation of motivation within language learning settings proposed is based on a complex dynamic systems perspective that takes account of cognition and affects. It focuses on three overarching aspects of motivation: the learning environment, learner engagement and learner identities. Within these categories subsets are defined: the learning environment incorporates teacher, course and group specific aspects of motivation; learner engagement addresses the principal characteristics of learners' perceived value of activities, their attitudes towards language learning, their perceptions of their learning and engagement in learning tasks; and within learner identities, principal characteristics of self-concept and mastery of the language are explored. Exemplifications of potential sources of evidence in the model reflect the multiple influences within and between learner and environmental factors and the possible changes in both that may emerge over time. The model was initially developed as a framework for investigating different models of Content and Language Integrated Learning (CLIL) in contrasting contexts in secondary schools in England. The study, from which examples are drawn to exemplify the model, aimed to address the following three research questions: (1) in what ways does CLIL impact on learner motivation? (2) what are the main elements of CLIL that enhance motivation? and (3) to what extent might these be transferable to other contexts? This new model has been tried and tested in three locations in England and reported as case studies. Following an initial visit to each institution to discuss the qualitative research, instruments were developed according to the proposed model. A questionnaire was drawn up and completed by one group prior to a 3-day data collection visit to each institution, during which interviews were held with academic leaders, the head of the department, the CLIL teacher(s), and two learner focus groups of six-eight learners. Interviews were recorded and transcribed verbatim. 2-4 naturalistic observations of lessons were undertaken in each setting, as appropriate to the context, to provide colour and thereby a richer picture. Findings were subjected to an interpretive analysis by the themes derived from the process model and are reported elsewhere. The model proved to be an effective and coherent framework for planning the research, instrument design, data collection and interpretive analysis of data in these three contrasting settings, in which different models of language learning were in place. It is hoped that the proposed model, reported here together with exemplification and commentary, will enable teachers and researchers in a wide range of language learning contexts to investigate learner motivation in a systematic and in-depth manner.Keywords: investigate, language-learning, learner motivation model, dynamic systems perspective
Procedia PDF Downloads 273
26945 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map
Authors: G. Tamilpavai, C. Vishnuppriya
Abstract:
Bioinformatics is an active research area that combines biological subject matter with computer science research. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. Computation of the LCSS plays a vital role in biomedicine and is an essential task in DNA sequence analysis in genetics, covering a wide range of disease-diagnosis steps. The objective of the proposed system is to find the longest common subsequence present in normal and various disease-affected human DNA sequences using a Self Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each human DNA sequence is separated into k-mers using a k-mer separation rule. Mean and median values are calculated for each separated k-mer. These calculated values are fed as input to the Self Organizing Map for the purpose of clustering. The obtained clusters are then given to the Longest Common Subsequence (LCSS) algorithm to find common subsequences present in every cluster. It returns n×(n-1)/2 subsequences for each cluster, where n is the number of k-mers in a specific cluster. Experimental outcomes of the proposed system produce the possible number of longest common subsequences of normal and disease-affected DNA data. Thus the proposed system will be a good initial aid for finding disease-causing sequences. Finally, performance analysis is carried out for different DNA sequences. The obtained values show that retrieval of the LCSS is done in a shorter time than with the existing system. Keywords: clustering, k-mers, longest common subsequence, SOM
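A minimal sketch of the k-mer separation and the classic dynamic-programming longest common subsequence is given below; the two DNA fragments are hypothetical, the non-overlapping split is only one possible k-mer separation rule, and the SOM clustering stage of the paper is omitted.

```python
# Minimal sketch (illustrative, not the authors' implementation).
def kmers(seq, k=3):
    # One possible separation rule: consecutive, non-overlapping k-mers.
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]

def lcs(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming table storing subsequences.
    dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + x if x == y
                                else max(dp[i][j + 1], dp[i + 1][j], key=len))
    return dp[-1][-1]

normal = "ATGCGTACGTTAGC"     # hypothetical normal fragment
disease = "ATGCTTACGTAGGC"    # hypothetical disease-affected fragment
print(kmers(normal))          # the k-mers that would feed the SOM stage
print(lcs(normal, disease))   # longest common subsequence of the two fragments
```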
Procedia PDF Downloads 271
26944 Navigating Government Finance Statistics: Effortless Retrieval and Comparative Analysis through Data Science and Machine Learning
Authors: Kwaku Damoah
Abstract:
This paper presents a methodology and software application (App) designed to empower users in accessing, retrieving, and comparatively exploring data within the hierarchical network framework of the Government Finance Statistics (GFS) system. It explores the ease of navigating the GFS system and identifies the gaps filled by the new methodology and App. The GFS, embodies a complex Hierarchical Network Classification (HNC) structure, encapsulating institutional units, revenues, expenses, assets, liabilities, and economic activities. Navigating this structure demands specialized knowledge, experience, and skill, posing a significant challenge for effective analytics and fiscal policy decision-making. Many professionals encounter difficulties deciphering these classifications, hindering confident utilization of the system. This accessibility barrier obstructs a vast number of professionals, students, policymakers, and the public from leveraging the abundant data and information within the GFS. Leveraging R programming language, Data Science Analytics and Machine Learning, an efficient methodology enabling users to access, navigate, and conduct exploratory comparisons was developed. The machine learning Fiscal Analytics App (FLOWZZ) democratizes access to advanced analytics through its user-friendly interface, breaking down expertise barriers.Keywords: data science, data wrangling, drilldown analytics, government finance statistics, hierarchical network classification, machine learning, web application.
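The kind of drill-down navigation the App automates can be sketched with a nested structure; the codes and figures below are invented for illustration and do not reflect actual GFS classifications or data, and the paper's own implementation is in R rather than Python.

```python
# Minimal sketch: a slice of a hierarchical classification as nested dictionaries,
# with a helper that descends along a path of classification codes.
gfs = {
    "2 Expense": {
        "21 Compensation of employees": {"211 Wages and salaries": 1200.0,
                                         "212 Social contributions": 300.0},
        "22 Use of goods and services": 850.0,
    },
}

def drilldown(tree, path):
    node = tree
    for code in path:
        node = node[code]      # descend one level per classification code
    return node

print(drilldown(gfs, ["2 Expense", "21 Compensation of employees"]))
```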
Procedia PDF Downloads 74
26943 Comparative Analysis of the Treatment of Okra Seed and Soy Beans Oil with Crude Enzyme Extract from Malted Rice
Authors: Eduzor Esther, Uhiara Ngozi, Ya’u Abubakar Umar, Anayo Jacob Gabriel, Umar Ahmed
Abstract:
The study investigated the characteristic effect of treating okra seed and soybean seed oil with crude enzyme extract from malted rice. The oils from okra seeds and soybeans were obtained by a solvent extraction method using n-hexane. Soybean seeds had a higher percentage oil yield than okra seeds. 250 ml of each oil was thoroughly mixed with 5 ml of the malted rice extract at 40 °C for 5 min, then filtered and regarded as treated oil, while another batch of 250 ml of each oil was not mixed with the malted rice extract and regarded as untreated oil. All the oils were analyzed for specific gravity, refractive index, emulsification capacity, absorptivity, TSS and viscosity. Treated okra seed and soybean oil gave higher values for specific gravity than the untreated okra seed and soybean oils, respectively. The emulsification capacity values were also higher for the treated oils than for the untreated okra seed and soybean oils, respectively. Treated okra seed and soybean oil also had a higher range of values for absorptivity than the untreated okra seed and soybean oils, respectively. The ranges of TSS values of the treated oils were also higher than those of the untreated okra seed and soybean oils, respectively. The viscosity results showed that the treated oils had higher values than the untreated okra seed and soybean oils, respectively. However, the refractive index results showed that the untreated oils had higher ranges of values than the treated okra seed and soybean oils, respectively. Treated oil showed better quality in respect of the parameters analysed, except the refractive index, which was slightly lower but still within the range of the standard; the oils are high in unsaturation, especially okra oil when compared with soybean oil. It is recommended that treated okra seed and soybean oil can serve better than many oils presently in use, such as groundnut oil, palm oil and cottonseed oil. Keywords: extract, malted, oil, okra, rice, seed, soybeans
Procedia PDF Downloads 448
26942 Potential of Palm Oil Mill Effluent in Algae Cultivation for Biodiesel Production
Authors: Nur Azreena Idris, Soh Kheang Loh, Harrison Lau Lik Nang, Yuen May Choo, Eminour Muzalina Mustafa, Vijaysri Vello, Cheng Yau Tan, Siew Moi Phang
Abstract:
It is estimated that about 0.65-0.67 m³ of palm oil mill effluent (POME) is generated when one tonne of fresh fruit bunches is processed. Owing to the high content of nutrients in POME, it has high potential as a medium for microalgae growth. This study attempted to determine the growth rate, biomass productivity and biochemical composition of microalgae (Chlorella sp.) grown in different POME concentrations, i.e. 6.25%, 12.5%, 25% and 50%, under outdoor conditions using a 200-mL capacity high rate algae pond (HRAP) and 2 closed photobioreactors (PBRs), i.e. annular and flat panel. The strain Chlorella sp. grown on 12.5% POME in the flat panel PBR exhibited the highest specific growth rate of 0.32/day and biomass productivity (27.1 mg/L/day), followed by those in the HRAP and annular PBR. It was further shown that good growth of Chlorella sp. in 12.5% POME could sufficiently reduce the nutrients of POME such as phosphate (PO4), nitrate (NO3), nitrite (NO2) and chemical oxygen demand (COD). The algal oil extracted from the POME culture showed that saturated fatty acids decreased while polyunsaturated fatty acids increased compared to algae cultured in standard culture medium (Bold's Basal medium). The biochemical compositions of the algae grown in the flat panel PBR were the highest, with lipid, protein and carbohydrate productivity of 17.91 mg/L/day, 34.65 mg/L/day and 21.44 mg/L/day, respectively. Microalgae cultivation in diluted POME not only showed potential as a biodiesel feedstock based on the fatty acid profile but also the ability to reduce pollutants, e.g. PO4, NO3, NO2 and COD, in biological wastewater treatment. Keywords: wastewater treatment, photobioreactors, biomass productivity, specific growth rate
Procedia PDF Downloads 267
26941 Effective Internal Control System in the Nasarawa State Tertiary Educational Institutions for Efficiency- A Case of Nasarawa State Polytechnic Lafia
Authors: Dauda Ibrahim Adagye
Abstract:
An effective internal control system in the bursary unit of tertiary educational institutions is geared toward achieving a quality teaching, learning, and research environment and also assists the management of the institutions, particularly when decisions are to be made. While an internal control system exists in all institutions, the objectives outlined above are far from being achieved. The paper therefore assesses the effectiveness of the internal control system in tertiary educational institutions in Nasarawa State, Nigeria, with a specific focus on the Nasarawa State Polytechnic, Lafia. The study is a survey; hence, a simple closed-ended questionnaire was developed and administered to a sample of twenty-seven (27) staff members from the Bursary and the internal audit unit of the Nasarawa State Polytechnic, Lafia, to obtain data for analysis and to test the study hypothesis. Responses from the questionnaire were analyzed using simple percentages and chi-square. Findings show that the right people are not assigned to the right jobs in the department, that budgeting and management accounting were never used in the institution's operations, and that checking of subordinates by their superior officers is not regular. This renders the current internal control structure of the Polytechnic ineffective and weak. The paper therefore recommends that transparency should be treated as significant as the institution works toward meeting its objectives, which means that the right staff should be assigned to the right jobs and regular checking of subordinates by their superiors should be ensured. Keywords: internal control, tertiary educational institutions, efficiency
Procedia PDF Downloads 216
26940 Value Chain Based New Business Opportunity
Authors: Seonjae Lee, Sungjoo Lee
Abstract:
Discovering new business opportunities is necessary to remain competitive in the current business environment; companies survive rapidly changing industry conditions by adopting new business strategies and reducing technology challenges. Traditionally, two methods are used to uncover new businesses. The first is qualitative analysis of expert opinion, through which opportunities are gathered; in the second, new technologies are discovered through quantitative data analysis of patents. The second method increases time and cost, and patent data is restricted in its use for the purpose of discovering business opportunities. This study presents new business opportunities in a form customized to a company's characteristics (sector, size, etc.) by reviewing the value chain perspective, thereby contributing to the creation of new business opportunities through the proposed model. It utilizes the trademark database of the Korean Intellectual Property Office (KIPO) and the proprietary company information database of the Korea Enterprise Data (KED). These data are key to discovering new business opportunities through analysis of competitors and advanced business trademarks (Module 1) and trading analysis of competitors found in the KED (Module 2). Keywords: value chain, trademark, trading analysis, new business opportunity
Procedia PDF Downloads 380
26939 Development of an Interactive and Robust Image Analysis and Diagnostic Tool in R for Early Detection of Cervical Cancer
Authors: Kumar Dron Shrivastav, Ankan Mukherjee Das, Arti Taneja, Harpreet Singh, Priya Ranjan, Rajiv Janardhanan
Abstract:
Cervical cancer is one of the most common cancer among women worldwide which can be cured if detected early. Manual pathology which is typically utilized at present has many limitations. The current gold standard for cervical cancer diagnosis is exhaustive and time-consuming because it relies heavily on the subjective knowledge of the oncopathologists which leads to mis-diagnosis and missed diagnosis resulting false negative and false positive. To reduce time and complexities associated with early diagnosis, we require an interactive diagnostic tool for early detection particularly in developing countries where cervical cancer incidence and related mortality is high. Incorporation of digital pathology in place of manual pathology for cervical cancer screening and diagnosis can increase the precision and strongly reduce the chances of error in a time-specific manner. Thus, we propose a robust and interactive cervical cancer image analysis and diagnostic tool, which can categorically process both histopatholgical and cytopathological images to identify abnormal cells in the least amount of time and settings with minimum resources. Furthermore, incorporation of a set of specific parameters that are typically referred to for identification of abnormal cells with the help of open source software -’R’ is one of the major highlights of the tool. The software has the ability to automatically identify and quantify the morphological features, color intensity, sensitivity and other parameters digitally to differentiate abnormal from normal cells, which may improve and accelerate screening and early diagnosis, ultimately leading to timely treatment of cervical cancer.Keywords: cervical cancer, early detection, digital Pathology, screening
Procedia PDF Downloads 180
26938 Towards Addressing the Cultural Snapshot Phenomenon in Cultural Mapping Libraries
Authors: Mousouris Spiridon, Kavakli Evangelia
Abstract:
This paper focuses on Digital Libraries (DLs) that contain and geovisualise cultural data, highlighting the need to define them as a separate category termed Cultural Mapping Libraries, based on their inherent connection of culture with geographic location and their design requirements in support of visual representation of cultural data on the map. An exploratory analysis of DLs that conform to the above definition brought forward the observation that existing Cultural Mapping Libraries fail to geovisualise the entirety of cultural data per point of interest thus resulting in a Cultural Snapshot phenomenon. The existence of this phenomenon was reinforced by the results of a systematic bibliographic research. In order to address the Cultural Snapshot, this paper proposes the use of the Semantic Web principles to efficiently interconnect spatial cultural data through time, per geographic location. In this way points of interest are transformed into scenery where culture evolves over time. This evolution is expressed as occurrences taking place chronologically, in an event oriented approach, a conceptualization also endorsed by the CIDOC Conceptual Reference Model (CIDOC CRM). In particular, we posit the use of CIDOC CRM as the baseline for defining the logic of Cultural Mapping Libraries as part of the Culture Domain in accordance with the Digital Library Reference Model, in order to define the rules of cultural data management by the system. Our future goal is to transform this conceptual definition in to inferencing rules that resolve the Cultural Snapshot and lead to a more complete geovisualisation of cultural data.Keywords: digital libraries, semantic web, geovisualization, CIDOC-CRM
Procedia PDF Downloads 114
26937 Ubiquitous Learning Environments in Higher Education: A Scoping Literature Review
Authors: Mari A. Virtanen, Elina Haavisto, Eeva Liikanen, Maria Kääriäinen
Abstract:
Ubiquitous learning and the use of ubiquitous learning environments herald a new era in higher education. Ubiquitous environments fuse together authentic learning situations and digital learning spaces where students can seamlessly immerse themselves into the learning process. Definitions of ubiquitous learning are wide and vary in the previous literature and learning environments are not systemically described. The aim of this scoping review was to identify the criteria and the use of ubiquitous learning environments in higher education contexts. The objective was to provide a clear scope and a wide view for this research area. The original studies were collected from nine electronic databases. Seven publications in total were defined as eligible and included in the final review. An inductive content analysis was used for the data analysis. The reviewed publications described the use of ubiquitous learning environments (ULE) in higher education. Components, contents and outcomes varied between studies, but there were also many similarities. In these studies, the concept of ubiquitousness was defined as context-awareness, embeddedness, content-personalization, location-based, interactivity and flexibility and these were supported by using smart devices, wireless networks and sensing technologies. Contents varied between studies and were customized to specific uses. Measured outcomes in these studies were focused on multiple aspects as learning effectiveness, cost-effectiveness, satisfaction, and usefulness. This study provides a clear scope for ULE used in higher education. It also raises the need for transparent development and publication processes, and for practical implications of ubiquitous learning environments.Keywords: higher education, learning environment, scoping review, ubiquitous learning, u-learning
Procedia PDF Downloads 271
26936 Turin, from Factory City to Talents Power Player: The Role of Private Philanthropy Agents of Innovation in the Revolution of Human Capital Market in the Contemporary Socio-Urban Scenario
Authors: Renato Roda
Abstract:
With the emergence of the so-called 'Knowledge Society', the implementation of policies to attract, grow and retain talents, in an academic context as well, has become critical –both in the perspective of didactics and research and as far as administration and institutional management are concerned. At the same time, the contemporary philanthropic entities/organizations, which are evolving from traditional types of social support towards new styles of aid, envisaged to go beyond mere monetary donations, face the challenge of brand-new forms of complexity in supporting such specific dynamics of the global human capital market. In this sense, it becomes unavoidable for the philanthropic foundation, while carrying out their daily charitable tasks, to resort to innovative ways to facilitate the acquisition and the promotion of talents by academic and research institutions. In order to deepen such a specific perspective, this paper features the case of Turin, former 'factory city' of Italy’s North West, headquarters -and main reference territory- of Italy’s largest and richest private formerly bank-based philanthropic foundation, the Fondazione Compagnia di San Paolo. While it was assessed and classified as 'medium' in the city Global Talent Competitiveness Index (GTCI) of 2020, Turin has nevertheless acquired over the past months status of impact laboratory for a whole series of innovation strategies in the competition for the acquisition of excellence human capital. Leading actors of this new city vision are the foundations with their specifically adjusted financial engagement and a consistent role of stimulus towards innovation for research and education institutions.Keywords: human capital, post-Fordism, private foundation, war on talents
Procedia PDF Downloads 175
26935 An Evaluation of the Impact of E-Banking on Operational Efficiency of Banks in Nigeria
Authors: Ibrahim Rabiu Darazo
Abstract:
The research examines the impact of e-banking on the operational efficiency of banks in Nigeria, using the case of selected banks (Diamond Bank Plc, GTBank Plc, and Fidelity Bank Plc). It is quantitative research which uses both primary and secondary sources of data collection. Questionnaires were used to obtain accurate data: 150 questionnaires were distributed among staff and customers of the three banks, and the data collected were analysed using chi-square, while the secondary data were obtained from relevant textbooks, journals and websites. It is clear from the findings that the use of e-banking has improved the efficiency of these banks in terms of providing efficient services to customers electronically through Internet banking, telephone banking and ATMs, reducing the time taken to serve customers, allowing new customers to open accounts online, and giving customers access to their accounts at all times (24/7). E-banking provides access to customer information from the database, and the costs of cheques and postage were eliminated using e-banking. The recommendations at the end of the research include: the banks should update their electronic equipment, e-fraud (internal and external) should be controlled, banks should employ qualified manpower, and biometric ATMs should be introduced to reduce fraud involving ATM cards, as is done in other countries such as the USA. Keywords: banks, electronic banking, operational efficiency of banks, biometric ATMs
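As an illustration of the chi-square analysis mentioned above, a minimal sketch is given below; the 2x2 contingency counts are invented, not the survey's results, and the grouping into users and non-users is an assumption made for the example.

```python
# Minimal sketch: chi-square test of independence between e-banking use and
# perceived service efficiency, on a hypothetical contingency table.
import numpy as np
from scipy.stats import chi2_contingency

#                     efficient  not efficient
observed = np.array([[62, 13],            # e-banking users (hypothetical counts)
                     [41, 34]])           # non-users (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```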
Procedia PDF Downloads 336
26934 Laboratory Indices in Late Childhood Obesity: The Importance of DONMA Indices
Authors: Orkide Donma, Mustafa M. Donma, Muhammet Demirkol, Murat Aydin, Tuba Gokkus, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Obesity in childhood establishes a ground for adulthood obesity. Especially morbid obesity is an important problem for the children because of the associated diseases such as diabetes mellitus, cancer and cardiovascular diseases. In this study, body mass index (BMI), body fat ratios, anthropometric measurements and ratios were evaluated together with different laboratory indices upon evaluation of obesity in morbidly obese (MO) children. Children with nutritional problems participated in the study. Written informed consent was obtained from the parents. Study protocol was approved by the Ethics Committee. Sixty-two MO girls aged 129.5±35.8 months and 75 MO boys aged 120.1±26.6 months were included into the scope of the study. WHO-BMI percentiles for age-and-sex were used to assess the children with those higher than 99th as morbid obesity. Anthropometric measurements of the children were recorded after their physical examination. Bio-electrical impedance analysis was performed to measure fat distribution. Anthropometric ratios, body fat ratios, Index-I and Index-II as well as insulin sensitivity indices (ISIs) were calculated. Girls as well as boys were binary grouped according to homeostasis model assessment-insulin resistance (HOMA-IR) index of <2.5 and >2.5, fasting glucose to insulin ratio (FGIR) of <6 and >6 and quantitative insulin sensitivity check index (QUICKI) of <0.33 and >0.33 as the frequently used cut-off points. They were evaluated based upon their BMIs, arms, legs, trunk, whole body fat percentages, body fat ratios such as fat mass index (FMI), trunk-to-appendicular fat ratio (TAFR), whole body fat ratio (WBFR), anthropometric measures and ratios [waist-to-hip, head-to-neck, thigh-to-arm, thigh-to-ankle, height/2-to-waist, height/2-to-hip circumference (C)]. SPSS/PASW 18 program was used for statistical analyses. p≤0.05 was accepted as statistically significance level. All of the fat percentages showed differences between below and above the specified cut-off points in girls when evaluated with HOMA-IR and QUICKI. Differences were observed only in arms fat percent for HOMA-IR and legs fat percent for QUICKI in boys (p≤ 0.05). FGIR was unable to detect any differences for the fat percentages of boys. Head-to-neck C was the only anthropometric ratio recommended to be used for all ISIs (p≤0.001 for both girls and boys in HOMA-IR, p≤0.001 for girls and p≤0.05 for boys in FGIR and QUICKI). Indices which are recommended for use in both genders were Index-I, Index-II, HOMA/BMI and log HOMA (p≤0.001). FMI was also a valuable index when evaluated with HOMA-IR and QUICKI (p≤0.001). The important point was the detection of the severe significance for HOMA/BMI and log HOMA while they were evaluated also with the other indices, FGIR and QUICKI (p≤0.001). These parameters along with Index-I were unique at this level of significance for all children. In conclusion, well-accepted ratios or indices may not be valid for the evaluation of both genders. This study has emphasized the limiting properties for boys. This is particularly important for the selection process of some ratios and/or indices during the clinical studies. Gender difference should be taken into consideration for the evaluation of the ratios or indices, which will be recommended to be used particularly within the scope of obesity studies.Keywords: anthropometry, childhood obesity, gender, insulin sensitivity index
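For reference, the three insulin sensitivity indices named above are commonly computed as sketched below; the formulas, constants and units follow the standard literature definitions rather than this study's own protocol, and the example fasting values are hypothetical.

```python
# Minimal sketch of the commonly published formulas (check against the study protocol before reuse).
import math

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    # HOMA-IR with glucose in mg/dL; equivalent to glucose (mmol/L) * insulin / 22.5.
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl, insulin_uU_ml):
    # QUICKI = 1 / (log10 insulin + log10 glucose), glucose in mg/dL, insulin in uU/mL.
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def fgir(glucose_mg_dl, insulin_uU_ml):
    # Fasting glucose-to-insulin ratio.
    return glucose_mg_dl / insulin_uU_ml

g, i = 92.0, 14.0                         # hypothetical fasting glucose and insulin
print(homa_ir(g, i), quicki(g, i), fgir(g, i))
# Cut-off points used in the abstract: HOMA-IR 2.5, FGIR 6 and QUICKI 0.33.
```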
Procedia PDF Downloads 358
26933 Optimize Data Evaluation Metrics for Fraud Detection Using Machine Learning
Authors: Jennifer Leach, Umashanger Thayasivam
Abstract:
The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, though, as society’s knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate people. This has led to a simultaneous advancement in the world of fraud. Machine learning techniques can offer a possible solution to help decrease this advancement. This research explores how the use of various machine learning techniques can aid in detecting fraudulent activity across two different types of fraudulent data, and the accuracy, precision, recall, and F1 were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which testing split and technique would lead to the most optimal results.Keywords: data science, fraud detection, machine learning, supervised learning
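A minimal sketch of recording the four metrics across several train/test splits is given below (scikit-learn); the synthetic, class-imbalanced data set and the single random forest model stand in for the fraud data sets and the range of techniques actually explored in the paper.

```python
# Minimal sketch: evaluate one classifier over five train/test splits and record
# accuracy, precision, recall and F1 for each split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

for test_size in (0.1, 0.2, 0.3, 0.4, 0.5):            # five different splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size,
                                              stratify=y, random_state=0)
    y_hat = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)
    print(test_size,
          accuracy_score(y_te, y_hat), precision_score(y_te, y_hat),
          recall_score(y_te, y_hat), f1_score(y_te, y_hat))
```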
Procedia PDF Downloads 201
26932 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria
Authors: O. O. Aiyelokun, O. A. Agbede
Abstract:
Numerical modelling of groundwater flow can be susceptible to calibration errors due to a lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence, for models to be adopted as decision support tools for sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, the need to adopt satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data for computing boundary conditions, by determining whether ground-based and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for the period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with CFSR data using Goodness of Fit (GOF) statistics. The study revealed that, based on the mean absolute error (MAE), coefficient of correlation (r) and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity and rainfall had a high range of the index of agreement (d) and ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of the river basins are partially gauged and characterized by long periods of missing hydro-meteorological data. Keywords: boundary condition, goodness of fit, groundwater, satellite-based data
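The goodness-of-fit statistics named above can be computed as sketched below; the observation and CFSR series are invented placeholders, and the index of agreement follows Willmott's usual formulation, which is an assumption about the exact variant used in the study.

```python
# Minimal sketch: GOF statistics comparing station observations with CFSR estimates.
import numpy as np

obs = np.array([24.1, 25.3, 26.0, 27.2, 26.8, 25.5])   # hypothetical station values
sim = np.array([23.8, 25.0, 26.4, 27.0, 27.1, 25.2])   # hypothetical CFSR values

mae = np.mean(np.abs(obs - sim))                        # mean absolute error
r = np.corrcoef(obs, sim)[0, 1]                         # coefficient of correlation
r2 = r ** 2                                             # coefficient of determination
rsd = np.std(sim) / np.std(obs)                         # ratio of standard deviations
d = 1 - np.sum((obs - sim) ** 2) / np.sum(
    (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)  # Willmott index of agreement
print(mae, r, r2, rsd, d)
```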
Procedia PDF Downloads 132
26931 Teaching Practices for Subverting Significant Retentive Learner Errors in Arithmetic
Authors: Michael Lousis
Abstract:
The systematic identification of the most conspicuous and significant errors made by learners during three-years of testing of their progress in learning Arithmetic throughout the development of the Kassel Project in England and Greece was accomplished. How much retentive these errors were over three-years in the officially provided school instruction of Arithmetic in these countries has also been shown. The learners’ errors in Arithmetic stemmed from a sample, which was comprised of two hundred (200) English students and one hundred and fifty (150) Greek students. The sample was purposefully selected according to the students’ participation in each testing session in the development of the three-year project, in both domains simultaneously in Arithmetic and Algebra. Specific teaching practices have been invented and are presented in this study for subverting these learners’ errors, which were found out to be retentive to the level of the nationally provided mathematical education of each country. The invention and the development of these proposed teaching practices were founded on the rationality of the theoretical accounts concerning the explanation, prediction and control of the errors, on the conceptual metaphor and on an analysis, which tried to identify the required cognitive components and skills of the specific tasks, in terms of Psychology and Cognitive Science as applied to information-processing. The aim of the implementation of these instructional practices is not only the subversion of these errors but the achievement of the mathematical competence, as this was defined to be constituted of three elements: appropriate representations - appropriate meaning - appropriately developed schemata. However, praxis is of paramount importance, because there is no independent of science ‘real-truth’ and because praxis serves as quality control when it takes the form of a cognitive method.Keywords: arithmetic, cognitive science, cognitive psychology, information-processing paradigm, Kassel project, level of the nationally provided mathematical education, praxis, remedial mathematical teaching practices, retentiveness of errors
Procedia PDF Downloads 318
26930 Smoking Elevates the Risk of Dysbiosis Associated with Dental Decay
Authors: Razia Hossaini, Maryam Hosseini
Abstract:
Background and Objective: The impact of smoking on the shift in oral microbial composition has been questioned. This study aims to compare the oral microbiome between Turkish patients with dental caries and healthy individuals. Materials and Methods: An observational case-control study was conducted from January to June 2024, involving 270 young adults (180 with dental caries and 90 healthy controls). Participants were matched by age, gender, education, sugar consumption, and tooth brushing habits. Oral samples were collected using sterilized swabs and preserved in a PBS-glycerol solution. The cultured bacterial samples were characterized based on their morphological characteristics, Gram staining properties, hemolysis patterns, and biochemical tests including methyl red, sugar fermentation, Simmons citrate utilization, coagulase production, and catalase activity. These tests were conducted to accurately identify the bacterial species present. Subsequently, the relationship between smoking and oral health was evaluated, with a particular focus on assessing the smoking-induced changes in the composition of the oral microbiota using statistical analyses. Results: The study’s results demonstrate a clear association between smoking and an increased risk of dental caries, as well as significant shifts in the oral microbiota of smokers (p=0.04). These findings emphasize the critical need for public health initiatives that target smoking cessation as a means of improving oral health outcomes. Since smokers are 1.28 times more likely to develop dental caries than non-smokers, public health campaigns should incorporate messages that highlight the direct impact of smoking on oral health, alongside the well-established risks such as lung disease and cardiovascular conditions.The observed alterations in the oral microbiota—specifically the higher prevalence of pathogens like Escherichia coli, Pseudomonas aeruginosa, Streptococcus mutans, and Lactobacillus acidophilus in patients with dental caries—suggest that smoking not only predisposes individuals to dental decay but also creates an environment conducive to the growth of harmful bacteria. Public health interventions could therefore focus on the dual benefit of smoking cessation: reducing the incidence of dental caries and restoring a healthier oral microbiome. Additionally, the reduced presence of beneficial or less pathogenic species such as Neisseria and Micrococcus luteus in smokers implies that smoking alters the protective balance of the oral microbiome. This further underscores the importance of preventive oral health strategies tailored to smokers. Conclusion: Smoking significantly impacts oral health by promoting dysbiosis, increasing cariogenic bacteria, and reducing beneficial bacteria, which contributes to the development of dental caries. These findings highlight the need for integrated public health efforts that address both smoking cessation and oral health promotion. By raising awareness of the specific oral health risks associated with smoking, public health initiatives could help reduce the burden of dental caries and other smoking-related oral diseases, ultimately improving quality of life for individuals and reducing healthcare costs.Keywords: smoking, dysbiosis, bacteria, oral health, dental decay
Procedia PDF Downloads 23