Search results for: incidental information processing
11714 The Relationship of Building Information Modeling (BIM) Capability in Quantity Surveying Practice and Project Performance
Authors: P. F. Wong, H. Salleh, F. A. Rahim
Abstract:
The adoption of building information modeling (BIM) is increasing in the construction industry. However, quantity surveyors are slow in adoption compared to other professions due to a lack of awareness of BIM's potential in their profession. It is still unclear how BIM application can enhance quantity surveyors' work performance and project performance. The aim of this research is to identify the capabilities of BIM in quantity surveying practices and examine the relationship between BIM capabilities and project performance. A questionnaire survey and interviews were adopted for data collection. Literature reviews identified eleven BIM capabilities in quantity surveying practice. Questionnaire results showed that several BIM capabilities are significantly correlated with project performance in time, cost, and quality aspects, and the results were validated through interviews. These findings show that BIM has the capabilities to enhance quantity surveyors' performance and subsequently improve project performance.
Keywords: Building Information Modeling (BIM), quantity surveyors, capability, project performance
Procedia PDF Downloads 369
11713 Designing a Model for Preparing Reports on the Automatic Earned Value Management Progress by the Integration of Primavera P6, SQL Database, and Power BI: A Case Study of a Six-Storey Concrete Building in Mashhad, Iran
Authors: Hamed Zolfaghari, Mojtaba Kord
Abstract:
Project planners and controllers are frequently faced with the challenge of inadequate software for the preparation of automatic project progress reports based on actual project information updates. They usually build dashboards in Microsoft Excel, which is local and cannot be accessed online. Another shortcoming is that Excel is not linked to planning software such as Microsoft Project and lacks the database required for data storage. This study aimed to propose a model for the preparation of automatic online project progress reports based on actual project information updates by integrating Primavera P6, an SQL database, and Power BI for a construction project. The designed model enables project planners and controllers to prepare project reports automatically and immediately after updating the project schedule with actual information. To develop the model, the data were entered into P6, and the information was stored in the SQL database. The proposed model could prepare a wide range of reports, such as earned value management, HR, financial, physical, and risk reports, automatically in the Power BI application. Furthermore, the reports could be published and shared online.
Keywords: Primavera P6, SQL, Power BI, EVM, integration management
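As an illustration of the earned value management quantities such reports surface, the standard indicators can be derived from a handful of schedule and cost fields. A minimal sketch; the figures and field layout are hypothetical, not taken from the case study, where these values would come from the P6-populated SQL database:

```python
# Minimal earned value management (EVM) sketch.
# Inputs are illustrative; a real pipeline would read them from the
# P6-populated SQL database rather than hard-code them.

def evm_indicators(bac, pct_complete, planned_value, actual_cost):
    """Return the standard EVM metrics for one reporting period."""
    ev = bac * pct_complete          # earned value
    cv = ev - actual_cost            # cost variance
    sv = ev - planned_value          # schedule variance
    cpi = ev / actual_cost           # cost performance index
    spi = ev / planned_value         # schedule performance index
    return {"EV": ev, "CV": cv, "SV": sv, "CPI": cpi, "SPI": spi}

metrics = evm_indicators(bac=1_000_000, pct_complete=0.40,
                         planned_value=450_000, actual_cost=500_000)
print(metrics)
```

A dashboard layer such as Power BI would then plot these indicators per reporting period as the schedule is updated.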
Procedia PDF Downloads 108
11712 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning
Authors: Nicholas V. Scott, Jack McCarthy
Abstract:
Military, law enforcement, and counter-terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influences high-resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction, allowing only certain modes of the electromagnetic field to be captured and providing strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imaging camera for the purpose of exploring the degree to which an imaged polarized scene of a potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.
Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization
Procedia PDF Downloads 142
11711 Improving Access and Quality of Patient Information Resources for Orthognathic Treatment: A Quality Improvement Project
Authors: Evelyn Marie Richmond, Andrew McBride, Chris Johnston, John Marley
Abstract:
Background: Good quality patient information resources for orthognathic treatment help to reinforce information delivered during the initial consultation and help patients make informed decisions about their care. The Consultant Orthodontists and a Dental Core Trainee noted limited patient engagement with the British Orthodontic Society (BOS) 'Your Jaw Surgery' online resources and that the existing BOS patient information leaflet (PIL) could be customised and developed to meet local requirements. Aim: The quality improvement project (QIP) aimed to improve patients' understanding of orthognathic treatment by ensuring at least 90% of patients had read the new in-house PIL and a minimum of 50% of patients had accessed the BOS 'Your Jaw Surgery' online resources before attending the joint orthognathic multidisciplinary clinic by June 2023. Methods: The QIP was undertaken in the orthodontic department of the School of Dentistry, Belfast. Data were collected prospectively during a 6-month period from January 2023 to June 2023 over 3 Plan, Do, Study, Act (PDSA) cycles. Suitable patients were identified at consultant orthodontic new patient clinics. Following initial consultation for orthognathic treatment, patients were contacted to complete a patient questionnaire. Design: The change ideas were a poster with a QR code directing patients to the BOS 'Your Jaw Surgery' website in consultation areas and a new in-house PIL with a QR code directing patients to the BOS 'Your Jaw Surgery' website. Results: In PDSA cycle 1, 86.7% of patients were verbally directed to the BOS 'Your Jaw Surgery' website, and 53.3% accessed the online resources after their initial consultation. Although 100% of patients reported reading the existing PIL, only 64.3% felt it discussed the risks of orthognathic treatment in sufficient detail.
By PDSA cycle 3, 100% of patients reported being directed to the BOS 'Your Jaw Surgery' website; however, only 58.3% engaged with the website. 100% of patients who read the new PIL felt that it discussed the risks of orthognathic treatment in sufficient detail. Conclusion: The slight improvement in access to the BOS 'Your Jaw Surgery' website shows that patients do not necessarily choose to access information online despite its availability. The uptake of the new PIL was greater than reported patient engagement with the BOS 'Your Jaw Surgery' website, which indicates patients still value written information despite the availability of online resources.
Keywords: orthognathic surgery, patient information resources, quality improvement project, risks
Procedia PDF Downloads 61
11710 The Legal Framework for Solid Waste Disposal and Management in Kwara State, Nigeria
Authors: Alabi Odunayo Mayowa, Ajayi Oluwasola Felix
Abstract:
Solid waste such as "garbage", "trash", "refuse", "sludge", and "rubbish" is disposed of, or is required to be disposed of, in accordance with national law. The study relies on primary and secondary sources of information. The primary sources include the Constitution, statutes, and subsidiary legislation. The secondary sources of information include books, journals, conference proceedings, newspapers, magazines, and internet materials. The information obtained from these sources is subjected to content and contextual analysis. The study examines the Kwara State Environmental Protection Agency Law, 1992 and other laws on waste disposal and management in Kwara State, Nigeria. The study also examines the regulations and the agency created by the law, i.e. the Kwara State Environmental Protection Agency, with a view to determining the inadequacies in the law.
Keywords: solid waste, waste disposal, waste management, domestic waste
Procedia PDF Downloads 478
11709 Determination of Selected Engineering Properties of Giant Palm Seeds (Borassus aethiopum) in Relation to Its Oil Potential
Authors: Rasheed Amao Busari, Ahmed Ibrahim
Abstract:
The engineering properties of giant palm seeds are crucial for the rational design of processing and handling systems. The research was conducted to investigate some engineering properties of giant palm seeds in relation to their oil potential. Ripe giant palm fruits were sourced from parts of Zaria in Kaduna State and Ado Ekiti in Ekiti State, Nigeria. The mesocarps of the collected fruits were removed to obtain the nuts, and the nuts were dried under ambient conditions for several days. The actual moisture content of the nuts at the time of the experiment was determined using a KT100S moisture meter; the moisture content ranged from 17.9% to 19.15%. The physical properties determined were axial dimensions, geometric mean diameter, arithmetic mean diameter, sphericity, true and bulk densities, porosity, angles of repose, and coefficients of friction. The nuts were measured with a vernier caliper for physical assessment of their sizes. The axial dimensions of 100 nuts were taken, and the results show that the size ranges from 7.30 to 9.32 cm for the major diameter, 7.2 to 8.9 cm for the intermediate diameter, and 4.2 to 6.33 cm for the minor diameter. The mechanical properties determined were compressive force, compressive stress, and deformation, both at peak and at break, using an Instron hydraulic universal testing machine. The work also revealed that the giant palm seed can be classified as an oil-bearing seed, giving an oil yield of 18% using the solvent extraction method. The results obtained from the study will help in solving problems of equipment design, handling, and further processing of the seeds.
Keywords: giant palm seeds, engineering properties, oil potential, moisture content, giant palm fruit
Procedia PDF Downloads 79
11708 Enhanced Production of Endo-β-1,4-Xylanase from a Newly Isolated Thermophile Geobacillus stearothermophilus KIBGE-IB29 for Prospective Industrial Applications
Authors: Zainab Bibi, Afsheen Aman, Shah Ali Ul Qader
Abstract:
Endo-β-1,4-xylanases [EC 3.2.1.8] are one of the major groups of enzymes involved in the degradation of xylan and have several applications in the food, textile, and paper processing industries. Due to the broad utility of endo-β-1,4-xylanase, researchers are focusing on increasing the productivity of this hydrolase from various microbial species. Harsh industrial conditions, a faster reaction rate, and efficient hydrolysis of xylan with a low risk of contamination are critical requirements of industry that can be fulfilled by synthesizing the enzyme with efficient properties. In the current study, a newly isolated thermophile, Geobacillus stearothermophilus KIBGE-IB29, was used in order to attain maximum production of endo-1,4-β-xylanase. The bacterial culture was isolated from soil collected around the blast furnace site of a steel processing mill in Karachi. Optimization of various nutritional and physical factors resulted in maximum synthesis of endo-1,4-β-xylanase from the thermophile. A high production yield was achieved at 60°C and pH 6.0 after 24 hours of incubation. Various nitrogen sources, viz. peptone, yeast extract, and meat extract, improved enzyme synthesis at optimum concentrations of 0.5%, 0.2%, and 0.1%, respectively. Dipotassium hydrogen phosphate (0.25%), potassium dihydrogen phosphate (0.05%), ammonium sulfate (0.05%), and calcium chloride (0.01%) were found to be valuable salts for improving the production of the enzyme. The thermophilic nature of the isolate, with its broad pH stability profile and reduced fermentation time, indicates its importance for effective xylan saccharification and for large-scale production of endo-1,4-β-xylanase.
Keywords: Geobacillus, optimization, production, xylanase
Procedia PDF Downloads 308
11707 Barriers and Challenges to a Healthy Lifestyle for Postpartum Women and the Possibilities in an Information Technology-Based Intervention: A Qualitative Study
Authors: Pernille K. Christiansen, Mette Maria Skjøth, Line Lorenzen, Eva Draborg, Christina Anne Vinter, Trine Kjær, Mette Juel Rothmann
Abstract:
Background and aims: Overweight and obesity are an increasing challenge on a global level. In Denmark, more than one-third of all pregnant women are overweight or obese, and many women exceed the gestational weight gain recommendations from the Institute of Medicine. Being overweight or obese is associated with a higher risk of adverse maternal and fetal outcomes, including gestational diabetes and childhood obesity. Thus, it is important to focus on women's lifestyles between their pregnancies to lower the risk of gestational weight retention in the long run. The objective of this study was to explore what barriers and challenges postpartum women experience with respect to healthy lifestyles during the postpartum period and to assess whether an information technology-based intervention might be a supportive tool to assist and motivate postpartum women towards a healthy lifestyle. Materials and methods: The method is inspired by participatory design. Systematic text condensation was applied to semi-structured focus groups. Five focus group interviews were carried out with a total of 17 postpartum women, and two interviews with a total of six health professionals. Participants were recruited through the municipality in Svendborg, Denmark, and at Odense University Hospital in Odense, Denmark, during a four-month period in early 2018. Results: From the women's perspective, better assistance is needed from health professionals to obtain or maintain a healthy lifestyle. The women need tools that inform and help them understand and prioritise their own health-related risks, and that motivate them to plan and take care of their own health. As the women use information technology on a daily basis, the solution could be delivered through information technology. Finally, there is room for engaging the partner more in the communication related to the baby and the family's lifestyle.
Conclusion: Postpartum women need tools that inform and motivate a healthy lifestyle postpartum. The tools should allow access to high-quality information from health care professionals when the information is needed, and should also allow engagement from the partner. Finally, information technology is a potential channel for delivering such tools.
Keywords: information technology, lifestyle, overweight, postpartum
Procedia PDF Downloads 147
11706 Uncovering the Complex Structure of Building Design Process Based on Royal Institute of British Architects Plan of Work
Authors: Fawaz A. Binsarra, Halim Boussabaine
Abstract:
The notion of complexity science has been attracting the interest of researchers and professionals due to the need to enhance understanding of the dynamics and interaction structures of complex systems. In addition, complexity analysis has been used as an approach to investigate complex systems that contain a large number of components interacting with each other to accomplish specific outcomes and give rise to emergent behavior. The design process is considered a complex activity that involves a large number of interacting components, grouped as design tasks, the design team, and the components of the design process. Those three main aspects of the building design process consist of several components that interact with each other as a dynamic system with complex information flow. In this paper, the goal is to uncover the complex structure of information interactions in the building design process. Investigating the information interactions of the Royal Institute of British Architects (RIBA) Plan of Work 2013 as a case study, and modeling these interactions with network analysis software, will uncover the structure and complexity of the building design process and significantly enhance the efficiency of its outcomes.
Keywords: complexity, process, building design, RIBA, design complexity, network, network analysis
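The information-interaction modeling the abstract describes can be pictured as a directed graph over design tasks, with simple degree counts exposing which tasks concentrate information flow. A minimal sketch, with hypothetical task names and edges rather than the actual RIBA Plan of Work 2013 content:

```python
# Sketch of a design-process information-interaction network.
# Each edge is (information source, information receiver).
# Task names and edges are hypothetical illustrations only.

INTERACTIONS = [
    ("brief", "concept design"),
    ("concept design", "developed design"),
    ("concept design", "cost plan"),
    ("cost plan", "developed design"),
    ("developed design", "technical design"),
]

def degree_counts(edges):
    """Count outgoing and incoming information flows per task."""
    out_deg, in_deg = {}, {}
    for src, dst in edges:
        out_deg[src] = out_deg.get(src, 0) + 1
        in_deg[dst] = in_deg.get(dst, 0) + 1
    return out_deg, in_deg

out_deg, in_deg = degree_counts(INTERACTIONS)
print(max(out_deg, key=out_deg.get))  # task emitting the most information
```

Dedicated network analysis software would extend this with centrality, clustering, and path measures over the full task set.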
Procedia PDF Downloads 528
11705 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data
Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim
Abstract:
Smart-card data are expected to provide information on activity patterns as an alternative to conventional person trip surveys. The focus of this study is to propose a method that trains on person trip survey data in order to infer the trip purpose missing from smart-card data. We selected only features available in smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, to train on the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution time with good performance. The validation results showed that the proposed method efficiently estimated the trip purpose. GIS data near the station and duration of stay at the destination were significant features in modeling trip purpose.
Keywords: activity pattern, data fusion, smart-card, XGBoost
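The feature set the abstract describes, spatiotemporal trip attributes combined with station-level GIS data, might be assembled along these lines before being fed to a classifier. Field names, station codes, and land-use values are hypothetical illustrations, not the study's data:

```python
from datetime import datetime

# Hypothetical land-use shares near each station (illustrative values).
GIS_NEAR_STATION = {
    "A01": {"residential": 0.7, "commercial": 0.2, "office": 0.1},
    "B14": {"residential": 0.1, "commercial": 0.3, "office": 0.6},
}

def trip_features(board_station, alight_station, board_time, alight_time,
                  next_board_time=None):
    """Build one feature vector for a smart-card trip."""
    dwell = None
    if next_board_time is not None:
        # duration of stay at the destination -- a significant feature
        dwell = (next_board_time - alight_time).total_seconds() / 3600.0
    feats = {
        "board_hour": board_time.hour,
        "weekday": board_time.weekday(),
        "travel_min": (alight_time - board_time).total_seconds() / 60.0,
        "dwell_hours": dwell,
    }
    for land_use, share in GIS_NEAR_STATION[alight_station].items():
        feats[f"dest_{land_use}"] = share
    return feats

f = trip_features("A01", "B14",
                  datetime(2018, 5, 14, 8, 10), datetime(2018, 5, 14, 8, 55),
                  next_board_time=datetime(2018, 5, 14, 18, 0))
print(f)
```

Vectors like these, labeled with the trip purpose from the survey data, would then be passed to an XGBoost classifier for training.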
Procedia PDF Downloads 247
11704 The Effect of an Abnormal Prefrontal Cortex on the Symptoms of Attention Deficit/Hyperactivity Disorder
Authors: Irene M. Arora
Abstract:
Hypothesis: Attention deficit hyperactivity disorder (ADHD) is the result of an underdeveloped prefrontal cortex, which is the primary cause of the signs and symptoms seen as defining features of ADHD. Methods: Through PubMed, Wiley, and Google Scholar, studies published between 2011 and 2018 were evaluated to determine whether a dysfunctional prefrontal cortex causes the characteristic symptoms associated with ADHD. The search terms "prefrontal cortex", "Attention-Deficit/Hyperactivity Disorder", "cognitive control", and "frontostriatal tract", among others, were used to maximize the assortment of relevant studies. Excluded papers were systematic reviews, meta-analyses, and publications published before 2010, to ensure clinical relevance. Results: Nine publications were analyzed in this review, all of which were non-randomized matched control studies. Three studies found a decrease in the functional integrity of the frontostriatal tract fibers, in conjunction with four studies finding impaired frontal cortex stimulation. Prefrontal dysfunction, specifically in medial and orbitofrontal areas, produced abnormal reward processing in ADHD patients when compared to their normal counterparts. A total of 807 subjects were studied in this review, yielding that a little over half (54%) presented with remission of symptoms in adulthood. Conclusion: While the prefrontal cortex shows the highest consistency of impaired activity and thinner volumes in patients with ADHD, this is a heterogeneous disorder, implicating the dysfunction of other neural structures in its pathophysiology as well. However, remission of ADHD symptomatology in adulthood was found to be attributable to increased prefrontal functional connectivity and integration, suggesting a key role for the prefrontal cortex in the development of ADHD.
Keywords: prefrontal cortex, ADHD, inattentive, impulsivity, reward processing
Procedia PDF Downloads 120
11703 Two-Sided Information Dissemination in Takeovers: Disclosure and Media
Authors: Eda Orhun
Abstract:
Purpose: This paper analyzes a target firm's decision to voluntarily disclose information during a takeover event and the effect of such disclosures on the outcome of the takeover. Such voluntary disclosures, especially in the form of earnings forecasts made around takeover events, may affect shareholders' assessments of the target firm's value and, in turn, the takeover result. This study aims to shed light on this question. Design/methodology/approach: The paper examines the role of voluntary disclosures by target firms during a takeover event in the likelihood of takeover success, both theoretically and empirically. A game-theoretic model is set up to analyze the voluntary disclosure decision of a target firm to inform shareholders about its real worth. The empirical implication of the model is tested by employing binary outcome models, where the disclosure variable is obtained by identifying the target firms in the sample that provide positive news by issuing increasing management earnings forecasts. Findings: The model predicts that a voluntary disclosure of positive information by the target decreases the likelihood that the takeover succeeds. The empirical analysis confirms this prediction by showing that positive earnings forecasts by target firms during takeover events increase the probability of takeover failure. Overall, it is shown that information dissemination through voluntary disclosures by target firms is an important factor affecting takeover outcomes. Originality/value: This study is the first, to the author's knowledge, to examine the impact of voluntary disclosures by the target firm during a takeover event on the likelihood of takeover success. The results contribute to the information economics, corporate finance, and M&A literatures.
Keywords: takeovers, target firm, voluntary disclosures, earnings forecasts, takeover success
Procedia PDF Downloads 318
11702 A Context-Sensitive Algorithm for Media Similarity Search
Authors: Guang-Ho Cha
Abstract:
This paper presents a context-sensitive media similarity search algorithm. One of the central problems in media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of them. This is because the notion of similarity is usually based on high-level abstraction, but the low-level features do not always reflect human perception. Many media search algorithms have used the Minkowski metric to measure the similarity between image pairs. However, those functions cannot adequately capture the characteristics of the human visual system, nor the nonlinear relationships in the contextual information given by the images in a collection. Our search algorithm tackles this problem by employing a similarity measure and a ranking strategy that reflect the nonlinearity of human perception and the contextual information in a dataset. Similarity search in an image database based on this contextual information shows encouraging experimental results.
Keywords: context-sensitive search, image search, similarity ranking, similarity search
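The abstract does not spell out its measure, but the general idea of making a base metric context-sensitive can be sketched by rescaling each pairwise distance by the local neighborhood scale of both items. This is an illustrative scheme under that assumption, not the authors' exact algorithm:

```python
import math

# Illustrative contextual re-ranking: a raw Euclidean (Minkowski p=2)
# distance is rescaled by the local neighborhood scale of both items,
# so similarity becomes sensitive to the surrounding collection.
# A generic scheme for exposition, not the paper's exact measure.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contextual_distances(features, k=2):
    n = len(features)
    raw = [[euclidean(features[i], features[j]) for j in range(n)]
           for i in range(n)]
    # mean distance from each item to its k nearest neighbors (its "context")
    scale = [sum(sorted(row)[1:k + 1]) / k for row in raw]
    return [[raw[i][j] / math.sqrt(scale[i] * scale[j]) if i != j else 0.0
             for j in range(n)]
            for i in range(n)]

feats = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (5.0, 5.0)]
ctx = contextual_distances(feats)
print([round(d, 3) for d in ctx[0]])
```

The effect is that a fixed raw distance counts as "near" in a sparse region of the collection and "far" in a dense one, which is one way contextual information can reshape a ranking.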
Procedia PDF Downloads 365
11701 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in generative artificial intelligence (GenAI), in particular large language models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucination", the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to provide market insights and data visualization with high accuracy and extensive coverage, abstracting away the complexities for real estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, DataSense, PropertyGuru
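The plan-then-execute loop the abstract outlines can be sketched as follows, with the LLM calls stubbed out by fixed strings. This is a toy illustration: a production system would prompt an LLM for both the plan and the code, and would sandbox execution of model-generated code.

```python
# Sketch of the planning-and-execution plus code-generation pattern.
# `plan` and `generate_code` stand in for LLM calls; the data and
# sub-tasks are hypothetical illustrations.

def plan(task):
    """Stub planner: decompose an analytical task into sub-tasks."""
    return ["load the transactions table", "compute median price by district"]

def generate_code(subtask):
    """Stub code generator: return an executable snippet per sub-task."""
    snippets = {
        "load the transactions table":
            "rows = [('D1', 500_000), ('D1', 700_000), ('D2', 300_000)]",
        "compute median price by district": (
            "groups = {}\n"
            "for d, p in rows:\n"
            "    groups.setdefault(d, []).append(p)\n"
            "result = {d: sorted(ps)[len(ps) // 2] for d, ps in groups.items()}"
        ),
    }
    return snippets[subtask]

def run(task):
    scope = {}
    for subtask in plan(task):          # 1. plan: task -> sub-tasks
        code = generate_code(subtask)   # 2. generate code per sub-task
        exec(code, scope)               # 3. execute (sandbox in production!)
    return scope["result"]              # 4. synthesize the final answer

print(run("median transaction price by district"))
```

Because the numerical work happens in executed code rather than in the model's free-form text, the final response is grounded in the computed output rather than in the model's guesses.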
Procedia PDF Downloads 88
11700 Challenges in E-Government: Conceptual Views and Solutions
Authors: Rasim Alguliev, Farhad Yusifov
Abstract:
Considering international experience, the conceptual and architectural principles of forming electronic government are researched, and some suggestions are made. The assessment and monitoring of the processes of forming electronic government, the intellectual analysis of web resources, the provision of information security, and problems of electronic democracy were researched, and conceptual approaches were suggested. Taking into consideration the main principles of electronic government theory, important research directions were specified.
Keywords: electronic government, public administration, information security, web analytics, social networks, data mining
Procedia PDF Downloads 474
11699 Automation of Embodied Energy Calculations for Buildings through Building Information Modelling
Authors: Ahmad Odeh
Abstract:
Researchers are currently more concerned with energy calculations at the operational stage, mainly due to its larger environmental impact, but the fact remains that embodied energy represents a substantial contributor unaccounted for in the overall energy computation method. The calculation of materials' embodied energy during the construction stage is complicated due to the various factors involved. The equipment used, fuel needed, and electricity required vary with the type of material and with location, and thus the embodied energy will differ for each project. Moreover, the methods used in manufacturing, transporting, and putting materials in place have a significant influence on their embodied energy. This variability has made it difficult to calculate, or even benchmark, the usage of such energies. This paper presents a model aimed at calculating embodied energies based on such variability. It presents a systematic approach that uses an efficient method of calculation to provide a new insight for the selection of construction materials. The model is developed in a BIM environment. The quantification of materials' energy is determined over the three main stages of their lifecycle: manufacturing, transporting, and placing. The model uses three major databases, each of which contains a set of the construction materials most commonly used in building projects. The first dataset holds information about the energy required to manufacture any type of material; the second includes information about the energy required for transporting the materials; the third stores information about the energy required by machinery to place the materials in their intended locations. Through geospatial data analysis, the model automatically calculates the distances between the suppliers and construction sites and then uses the dataset information for energy computations. The computational sum of all the energies is calculated automatically, and the model then provides designers with a list of usable equipment along with the associated embodied energies.
Keywords: BIM, lifecycle energy assessment, building automation, energy conservation
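The three-stage computation can be sketched as follows. The energy coefficients and the material take-off are assumed placeholders, not values from the paper; a real implementation would read quantities from the BIM model and distances from the geospatial analysis:

```python
# Sketch of the three-stage embodied energy computation
# (manufacturing + transport + placement). All coefficients below are
# hypothetical illustrations, not figures from the paper or any database.

MANUFACTURING = {"concrete": 0.95, "steel": 20.1}  # MJ per kg (assumed)
TRANSPORT = {"truck": 0.0016}                      # MJ per kg per km (assumed)
PLACEMENT = {"concrete": 0.08, "steel": 0.35}      # MJ per kg (assumed)

def embodied_energy(material, mass_kg, distance_km, mode="truck"):
    """Total embodied energy of one material over its three lifecycle stages."""
    manufacture = MANUFACTURING[material] * mass_kg
    transport = TRANSPORT[mode] * mass_kg * distance_km
    placement = PLACEMENT[material] * mass_kg
    return manufacture + transport + placement

# (material, quantity in kg, supplier-to-site distance in km) -- assumed take-off
takeoff = [("concrete", 50_000, 25), ("steel", 8_000, 140)]
total = sum(embodied_energy(m, kg, km) for m, kg, km in takeoff)
print(round(total, 1))
```

The supplier-to-site distances here are hard-coded; in the described model they would be derived automatically from the geospatial data analysis.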
Procedia PDF Downloads 189
11698 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out, as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as computed tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans, while investigating the impact of image format. Material and methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was labeled 1, while normal images were labeled 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest area under the curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
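The intensity pre-processing steps described above (HU windowing, normalization, zero-centering) can be sketched as follows, shown on a flat list of voxels for clarity rather than the full 128 × 128 × 60 volume, which in practice would be handled with an array library:

```python
# Sketch of CT intensity pre-processing: clip values to the (-1000, 400) HU
# window, scale to [0, 1], then zero-center by subtracting the mean.
# Operates on a flat voxel list for illustration; the sample values are made up.

HU_MIN, HU_MAX = -1000.0, 400.0

def preprocess(voxels_hu):
    clipped = [min(max(v, HU_MIN), HU_MAX) for v in voxels_hu]
    scaled = [(v - HU_MIN) / (HU_MAX - HU_MIN) for v in clipped]  # -> [0, 1]
    mean = sum(scaled) / len(scaled)
    return [v - mean for v in scaled]                             # zero-center

out = preprocess([-2000.0, -1000.0, -300.0, 0.0, 400.0, 1500.0])
print([round(v, 3) for v in out])
```

Clipping first means extreme values such as metal artifacts or air outside the body cannot dominate the normalization range.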
Procedia PDF Downloads 88
11697 Application of Fair Value Accounting in an Emerging Market: Algerian Case
Authors: Haouam Djemaa
Abstract:
This study aimed to identify the possibility of applying fair value accounting by Algerian enterprises listed in the capital market (Algiers Stock Exchange). To achieve the objectives of this study, we conducted interviews with preparers of accounting information. The results document that enterprises are aware of fair value accounting in financial reporting because of its ability to provide useful accounting information, but its application depends on the availability of favorable circumstances, which are missing in the Algerian environment.Keywords: fair value, financial reporting, accounting information, valuation method
Procedia PDF Downloads 393
11696 Incorporating Information Gain in Regular Expressions Based Classifiers
Authors: Rosa L. Figueroa, Christopher A. Flores, Qing Zeng-Treitler
Abstract:
A regular expression consists of a sequence of characters that describes a text pattern. Usually, in clinical research, regular expressions are manually created by programmers together with domain experts. Lately, there have been several efforts to investigate how to generate them automatically. This article presents a text classification algorithm based on regexes. The algorithm, named REX, was designed and then implemented as a simplified method to create regexes that classify Spanish text automatically. In order to classify ambiguous cases, such as when multiple labels are assigned to a testing example, REX includes an information gain method. Two sets of data were used to evaluate the algorithm’s effectiveness in clinical text classification tasks. The results indicate that the regular expression based classifier proposed in this work performs statistically better regarding accuracy and F-measure than Support Vector Machine and Naïve Bayes for both datasets.Keywords: information gain, regular expressions, smith-waterman algorithm, text classification
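The information-gain idea used to rank a regex feature can be sketched as a simple binary split on whether the expression matches; the function names and toy Spanish snippets below are illustrative, not taken from REX itself:

```python
import math
import re
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(texts, labels, pattern):
    """Gain of a regex feature: H(labels) minus entropy after splitting on matches."""
    regex = re.compile(pattern)
    groups = {True: [], False: []}
    for text, label in zip(texts, labels):
        groups[bool(regex.search(text))].append(label)
    gain = entropy(labels)
    for subset in groups.values():
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Toy snippets: here the regex separates the two classes perfectly,
# so the gain equals the full label entropy (1 bit).
texts = ["dolor toracico", "sin dolor", "dolor abdominal", "paciente estable"]
labels = ["pain", "no-pain", "pain", "no-pain"]
gain = information_gain(texts, labels, r"dolor\s+\w+")
```

A classifier can compute this gain for each candidate regex and prefer the higher-scoring one when several patterns fire on the same example.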
Procedia PDF Downloads 320
11695 For Post-traumatic Stress Disorder Counselors in China, the United States, and around the Globe, Cultural Beliefs Offer Challenges and Opportunities
Authors: Anne Giles
Abstract:
Trauma is generally defined as an experience, or multiple experiences, overwhelming a person's ability to cope. Over time, many people recover from the neurobiological, physical, and emotional effects of trauma on their own. For some people, however, troubling symptoms develop over time that can result in distress and disability. This cluster of symptoms is classified as Post-traumatic Stress Disorder (PTSD). People who meet the criteria for PTSD and other trauma-related disorder diagnoses often hold a set of understandable but unfounded beliefs about traumatic events that cause undue suffering. Becoming aware of unhelpful beliefs—termed "cognitive distortions"—and challenging them is the realm of Cognitive Behavior Therapy (CBT). A form of CBT found by researchers to be especially effective for PTSD is Cognitive Processing Therapy (CPT). Through the compassionate use of CPT, people identify, examine, challenge, and relinquish unhelpful beliefs, thereby reducing symptoms and suffering. Widely-held cultural beliefs can interfere with the progress of recovery from trauma-related disorders. Although highly revered, largely unquestioned, and often stabilizing, cultural beliefs can be founded in simplistic, dichotomous thinking, i.e., things are all right, or all wrong, all good, or all bad. The reality, however, is nuanced and complex. After studying examples of cultural beliefs from China and the United States and how these might interfere with trauma recovery, trauma counselors can help clients derive criteria for preserving helpful beliefs, discover, examine, and jettison unhelpful beliefs, reduce trauma symptoms, and live their lives more freely and fully.Keywords: cognitive processing therapy (CPT), cultural beliefs, post-traumatic stress disorder (PTSD), trauma recovery
Procedia PDF Downloads 250
11694 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition
Authors: Ali Nadi, Ali Edrissi
Abstract:
Relief demand and transportation link availability are essential information for every natural disaster operation, yet this information is not at hand once a disaster strikes. In related work, relief demand and network condition have been evaluated with prediction methods. Nevertheless, predictions tend to over- or underestimate due to uncertainties and may lead to a failed operation. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of the emergency response, the proposed model uses reinforcement learning to optimize the total relief assessment time. The proposed model is tested on a real-sized network problem. The simulation results indicate that the proposed model performs well in collecting real-time information.Keywords: disaster management, real-time demand, reinforcement learning, relief demand
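The abstract does not give the model's formulation, but the reinforcement-learning ingredient can be illustrated with a minimal tabular Q-learning sketch on a hypothetical chain of road links, where the agent is rewarded for completing the assessment; all states, actions, and rewards here are toy assumptions, not the paper's:

```python
import random

# Hypothetical toy network: five road links in a chain; the agent assesses
# links and earns a reward for reaching the final one.
N_STATES, N_ACTIONS = 5, 2          # action 0: re-check link, action 1: move on
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """Toy transition: moving on advances one link; reaching the end pays 1."""
    nxt = min(state + action, N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                # training episodes
    s = 0
    while s < N_STATES - 1:
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)                   # explore
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])  # exploit
        nxt, r = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# Greedy policy after training: advance at every non-terminal link.
policy = [max(range(N_ACTIONS), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
```

In the paper's setting the state and reward would instead encode assessed demand and link conditions, with the objective of minimizing total assessment time.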
Procedia PDF Downloads 316
11693 Debunking Sexual Myths in Bangladesh through an Intervention on the Internet
Authors: E. Rommes, Els Toonen, Rahil Roodsaz, Suborna Camellia, Farhana Alam, Saad Khan, Jhalok Ranjon Talukder, Tanveer Hassan, Syeda Farjana Ahmed, Sabina Faiz Rashid
Abstract:
In Bangladesh, a country in which adults (both parents and teachers) find it particularly hard to speak with youth about sexuality, adolescents seem to struggle with various insecurities about their sexual feelings, thoughts, behavior and physical characteristics. On the basis of a large number of interviews and focus groups with rural and urban Bangla adolescent girls and boys of lower and middle class as part of the large-scale three-year project ‘Breaking the Shame’, we have identified ten sexual themes or ‘myths’ that youth struggle with most. These encompass, among others, beliefs and insecurities about masturbation, discharge, same-sex behavior and feelings, the effects of watching porn, and gender norms. We argue that the Internet is a particularly suitable medium to ‘debunk’ those myths, as youth can consult it anonymously and privately and so avoid social shame. Moreover, among the myths, we have identified two kinds which may need different debunking techniques. One kind of myth concerns scientifically uncontested, generally biological information, such as the effects of having sex with a pregnant woman, questions on the effects of a penile or vaginal discharge, or questions on the effects of masturbation. The second kind of myth involves more diverse information sources and deals with, e.g., religious or culturally specific norms, such as on the meaning and existence of homosexuality or gender-appropriate norms of behavior in Bangladesh. For addressing both kinds of myths, expert information including a wealth of references to information resources needs to be provided, which the Internet is very suitable for. For the second kind of myths, adolescents also need to learn how to deal with sometimes conflicting norms and information sources, and they need to develop and reflect on their own opinions as part of their identity formation.
On the basis of a literature review, we thus distinguish general information needs from identity formation needs, which includes the need to be able to relate information and opinions to one’s own opinions and situation. Hence, we argue that youth not only need abstract expert information to be able to debunk sexual myths, but also the option to discuss this information with other adolescents and compare their own situation and opinions with other peers, who in that way serve as ‘warm experts’ for each other. In this paper, we will describe the outcomes of our qualitative study above. In addition, we will present our findings of an intervention by presenting youth with general, uncontested information on the Internet with additional peer discussion options to compare the debunking effects on different kinds of myths.Keywords: peer discussion, intervention, sexual myths, shame
Procedia PDF Downloads 216
11692 Competition between Verb-Based Implicit Causality and Theme Structure's Influence on Anaphora Bias in Mandarin Chinese Sentences: Evidence from Corpus
Authors: Linnan Zhang
Abstract:
Linguists, as well as psychologists, have shown great interest in implicit causality in reference processing. However, the most frequently used approaches to this issue are psychological experiments (such as eye tracking or self-paced reading). This research is corpus-based and assisted by the statistical software R. The main focus of the present study is the competition between verb-based implicit causality and theme structure’s influence on anaphora bias in Mandarin Chinese sentences. In Accessibility Theory, it is believed that salience, which is also known as accessibility, and relevance are two important factors in reference processing. Theme structure, which is a special syntactic structure in Chinese, determines the salience of an antecedent on the syntactic level, while verb-based implicit causality is a key factor in the relevance between antecedent and anaphora. Therefore, this is a study about anaphora that combines psychology with linguistics. With analysis of the sentences from the corpus as well as Multinomial Logistic Regression, the major findings of the present study are as follows: 1. When the sentence is stated in a ‘cause-effect’ structure, the theme structure will always be the antecedent no matter whether forward-biased or backward-biased verbs co-occur; in non-theme structure, the anaphora bias will tend to be the opposite of the verb bias. 2. When the sentence is stated in an ‘effect-cause’ structure, theme structure will not always be the antecedent and the influence of verb-based implicit causality will outweigh that of theme structure; moreover, the anaphora bias will be the same as the bias of the verbs. All the results indicate that implicit causality functions conditionally and the noun in theme structure will not be the high-salience antecedent under all circumstances.Keywords: accessibility theory, anaphora, theme structure, verb-based implicit causality
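The study fits its model in R; for illustration, a minimal Python sketch of a Multinomial Logistic Regression on synthetic stand-in predictors follows. The feature encoding and the toy labeling rule are assumptions for the example, not the paper's coding scheme:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the corpus coding: each row holds three binary
# predictors (theme structure present, forward-biased verb, cause-effect order).
X = rng.integers(0, 2, size=(200, 3))

# Toy labeling rule for the antecedent choice (illustrative only):
# theme structure in cause-effect order -> class 1; otherwise a
# forward-biased verb -> class 2; else class 0.
y = np.where((X[:, 0] == 1) & (X[:, 2] == 1), 1,
             np.where(X[:, 1] == 1, 2, 0))

# Multinomial logistic regression over the three antecedent classes.
model = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = model.score(X, y)
```

The fitted coefficients per class then play the role of the corpus study's effect estimates for theme structure, verb bias, and clause order.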
Procedia PDF Downloads 199
11691 The Role of Information and Communication Technology in Early Childhood Education as Perceived by Early Childhood Teachers
Authors: Rabia Khalil
Abstract:
The aim of the study is to find out the perceptions of early childhood education teachers about the role and implementation of information and communication technology (ICT) in early childhood education. The main purpose of the study is to investigate the role of ICT in early childhood education as perceived by early childhood education teachers. The objectives of the study were to identify the roles of ICT in today’s early years, the impacts of ICT in early childhood education, and how teachers can implement this technology for the development of student skills. This is a quantitative research in which a survey study was conducted. The population of the study was the primary teachers of the public and private primary schools of Lahore. Using a random sampling technique, the sample consisted of 300 teachers from 52 primary schools of Lahore, of whom 260 responded. A questionnaire was developed for primary school teachers, based on a Likert-type scale ranging from strongly agree to strongly disagree. Data were analyzed using descriptive analysis: the data were arranged and then entered into the Statistical Package for the Social Sciences (SPSS), version 15.
Procedia PDF Downloads 345
11690 Microencapsulation of Phenobarbital by Ethyl Cellulose Matrix
Authors: S. Bouameur, S. Chirani
Abstract:
The aim of this study was to evaluate the potential use of ethyl cellulose in the preparation of microspheres as a drug delivery system for sustained release of phenobarbital. The microspheres were prepared by a solvent evaporation technique using ethyl cellulose as the polymer matrix at a 1:2 ratio, dichloromethane as the solvent, and polyvinyl alcohol 1% as the processing medium to solidify the microspheres. Size, shape, drug loading capacity and entrapment efficiency were studied.Keywords: phenobarbital, microspheres, ethylcellulose, polyvinyl alcohol
Procedia PDF Downloads 361
11689 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, that is, the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts. Firstly, a clustering algorithm is used to estimate the mixing matrix according to the observed signals. Then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm is proposed to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratios (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The results of simulations show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.Keywords: DBSCAN, potential function, speech signal, the UBSS model
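The clustering-based estimation step can be illustrated on a noiseless toy version of the paper's setting (four sources, two channels): with ideally sparse sources, each observation lies along one column of the mixing matrix, so grouping observation directions recovers the columns. In this noiseless sketch the clustering degenerates to grouping equal angles; noisy data is where DBSCAN or potential-function methods come in. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of the paper's example: 4 sparse sources, 2 channels.
true_angles = np.array([0.3, 0.9, 1.5, 2.1])     # directions of A's columns
A = np.vstack([np.cos(true_angles), np.sin(true_angles)])

# Ideally sparse sources: exactly one source is active at each time point.
T = 2000
active = rng.integers(0, 4, size=T)
S = np.zeros((4, T))
S[active, np.arange(T)] = rng.normal(size=T)
X = A @ S                                        # 2 x T observed mixture

# Each observation lies on the line spanned by one mixing column; fold its
# direction into [0, pi) and group equal directions to recover the columns.
keep = np.linalg.norm(X, axis=0) > 1e-9
theta = np.mod(np.arctan2(X[1, keep], X[0, keep]), np.pi)
est_angles = np.unique(np.round(theta, 6))
A_est = np.vstack([np.cos(est_angles), np.sin(est_angles)])
```

With the estimated columns in hand, the second stage of sparse component analysis separates the sources given `A_est`.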
Procedia PDF Downloads 135
11688 Recent Developments in the Application of Deep Learning to Stock Market Prediction
Authors: Shraddha Jain Sharma, Ratnalata Gupta
Abstract:
Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult to do since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy than traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area with their prominent features, and the significant problems or issue domain that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to correctly predict based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high accuracy prediction, with a focus on the most important characteristics.Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume
Procedia PDF Downloads 90
11687 Well-Being and Helping Technology for Retired Population in Finland
Authors: R. Pääkkönen, L. Korpinen
Abstract:
This study aimed to evaluate parameters influencing well-being and how to maintain well-being as long as possible after retirement. There is contradictory information on health changes after retirement in Finland. This work is based on interviews, statistics, and a literature evaluation for Finland. Most often, balance, multitasking reaction time, and adaptation of vision in dim and dark areas worsen. Slowing is one characteristic that is difficult to measure properly. The most important task is to determine ways to manage daily activities and symptoms of disease after retirement. Medicine is advancing, but problems often also lie on the economic side. Information on technical aids is important, and it is worth planning for retirement age.Keywords: retirement, working, aging, wellness
Procedia PDF Downloads 238
11686 Challenges and Success Factors in Introducing Information Systems for Students' Online Registration
Authors: Stanley Fore, Sharon Chipeperekwa
Abstract:
The start of the 2011 academic year in South Africa saw a number of institutions of higher learning introducing online registration for their students. The efficiency and effectiveness of information systems are increasingly becoming a necessity and not an option for many organizations. An information system should allow end users to access information easily and navigate with ease. The selected University of Technology (UoT) in this research is one of the largest public institutions of higher learning in the Western Cape Province and boasts an enrolment of more than 30000 students per academic year. An observation was made that, during registration, students stand in long queues waiting to register or for assistance to register. The system tends to ‘freeze’ whilst students are registering, and students are in most cases unfamiliar with the system interface. They constantly have to enquire what to do next when going through the online registration process. A mixed-method approach, comprising quantitative and qualitative approaches, will be adopted. The study uses constructs of the updated DeLone and McLean IS success model (2003) to analyse and explain students’ perceptions of the online registration system. The research was undertaken to establish students’ perceptions of the online registration system. It seeks to identify and analyse the challenges and success factors of introducing an online registration system whilst highlighting the extent to which this system has been able to solve the numerous problems associated with the manual era. The study will assist management and those responsible for managing the current system to determine how well the system is working or not working to achieve user satisfaction. It will also assist them going forward on what to consider before, during and after implementation of an information system.
Respondents will be informed of the objectives of the research, and their consent to participate will be sought. Ethical considerations that will be applied to this study include informed consent, protection from harm, the right to privacy, and voluntary involvement in the research.Keywords: online registration, information systems, University of Technology, end-users
Procedia PDF Downloads 258
11685 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; therefore it can be integrated into mechanisms for reducing carbon emissions. Particularly in sub-Saharan Africa, the constraint is the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?” as a first step toward quantifying carbon stocks in different systems. Remote sensing (RS) is the most efficient approach to map a technology as dynamic as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are getting more and more accessible, and the archives are growing exponentially. To retrieve useful information that supports decision-making out of this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels, used for efficient access and analysis.
A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach in its initial stage is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems, together with qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, and the field data will be used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report of the Burkina Faso data cube.Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
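The unsupervised stratification step can be sketched as follows, using a synthetic NDVI time series and k-means as a stand-in clustering algorithm (the abstract does not name the one used); all sizes and values are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def ndvi(nir, red):
    """Normalized difference vegetation index, guarded against division by zero."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

rng = np.random.default_rng(42)

# Synthetic stand-in for the 2010-2018 series: 100 pixels x 9 yearly NDVI
# composites drawn around three distinct vegetation regimes.
regime = rng.integers(0, 3, size=100)
centers = np.array([0.2, 0.5, 0.7])
series = centers[regime][:, None] + rng.normal(0, 0.03, size=(100, 9))

# Unsupervised stratification of the NDVI time series into three strata
# (the study identified fifteen over the whole country).
strata = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(series)
```

Sampling locations would then be drawn per stratum, and the later supervised random forest classification trained on the field observations.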
Procedia PDF Downloads 145