Search results for: intelligent techniques
4090 Influence of Colonial Architecture on South Indian Vernacular Constructions: A Case of Venkatagiri in Andhra Pradesh, India
Authors: Jahnavi Priya Alluri, Sarang Barbarwar
Abstract:
With over 6000 years of sustained civilization, India has been home to diverse social customs and various communities. The country's culture and architecture have been profoundly shaped by the extensive variation in its geography and climatic conditions. Many kingdoms have ruled in the South Indian state of Andhra Pradesh, and the vernacular constructions of the region progressed considerably over this period. The paper discusses the impact on vernacular architecture in Venkatagiri, Andhra Pradesh, after the arrival of the British. The town was a small settlement that traces its roots to the Vijayanagara Empire. The study highlights how colonial influences amalgamated with local construction techniques and material usage. It discusses the new variation in architectural style through the case of Venkatagiri Palace and its precincts. The paper also discusses how this influence varied across the social and economic groups of the town's old city.
Keywords: vernacular architecture, colonial architecture, Venkatagiri, south Indian vernacular
Procedia PDF Downloads 233
4089 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings
Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies
Abstract:
With the world climate projected to warm and major cities in developing countries becoming increasingly populated and polluted, governments are faced with the problem of overheating and air quality in residential buildings. This paper presents the development of an adaptable model of these risks. Simulations are performed using the EnergyPlus building physics software. An accurate metamodel is formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels are used to vastly reduce the amount of computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm when forming a metamodel. These techniques were implemented using the PyBrain and scikit-learn Python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating overheating and air pollution metrics modelled by EnergyPlus.
Keywords: neural networks, radial basis functions, metamodelling, python machine learning libraries
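A minimal sketch of such a metamodel comparison, on synthetic stand-ins for the randomly sampled EnergyPlus inputs and outputs (not the authors' data). Since PyBrain is no longer maintained, both surrogates are shown here with scikit-learn: an MLP for the NN side and an RBF-kernel kernel ridge model for the RBF side.

```python
# Compare an NN metamodel with an RBF metamodel on a synthetic surrogate task.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 8))  # stand-ins for building inputs (U-values, infiltration, ...)
y = X @ rng.uniform(1, 3, 8) + np.sin(4 * X[:, 0]) + rng.normal(0, 0.1, 2000)  # hypothetical overheating metric

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_tr, y_tr)
rbf = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X_tr, y_tr)

print("NN  R^2:", r2_score(y_te, nn.predict(X_te)))
print("RBF R^2:", r2_score(y_te, rbf.predict(X_te)))
```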
Procedia PDF Downloads 447
4088 A Two-Dimensional Problem Micropolar Thermoelastic Medium under the Effect of Laser Irradiation and Distributed Sources
Authors: Devinder Singh, Rajneesh Kumar, Arvind Kumar
Abstract:
The present investigation deals with the deformation of a micropolar generalized thermoelastic solid subjected to thermo-mechanical loading due to a thermal laser pulse. Laplace and Fourier transform techniques are used to solve the problem. Thermo-mechanical laser interactions are taken as distributed sources to describe the application of the approach. The closed-form expressions of normal stress, tangential stress, couple stress and temperature are obtained in the transform domain. A numerical inversion technique for the Laplace and Fourier transforms has been applied to obtain the resulting quantities in the physical domain after developing a computer program. The normal stress, tangential stress, couple stress and temperature are depicted graphically to show the effect of relaxation times. Some particular cases of interest are deduced from the present investigation.
Keywords: pulse laser, integral transform, thermoelastic, boundary value problem
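As a sketch of the transform machinery the abstract refers to (the notation below is assumed for orientation, not taken from the paper), the Laplace transform in time and the Fourier transform in the spatial coordinate reduce the governing field equations to algebraic form:

```latex
% Laplace transform in time t and Fourier transform in space x,
% applied to a field f(x, t); s and \xi are the transform variables.
\bar{f}(x,s) = \int_{0}^{\infty} f(x,t)\, e^{-st}\, \mathrm{d}t, \qquad
\hat{\bar{f}}(\xi,s) = \int_{-\infty}^{\infty} \bar{f}(x,s)\, e^{\mathrm{i}\xi x}\, \mathrm{d}x
```

The closed-form stress and temperature expressions are obtained in the (ξ, s) domain and then inverted numerically back to (x, t).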
Procedia PDF Downloads 616
4087 The Role of Artificial Intelligence in Concrete Constructions
Authors: Ardalan Tofighi Soleimandarabi
Abstract:
Artificial intelligence has revolutionized the concrete construction industry and improved processes by increasing efficiency, accuracy, and sustainability. This article examines the applications of artificial intelligence in predicting the compressive strength of concrete, optimizing mixing plans, and improving structural health monitoring systems. Artificial intelligence-based models, such as artificial neural networks (ANN) and combined machine learning techniques, have shown better performance than traditional methods in predicting concrete properties. In addition, artificial intelligence systems have made it possible to improve quality control and real-time monitoring of structures, which helps in preventive maintenance and increases the life of infrastructure. Also, the use of artificial intelligence plays an effective role in sustainable construction by optimizing material consumption and reducing waste. Although the implementation of artificial intelligence is associated with challenges such as high initial costs and the need for specialized training, it will create a smarter, more sustainable, and more affordable future for concrete structures.
Keywords: artificial intelligence, concrete construction, compressive strength prediction, structural health monitoring, stability
Procedia PDF Downloads 15
4086 Preparation on Sentimental Analysis on Social Media Comments with Bidirectional Long Short-Term Memory Gated Recurrent Unit and Model Glove in Portuguese
Authors: Leonardo Alfredo Mendoza, Cristian Munoz, Marco Aurelio Pacheco, Manoela Kohler, Evelyn Batista, Rodrigo Moura
Abstract:
Natural Language Processing (NLP) techniques are increasingly powerful at interpreting the feelings and reactions of a person to a product or service. Sentiment analysis has become a fundamental tool for this interpretation but has few applications in languages other than English. This paper presents a sentiment analysis classification with a base of comments from social networks in Portuguese. A word-embedding representation was used with a 50-dimension GloVe pre-trained model, generated from a corpus entirely in Portuguese. To generate this classification, bidirectional Long Short-Term Memory (BI-LSTM) and bidirectional Gated Recurrent Unit (GRU) models are used, reaching results of 99.1%.
Keywords: natural language processing, sentiment analysis, bidirectional long short-term memory, BI-LSTM, gated recurrent unit, GRU
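A minimal sketch of the architecture described, assuming a precomputed 50-dimension GloVe matrix for a Portuguese vocabulary; layer sizes and vocabulary size are illustrative choices, not the authors' configuration.

```python
# GloVe-initialised bidirectional recurrent sentiment classifiers (BI-LSTM / BI-GRU).
import numpy as np
from tensorflow.keras import layers, models, initializers

vocab_size, emb_dim = 20000, 50
glove = np.zeros((vocab_size, emb_dim), dtype="float32")  # placeholder: load real GloVe vectors here

def build(rnn_cell):
    return models.Sequential([
        layers.Embedding(vocab_size, emb_dim,
                         embeddings_initializer=initializers.Constant(glove),
                         trainable=False),       # frozen pre-trained embeddings
        layers.Bidirectional(rnn_cell),          # reads the comment in both directions
        layers.Dense(1, activation="sigmoid"),   # positive / negative sentiment
    ])

bilstm = build(layers.LSTM(64))
bigru = build(layers.GRU(64))
bilstm.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```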
Procedia PDF Downloads 159
4085 Annotation Ontology for Semantic Web Development
Authors: Hadeel Al Obaidy, Amani Al Heela
Abstract:
The main purpose of this paper is to examine the concept of the semantic web and the role that ontology and semantic annotation play in the development of semantic web services. The paper focuses on semantic web infrastructure, illustrating how ontology and annotation work to provide the learning capabilities for building content semantically. To improve the productivity and quality of software, the paper applies approaches, notations and techniques offered by software engineering. It proposes a conceptual model to develop semantic web services for the infrastructure of the web information retrieval system of digital libraries. The developed system uses ontology and annotation to build a knowledge-based system that defines and links the meaning of web content to retrieve information for users' queries. Results are made more relevant through keyword and ontology rule expansion, satisfying the requested information more accurately. The level of result accuracy is enhanced since queries are semantically analyzed against the conceptual architecture of the proposed system.
Keywords: semantic web services, software engineering, semantic library, knowledge representation, ontology
Procedia PDF Downloads 173
4084 The Impact of Emotional Intelligence on Organizational Performance
Authors: El Ghazi Safae, Cherkaoui Mounia
Abstract:
Within companies, emotions have been forgotten as key elements of successful management systems, seen as factors that disturb judgment, provoke reckless acts or negatively affect decision-making. This is because management systems were influenced by the Taylorist image of the worker, which made work regular and plain and considered employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements of high-level management. The work of Elton Mayo and Kurt Lewin reveals the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes: for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance and team commitment. Emotions have become fundamental and indispensable to individual performance and hence to management efficiency. The idea that a person's potential is associated with intellectual intelligence, measured by IQ, as the main factor of social, professional and even sentimental success is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance showed that emotionally intelligent managers perform better, attain remarkable results, are able to achieve organizational objectives, influence the mood of their subordinates and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it is worth questioning the importance of emotional intelligence: does it impact organizational performance, and how? The literature highlights that the measurement and conceptualization of emotional intelligence are difficult to define. Efforts to measure emotional intelligence have identified three models that are most prominent: the ability model, the mixed model and the trait model. The first treats emotional intelligence as a cognitive skill, the second mixes emotional skills with personality-related aspects, and the third intertwines it with personality traits. Despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, partly because, even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness that surrounds it. Given the above, this article provides an overview of the research related to emotional intelligence, particularly focusing on studies that investigated its impact on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance and show how it impacts companies' performance.
Keywords: emotions, performance, intelligence, firms
Procedia PDF Downloads 108
4083 Applications of Nanoparticles via Laser Ablation in Liquids: A Review
Authors: Fawaz M. Abdullah, Abdulrahman M. Al-Ahmari, Madiha Rafaqat
Abstract:
Laser ablation of a solid target in liquid makes it possible to fabricate nanoparticles (NPs) of metals or of various material compositions such as alloys, oxides, carbides and hydroxides. The fabrication of NPs in liquids based on laser ablation has grown rapidly in recent decades compared to other techniques. Nowadays, laser ablation has been improved to prepare different types of NPs with special morphologies, microstructures, phases and sizes, which can be applied in various fields. The paper reviews and highlights the different sizes, shapes and application fields of nanoparticles produced by laser ablation under different liquids and materials. The paper also provides a case study of titanium NPs produced by laser ablation submerged in distilled water. The size of NPs is an important parameter, especially for their usage and applications. The size and shape were analyzed by SEM, energy-dispersive X-ray analysis (EDAX) was applied to evaluate the oxidation and elemental composition of the titanium NPs, and XRD was used to evaluate the phase composition and the peaks of both titanium and some elements. The SEM analysis showed that the synthesized NP sizes ranged between 15 and 35 nm, suitable for application in various fields such as the annihilation of cancerous cells.
Keywords: nanoparticles, laser ablation, titanium NPs, applications
Procedia PDF Downloads 139
4082 What the Future Holds for Social Media Data Analysis
Authors: P. Wlodarczak, J. Soar, M. Ally
Abstract:
The dramatic rise in the use of Social Media (SM) platforms such as Facebook and Twitter provides access to an unprecedented amount of user data. Users may post reviews of products and services they bought, write about their interests, share ideas or give their opinions and views on political issues. There is a growing interest among organisations in the analysis of SM data for detecting new trends, obtaining user opinions on their products and services, or finding out about their online reputations. A recent research trend in SM analysis is making predictions based on sentiment analysis of SM. Often, indicators of historic SM data are represented as time series and correlated with a variety of real-world phenomena such as the outcome of elections, the development of financial indicators, box office revenue and disease outbreaks. This paper examines the current state of research in the area of SM mining and predictive analysis and gives an overview of the analysis methods using opinion mining and machine learning techniques.
Keywords: social media, text mining, knowledge discovery, predictive analysis, machine learning
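The core of the predictive step described here is correlating a sentiment time series with a lagged real-world series. A minimal sketch on synthetic placeholder data (the lag, series and names are illustrative assumptions):

```python
# Correlate a daily sentiment index with a real-world indicator at a fixed lag.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
sentiment = rng.normal(0, 1, 365).cumsum()                  # hypothetical daily sentiment index
indicator = np.roll(sentiment, 3) + rng.normal(0, 2, 365)   # indicator trailing sentiment by 3 days

lag = 3
r, p = pearsonr(sentiment[:-lag], indicator[lag:])          # align day t with day t + lag
print(f"lag-{lag} correlation r = {r:.2f} (p = {p:.1e})")
```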
Procedia PDF Downloads 423
4081 Endometrial Ablation and Resection Versus Hysterectomy for Heavy Menstrual Bleeding: A Systematic Review and Meta-Analysis of Effectiveness and Complications
Authors: Iliana Georganta, Clare Deehan, Marysia Thomson, Miriam McDonald, Kerrie McNulty, Anna Strachan, Elizabeth Anderson, Alyaa Mostafa
Abstract:
Context: A meta-analysis of randomized controlled trials (RCTs) comparing hysterectomy versus endometrial ablation and resection in the management of heavy menstrual bleeding (HMB). Objective: To evaluate the clinical efficacy, satisfaction rates and adverse events of hysterectomy compared to more minimally invasive techniques in the treatment of HMB. Evidence Acquisition: A literature search was performed for all RCTs and quasi-RCTs comparing hysterectomy with endometrial ablation, endometrial resection, or both. The search had no language restrictions and was last updated in June 2020 using MEDLINE, EMBASE, the Cochrane Central Register of Clinical Trials, PubMed, Google Scholar, PsycINFO, ClinicalTrials.gov and the EU Clinical Trials Register. In addition, a manual search of the abstract databases of the European Haemophilia Conference on women's health was performed, and further studies were identified from the references of acquired papers. The primary outcomes were patient-reported and objective reduction in heavy menstrual bleeding up to 2 years and after 2 years. Secondary outcomes included satisfaction rates, pain, short- and long-term adverse events, quality of life and sexual function, further surgery, duration of surgery and hospital stay, and time to return to work and normal activities. Data were analysed using RevMan software. Evidence Synthesis: 12 studies and a total of 2028 women were included (hysterectomy: n = 977 women vs endometrial ablation or resection: n = 1051 women). Hysterectomy was compared with endometrial ablation only in five studies (Lin, Dickersin, Sesti, Jain, Cooper), with endometrial resection only in five studies (Gannon, Schulpher, O'Connor, Crosignani, Zupi), and with a mixture of ablation and resection in two studies (Elmantwe, Pinion). Of the 12 studies, 10 reported women's perception of bleeding symptoms as improved. Meta-analysis showed that women in the hysterectomy group were more likely to show improvement in bleeding symptoms when compared with endometrial ablation or resection at up to 2-year follow-up (RR 0.75, 95% CI 0.71 to 0.79, I² = 95%). Objective outcomes of improvement in bleeding also favoured hysterectomy. Patient satisfaction was higher after hysterectomy within the 2-year follow-up (RR 0.90, 95% CI 0.86 to 0.94, I² = 58%); however, there was no significant difference between the two groups at more than 2 years of follow-up. Sepsis (RR 0.03, 95% CI 0.002 to 0.56; 1 study), wound infection (RR 0.05, 95% CI 0.01 to 0.28, I² = 0%, 3 studies) and urinary tract infection (UTI) (RR 0.20, 95% CI 0.10 to 0.42, I² = 0%, 4 studies) all favoured hysteroscopic techniques. Fluid overload (RR 7.80, 95% CI 2.16 to 28.16, I² = 0%, 4 studies) and perforation (RR 5.42, 95% CI 1.25 to 23.45, I² = 0%, 4 studies), however, favoured hysterectomy in the short term. Conclusions: This meta-analysis has demonstrated that endometrial ablation and endometrial resection are both viable options when compared with hysterectomy for the treatment of heavy menstrual bleeding. Hysteroscopic procedures had better outcomes in the short term, with fewer adverse events including wound infection, UTI and sepsis. Hysterectomy performed better on longer-term measures such as recurrence of symptoms, overall satisfaction at two years, and the need for further treatment or surgery.
Keywords: menorrhagia, hysterectomy, ablation, resection
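The pooling itself was run in RevMan; for orientation, the generic fixed-effect inverse-variance pooling of risk ratios on the log scale looks like the sketch below. The per-study numbers are made up, not taken from the review.

```python
# Fixed-effect inverse-variance pooling of risk ratios (illustrative values).
import numpy as np

rr = np.array([0.72, 0.78, 0.74])       # hypothetical per-study risk ratios
ci_hi = np.array([0.85, 0.90, 0.83])    # hypothetical upper 95% CI bounds

log_rr = np.log(rr)
se = (np.log(ci_hi) - log_rr) / 1.96    # back out SE from the CI half-width
w = 1.0 / se**2                         # inverse-variance weights

pooled = np.exp(np.sum(w * log_rr) / np.sum(w))
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = np.exp(np.log(pooled) + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled RR {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```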
Procedia PDF Downloads 155
4080 Measuring Text-Based Semantics Relatedness Using WordNet
Authors: Madiha Khan, Sidrah Ramzan, Seemab Khan, Shahzad Hassan, Kamran Saeed
Abstract:
Measuring semantic similarity between texts means calculating their semantic relatedness using various techniques. Our web application (Measuring Relatedness of Concepts, MRC) allows the user to input two text corpora and obtain a semantic similarity percentage between them using WordNet. Our application goes through five stages for the computation of semantic relatedness: Preprocessing (extracts keywords from content), Feature Extraction (classification of words into parts of speech), Synonym Extraction (retrieves synonyms for each keyword), Measuring Similarity (similarity is measured using keywords and synonyms) and Visualization (graphical representation of the similarity measure). Hence, the user can measure similarity on the basis of features as well. The end result is a percentage score and the word(s) which form the basis of similarity between both texts, with the use of different tools on the same platform. In future work, we plan a Web-as-a-live-corpus application that provides a simpler and more user-friendly tool to compare documents and extract useful information.
Keywords: Graphviz representation, semantic relatedness, similarity measurement, WordNet similarity
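A minimal sketch of the "Measuring Similarity" stage using NLTK's WordNet interface (the paper's exact tooling beyond WordNet is not specified, so the similarity measure and API choice here are assumptions). Requires nltk.download("wordnet").

```python
# Keyword-pair relatedness via WordNet: take the best Wu-Palmer score
# over all synset pairs of the two words.
from nltk.corpus import wordnet as wn

def best_similarity(word_a, word_b):
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in wn.synsets(word_a)
              for s2 in wn.synsets(word_b)]
    return max(scores, default=0.0)

print(best_similarity("car", "automobile"))  # 1.0: they share a synset
print(best_similarity("car", "tree"))        # much lower
```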
Procedia PDF Downloads 238
4079 Fabrication of Highly-Ordered Interconnected Porous Polymeric Particles and Structures
Authors: Mohammad Alroaithi
Abstract:
Porous polymeric materials have attracted great attention due to their distinctive porous structure within a polymer matrix. They are characterised by the presence of external pores on the surface as well as inner interconnected windows. Conventional techniques for producing porous polymeric materials encounter a major challenge in controlling the properties of the resultant structures, including morphology, pore and cavity size, and porosity. Herein, we present a facile and versatile microfluidic technique for the fabrication of uniform porous polymeric structures with highly ordered and well-defined interconnected windows. The resulting porous structures can be either microparticles or foams. In both cases, the microfluidic platform is first used to produce monodisperse emulsions, which are then consolidated into porous structures through UV photopolymerisation. The morphology, pore and cavity size, and porosity of the structures can be precisely manipulated by the flow rate. The proposed strategy might provide a key advantage for the fabrication of uniform porous materials over many existing technologies.
Keywords: polymer, porous particles, microfluidics, porous structures
Procedia PDF Downloads 186
4078 Chromium Reduction Using Bacteria: Bioremediation Technologies
Authors: Baljeet Singh Saharan
Abstract:
Bioremediation is the demand of the day. Tannery and textile effluents/wastewaters are heavily polluted due to the presence of hexavalent chromium. Methodologies used in the present investigation include isolation, cultivation and purification of the bacterial strain, followed by characterization techniques and 16S rRNA sequencing. An efficient bacterial strain capable of reducing hexavalent chromium was obtained; the strain can be used for bioremediation of industrial effluents containing hexavalent Cr. A gram-negative, rod-shaped, yellowish-pigment-producing bacterial strain was isolated from tannery effluent using nutrient agar. The 16S rRNA gene sequence similarity indicated that isolate SA13A is associated with the genus Luteimonas (99%). This isolate has been found to reduce hexavalent chromium Cr(VI) (100 mg L⁻¹) completely (100%) in 16 h. Growth conditions were optimized for Cr(VI) reduction, with maximum reduction observed at a temperature of 37 °C and pH 8.0. Additionally, Luteimonas aestuarii SA13A showed resistance against various heavy metals, including Cr⁶⁺, Cr³⁺, Cu²⁺, Zn²⁺, Co²⁺, Ni²⁺ and Cd²⁺. Hence, Luteimonas aestuarii SA13A could be used as a potent Cr(VI)-reducing strain as well as a significant bioremediator in heavy-metal-contaminated sites.
Keywords: bioremediation, chromium, eco-friendly, heavy metals
Procedia PDF Downloads 465
4077 Ta-DAH: Task Driven Automated Hardware Design of Free-Flying Space Robots
Authors: Lucy Jackson, Celyn Walters, Steve Eckersley, Mini Rai, Simon Hadfield
Abstract:
Space robots will play an integral part in exploring the universe and beyond. A correctly designed space robot will facilitate on-orbit assembly (OOA), satellite servicing and active debris removal (ADR). However, problems arise when trying to design such a system, as it is a highly complex, multidimensional problem on which there is little research. Current design techniques are slow and specific to terrestrial manipulators. This paper presents a solution to the slow speed of robotic hardware design and generalizes the technique to free-flying space robots. It presents Ta-DAH Design, an automated design approach that utilises a multi-objective cost function in an iterative and automated pipeline. The design approach leverages prior knowledge and facilitates the faster output of optimal designs. The result is a system that can optimise the size of the base spacecraft, manipulator and some key subsystems for any given task. Presented in this work are the methodology behind Ta-DAH Design and a number of optimal space robot designs.
Keywords: space robots, automated design, on-orbit operations, hardware design
Procedia PDF Downloads 73
4076 Refined Procedures for Second Order Asymptotic Theory
Authors: Gubhinder Kundhi, Paul Rilstone
Abstract:
Refined procedures for higher-order asymptotic theory for non-linear models are developed. These include a new method for deriving stochastic expansions of arbitrary order, new methods for evaluating the moments of polynomials of sample averages, and a new method for deriving the approximate moments of the stochastic expansions; an application of these techniques to obtaining improved inferences under the weak instruments problem is considered. It is well established that Instrumental Variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, quite biased in finite samples. In our application, finite-sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and other methods commonly used in finite samples, such as the bootstrap.
Keywords: edgeworth expansions, higher order asymptotics, saddlepoint expansions, weak instruments
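For orientation, the simplest object of this kind is the first-order Edgeworth expansion of a standardised sample mean; the paper's expansions for IV estimators are of the same type but higher order and more involved (this standard form is shown as context, not as the paper's own result):

```latex
% First-order Edgeworth expansion; \kappa_3 is the standardised third
% cumulant (skewness), \Phi and \phi the standard normal cdf and pdf.
P\!\left(\frac{\sqrt{n}\,(\bar{X}_n-\mu)}{\sigma}\le x\right)
  = \Phi(x) - \phi(x)\,\frac{\kappa_3\,(x^2-1)}{6\sqrt{n}} + O\!\left(n^{-1}\right)
```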
Procedia PDF Downloads 277
4075 Knowledge and Adoption of Agricultural Biotechnology among Small-Scale Farmers in Taraba State Nigeria
Authors: A. H. Paul, L. J. Gizaki, E. P. Ejimbi
Abstract:
The study was carried out to determine the level of knowledge and adoption of agricultural biotechnology in Taraba State. Purposive and simple random sampling techniques were used to select respondents, and questionnaires were administered to 90 of them. Data were analyzed using descriptive and inferential statistics. The results showed that the majority (73.3%) of the respondents were small-scale farmers, whereas 24.4 percent were engaged in secondary occupations. The mean farm size was 1-5 ha. The majority (72.2%) had one form of formal education or another, and 84.4 percent had been farming for at least 10 years. The mean household size was 6-10 persons. Most (97.8%) of the respondents were knowledgeable about biotechnology, and 70.1 percent reported that the biotechnology products they had adopted were very good for animal and human consumption. Pearson's correlation (r = 0.699) was significant at the 0.01 alpha level; therefore, the hypothesis that there is no significant relationship between knowledge and adoption of agricultural biotechnology was rejected. It was concluded that the agricultural biotechnologies that were adopted were very safe for animals, humans, and the environment. It was recommended that the government employ more extension agents to help educate farmers about agricultural biotechnology.
Keywords: agricultural, adoption, biotechnology, knowledge
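The reported association (r = 0.699, significant at the 0.01 level) corresponds to a standard Pearson correlation test; a minimal sketch on made-up knowledge and adoption scores for 90 respondents:

```python
# Pearson correlation and its p-value for a knowledge-adoption association.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
knowledge = rng.integers(1, 6, 90)             # hypothetical 1-5 knowledge scores
adoption = knowledge + rng.normal(0, 1.2, 90)  # adoption tracking knowledge plus noise

r, p = pearsonr(knowledge, adoption)
print(f"r = {r:.3f}, p = {p:.4f}")  # reject H0 of no association when p < 0.01
```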
Procedia PDF Downloads 138
4074 An Experimental Study of the Parameters Affecting the Compression Index of Clay Soil
Authors: Rami Mahmoud Bakr
Abstract:
The constant rate of strain (CRS) test is a rapid technique that effectively measures specific properties of cohesive soil, including the rate of consolidation, hydraulic conductivity, compressibility, and stress history. Its simple operation and frequent readings enable efficient definition, especially of the compression curve. However, its limitations include an inability to handle strain-rate-dependent soil behavior, initial transient conditions, and pore pressure evaluation errors. There are currently no effective techniques for interpreting CRS data. In this study, experiments were performed to evaluate the effects of different parameters on CRS results. Extensive tests were performed on two types of clay to analyze the soil behavior during strain consolidation at a constant rate. The results were used to evaluate the transient conditions and pore pressure system.
Keywords: constant rate of strain (CRS), resedimented boston blue clay (RBBC), resedimented vicksburg buckshot clay (RVBC), compression index
Procedia PDF Downloads 42
4073 Vital Pulp Therapy: A Paradigm Shift in Treating Irreversible Pulpitis
Authors: Fadwa Chtioui
Abstract:
Vital Pulp Therapy (VPT) is nowadays challenging the deep-rooted dogma that root canal treatment is the only therapeutic option for permanent teeth diagnosed with irreversible pulpitis or carious pulp exposure. Histologic and clinical research has shown that compromised dental pulp can be treated without the full removal or excavation of all healthy pulp, and the outcome of partial or full pulpotomy followed by a tricalcium-silicate-based dressing seems to show promising results in maintaining pulp vitality and preserving affected teeth in the long term. By reviewing recent advances in the techniques of VPT and their clinical effectiveness and safety in permanent teeth with irreversible pulpitis, this work provides a new understanding of pulp pathophysiology and defense mechanisms and will reform dental practitioners' decision-making in treating irreversible pulpitis, from root canal therapy to vital pulp therapy, by taking advantage of the biological effects of tricalcium silicate materials.
Keywords: irreversible pulpitis, vital pulp therapy, pulpotomy, Tricalcium Silicate
Procedia PDF Downloads 60
4072 User Intention Generation with Large Language Models Using Chain-of-Thought Prompting
Authors: Gangmin Li, Fan Yang
Abstract:
Personalized recommendation is crucial for any recommendation system, and one of its key techniques is identifying user intention. Traditional user intention identification uses the user's selection when facing multiple items. This modeling relies primarily on historical behaviour data, resulting in challenges such as cold start, unintended choices, and failure to capture intention when items are new. Motivated by recent advancements in Large Language Models (LLMs) like ChatGPT, we present an approach to user intention identification that embraces LLMs with Chain-of-Thought (CoT) prompting. We use the initial user profile as input to the LLM and design a collection of prompts to align the LLM's response across various recommendation tasks encompassing rating prediction, search and browse history, user clarification, etc. Our tests on real-world datasets demonstrate improvements in recommendation through explicit user intention identification, with that intention merged into a user model.
Keywords: personalized recommendation, generative user modelling, user intention identification, large language models, chain-of-thought prompting
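A sketch of the CoT prompting idea: the user profile and recent behaviour go into a prompt that asks the model to reason step by step before stating an intention. The OpenAI client is used purely as an example backend; the model name, profile fields and prompt wording are assumptions, not the authors' setup.

```python
# Elicit a user intention from a profile via a chain-of-thought prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

profile = {"age_group": "25-34",
           "recent_searches": ["trail shoes", "gore-tex jacket"],
           "last_purchases": ["running socks"]}

prompt = (
    f"User profile: {profile}\n"
    "Let's think step by step: 1) what activity do the searches suggest, "
    "2) what gap remains after the purchases, 3) state the user's current "
    "shopping intention in one short phrase."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # intention text to merge into the user model
```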
Procedia PDF Downloads 54
4071 Board Gender Diversity and Firm Sustainable Investment: An Empirical Evidence
Authors: Muhammad Atif, M. Samsul Alam
Abstract:
The purpose of this study is to investigate the effects of boardroom gender diversity on firm sustainable investment. We test the extent to which sustainable investment is affected by the presence of female directors on U.S. corporate boards. Using data on S&P 1500 indexed firms collected from Bloomberg and covering the period 2004-2016, we estimate a baseline model of the effects of boardroom gender diversity on firm sustainable investment. We find a positive relationship between board gender diversity and sustainable investment. We also find that boards with two or more women have a pronounced impact on sustainable investment, consistent with critical mass theory. Female independent directors have a stronger impact on sustainable investment than female executive directors. Our findings are robust to different identification and estimation techniques. The study offers another perspective on the ongoing debate in the social responsibility literature about the accountability relationships between business and society.
Keywords: sustainable investment, gender diversity, environmental protection, social responsibility
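The abstract does not give the exact specification, so the sketch below only shows the generic shape of such a baseline: sustainable investment regressed on female-director presence with firm and year fixed effects and clustered errors, on synthetic data with illustrative variable names.

```python
# Hypothetical panel baseline: sustain_inv ~ pct_female + controls + firm/year FE.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "firm": rng.integers(0, 50, n),
    "year": rng.integers(2004, 2017, n),
    "pct_female": rng.uniform(0, 0.4, n),   # fraction of female directors
    "size": rng.normal(8, 1, n),            # e.g. log assets
})
df["sustain_inv"] = 0.5 * df["pct_female"] + 0.1 * df["size"] + rng.normal(0, 0.2, n)

model = smf.ols("sustain_inv ~ pct_female + size + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})  # cluster SEs by firm
print(model.params["pct_female"])
```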
Procedia PDF Downloads 162
4070 A Multi Agent Based Protection Scheme for Smart Distribution Network in Presence of Distributed Energy Resources
Authors: M. R. Ebrahimi, B. Mahdaviani
Abstract:
Conventional electric distribution systems are radial in nature, supplied at one end through a main source. These networks generally have a simple protection system, usually implemented using fuses, reclosers, and over-current relays. Recently, great attention has been paid to applying distributed energy resources (DERs) throughout electric distribution systems. The presence of such generation in a network leads to a loss of coordination among protection devices. Therefore, it is desirable to develop an algorithm capable of protecting distribution systems that include DERs. On the other hand, the smart grid brings opportunities to the power system: fast advancement in communication and measurement techniques accelerates the development of multi-agent systems (MAS). So, in this paper, a new approach for the protection of distribution networks in the presence of DERs is presented based on MAS. The proposed scheme has been implemented on a sample 27-bus distribution network.
Keywords: distributed energy resource, distribution network, protection, smart grid, multi agent system
Procedia PDF Downloads 608
4069 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out, as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
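A sketch of the volume standardisation described in the Methods: resample to 1 mm isotropic spacing, clip to the (−1000, 400) HU window, resize to 128 × 128 × 60 and zero-centre. The library choices (NumPy/SciPy) and the synthetic input are mine, not necessarily the authors'.

```python
# Standardise a CT volume read from DICOM to a fixed network input.
import numpy as np
from scipy import ndimage

def preprocess(volume_hu, spacing_mm, target_shape=(128, 128, 60)):
    # 1) resample to 1 mm x 1 mm x 1 mm (zoom factor = source spacing in mm)
    iso = ndimage.zoom(volume_hu, zoom=spacing_mm, order=1)
    # 2) clip intensities to the lung-relevant HU window
    iso = np.clip(iso, -1000, 400)
    # 3) resize to the fixed network input shape
    factors = [t / s for t, s in zip(target_shape, iso.shape)]
    iso = ndimage.zoom(iso, zoom=factors, order=1)
    # 4) normalise to [0, 1] and zero-centre
    iso = (iso + 1000) / 1400.0
    return iso - iso.mean()

vol = np.random.randint(-1000, 400, size=(512, 512, 90)).astype(np.float32)
print(preprocess(vol, spacing_mm=(0.7, 0.7, 2.5)).shape)  # -> (128, 128, 60)
```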
Procedia PDF Downloads 88
4068 A Review of Feature Selection Methods Implemented in Neural Stem Cells
Authors: Natasha Petrovska, Mirjana Pavlovic, Maria M. Larrondo-Petrie
Abstract:
Neural stem cells (NSCs) are multi-potent, self-renewing cells that generate new neurons. Three subtypes of NSCs can be distinguished according to the stages of the NSC lineage: quiescent neural stem cells (qNSCs), activated neural stem cells (aNSCs) and neural progenitor cells (NPCs), but their gene expression signatures are not yet fully understood. Single-cell examinations have started to elucidate the complex structure of NSC populations. Nevertheless, there is a lack of thorough molecular interpretation of NSC lineage heterogeneity and an increasing need for tools to analyze and improve the efficiency and correctness of single-cell sequencing data. Feature selection and ordering can identify and classify the gene expression signatures of these subtypes and can discover novel subpopulations during the NSC activation and differentiation processes. The aim here is to review the implementation of feature selection techniques on NSC subtypes and the classification techniques that have been used for the identification of gene expression signatures.
Keywords: feature selection, feature similarity, neural stem cells, genes, feature selection methods
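As a generic illustration of the filter-style feature selection the review covers (not a method from any particular reviewed paper): rank genes by ANOVA F-score against the qNSC/aNSC/NPC labels and keep the top k as a candidate signature. Data here are random placeholders for a real single-cell expression matrix.

```python
# Filter feature selection: top-k genes by F-score for a 3-class label.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(4)
X = rng.lognormal(size=(300, 2000))   # 300 cells x 2000 genes (placeholder counts)
y = rng.integers(0, 3, 300)           # 0 = qNSC, 1 = aNSC, 2 = NPC labels

selector = SelectKBest(f_classif, k=50).fit(X, y)
signature_genes = np.flatnonzero(selector.get_support())
print(signature_genes[:10])           # indices of top-ranked candidate genes
```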
Procedia PDF Downloads 152
4067 Application of the Fourier Transform for Dynamic Control of Structures with Global Positioning System
Authors: J. M. de Luis Ruiz, P. M. Sierra García, R. P. García, R. P. Álvarez, F. P. García, E. C. López
Abstract:
Given the evolution of viaducts, structural health monitoring requires more complex techniques to define their state. Two alternatives can be distinguished: experimental and operational modal analysis. Although accelerometers and the Global Positioning System (GPS) have been applied for the monitoring of structures under exploitation, dynamic monitoring during the construction stage is not common. This research analyzes whether GPS data can be applied to certain dynamic geometric controls of evolving structures. The fundamentals of this work were applied to the New Bridge of Cádiz (Spain), a worldwide milestone in bridge building. GPS data were recorded at an interval of 1 second during the erection of segments and transformed to the frequency domain with the Fourier transform. The vibration period and amplitude were contrasted with those provided by the finite element model, with differences of less than 10%, which is admissible. This process provides a vibration record of the structure with GPS, avoiding specific equipment.
Keywords: Fourier transform, global position system, operational modal analysis, structural health monitoring
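A minimal sketch of the frequency-domain step: a 1 Hz GPS displacement record is transformed with the FFT and the dominant vibration period is read off the spectrum peak. The synthetic record stands in for the bridge data (a 1 Hz sampling rate can only resolve frequencies below 0.5 Hz).

```python
# Recover the dominant vibration frequency from a 1 Hz displacement record.
import numpy as np

fs = 1.0                                     # GPS sampling rate, 1 sample/s
t = np.arange(0, 600, 1 / fs)                # 10-minute record
x = 5e-3 * np.sin(2 * np.pi * 0.08 * t)      # 12.5 s vibration period, 5 mm amplitude
x += np.random.default_rng(5).normal(0, 1e-3, t.size)  # GPS measurement noise

spec = np.fft.rfft(x - x.mean())
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(np.abs(spec[1:])) + 1]  # skip the DC bin
print(f"dominant frequency {peak:.3f} Hz -> period {1 / peak:.1f} s")
```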
Procedia PDF Downloads 246
4066 The Temporal Dimension of Narratives: A Construct of Qualitative Time
Authors: Ani Thomas
Abstract:
Every narrative is a temporal construct, and every narrative creates a qualitative experience of time for the viewer. The paper argues for the concept of a qualitative time that emerges from the interaction between the narrative and the audience, and it challenges the conventional understanding of narrative time as either story time, real time or discourse time. Looking at narratives through the medium of cinema, the study examines how narratives create and manipulate duration, or durée, the qualitative experience of time as theorized by Henri Bergson. The paper further analyzes how cinema, and by extension narratives, is nothing but durée, and the filmmaker the artist of durée, who shapes and manipulates the perception and emotions of the viewer through the construction and manipulation of durée. The paper draws on cinematic works to demonstrate how filmmakers use techniques such as editing, sound, and compositional and production design to create modes of durée that challenge, amplify or unsettle the viewer's sense of time. Bringing together the viewer's durée and exploring its interaction with the narrative construct, the paper explores the emergence of a new qualitative time, the narrative durée, that defines the audience's experience.
Keywords: cinema, time, bergson, duree
Procedia PDF Downloads 148
4065 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, and with that increase the computation speed becomes the critical factor. Data reduction is one of the solutions to this problem, and removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects; for a given decision table, there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input, and the output of the algorithm is a superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, which adds relatively little time to the whole process. The algorithm presented in this paper was implemented on a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
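The two-stage flow is easy to see in software. The sketch below is a plain-Python rendering for orientation only (the paper's contribution is the FPGA datapath); stage 2 here counts attribute frequency over the still-undiscerned pairs, which is one reading of "most common attributes".

```python
# Two-stage superreduct: core from singleton discernibility entries,
# then greedy enrichment by attribute frequency.
from itertools import combinations

def two_stage_superreduct(objects, decisions):
    n_attr = len(objects[0])
    # discernibility entries for all object pairs with different decisions
    pairs = [{a for a in range(n_attr) if x[a] != y[a]}
             for (x, dx), (y, dy) in combinations(zip(objects, decisions), 2)
             if dx != dy]
    pairs = [p for p in pairs if p]  # drop inconsistent pairs (nothing discerns them)
    # stage 1: core = attributes appearing as singleton entries
    red = {next(iter(p)) for p in pairs if len(p) == 1}
    # stage 2: greedily cover the remaining pairs
    uncovered = [p for p in pairs if not p & red]
    while uncovered:
        counts = [sum(a in p for p in uncovered) for a in range(n_attr)]
        best = counts.index(max(counts))
        red.add(best)
        uncovered = [p for p in uncovered if best not in p]
    return red

objs = [(0, 1, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
print(two_stage_superreduct(objs, decisions=[0, 1, 1, 1]))  # -> {2}
```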
Procedia PDF Downloads 219
4064 Assessment of Biofilm Production Capacity of Industrially Important Bacteria under Electroinductive Conditions
Authors: Omolola Ojetayo, Emmanuel Garuba, Obinna Ajunwa, Abiodun A. Onilude
Abstract:
Introduction: A biofilm is a functional community of microorganisms associated with a surface or an interface. The adherent cells become embedded within an extracellular matrix composed of polymeric substances; i.e., biofilms refer to biological deposits consisting of both microbes and their extracellular products on biotic and abiotic surfaces. Despite their detrimental effects in medicine, biofilms as natural cell immobilization have found several applications in biotechnology, such as in the treatment of wastewater, bioremediation and biodegradation, desulfurization of gas, and conversion of agro-derived materials into alcohols and organic acids. The means of enhancing immobilized cells have largely been chemical-inductive, which affects the medium composition and final product. Physical factors, including electrical, magnetic and electromagnetic flux, have shown potential for enhancing biofilms, depending on the bacterial species, the nature and intensity of the emitted signals, the duration of exposure, and the substratum used. However, the concept of cell immobilisation by electrical and magnetic induction is still underexplored. Methods: To assess the effects of physical factors on biofilm formation, six American Type Culture Collection strains (Acetobacter aceti ATCC15973, Pseudomonas aeruginosa ATCC9027, Serratia marcescens ATCC14756, Gluconobacter oxydans ATCC19357, Rhodobacter sphaeroides ATCC17023, and Bacillus subtilis ATCC6633) were used. Standard culture techniques for bacterial cells were adopted. The natural autoimmobilisation potential of the test bacteria was assessed by simple biofilm ring formation in tubes, while crystal violet binding assay techniques were adopted for characterising biofilm quantity. Electroinduction of bacterial cells by direct current (DC) application in cell broth, static magnetic field exposure, and electromagnetic flux was carried out, and autoimmobilisation of cells in a biofilm pattern was determined on the various substrata tested, including wood, glass, steel, polyvinyl chloride (PVC) and polyethylene terephthalate. The Biot-Savart law was used to quantify magnetic field intensity, and statistical analyses of the data obtained were carried out using analysis of variance (ANOVA) as well as other statistical tools. Results: Biofilm formation by the selected test bacteria was enhanced by the physical factors applied. Electromagnetic induction had the greatest effect on biofilm formation, with magnetic induction producing the least effect across all substrata used. Microbial cell-cell communication could be a possible means via which physical signals affected the cells in a polarisable manner. Conclusion: The enhancement of biofilm formation by bacteria using physical factors shows that their inherent capability as a cell immobilization method can be further optimised for industrial applications. A possible relationship between the presence of voltage-dependent channels, mechanosensitive channels, and bacterial biofilms could shed more light on this phenomenon.
Keywords: bacteria, biofilm, cell immobilization, electromagnetic induction, substrata
Procedia PDF Downloads 189
4063 A Soft Switching PWM DC-DC Boost Converter with Increased Efficiency by Using ZVT-ZCT Techniques
Authors: Yakup Sahin, Naim Suleyman Ting, Ismail Aksoy
Abstract:
In this paper, an improved active snubber cell is proposed for the soft switching (SS) family of pulse width modulation (PWM) DC-DC converters. The improved snubber cell provides zero-voltage transition (ZVT) turn-on and zero-current transition (ZCT) turn-off for the main switch. The snubber cell decreases EMI noise and operates with SS over a wide range of line and load voltages. Besides, all of the semiconductor devices in the converter operate with SS. There is no additional voltage or current stress on the main devices. Additionally, no extra voltage stress occurs on the auxiliary switch, and its current stress is at an acceptable value. The improved converter has a low cost and a simple structure. The theoretical analysis of the converter is presented, and the operating states are given in detail. The experimental results were obtained from a 500 W, 100 kHz prototype, and it is observed that they agree closely with the theoretical analysis.
Keywords: active snubber cells, DC-DC converters, zero-voltage transition, zero-current transition
Procedia PDF Downloads 1020
4062 Robust Fuzzy PID Stabilizer: Modified Shuffled Frog Leaping Algorithm
Authors: Oveis Abedinia, Noradin Ghadimi, Nasser Mikaeilvand, Roza Poursoleiman, Asghar Poorfaraj
Abstract:
In this paper, a robust fuzzy Proportional Integral Derivative (PID) controller is applied to a multi-machine power system based on the Modified Shuffled Frog Leaping (MSFL) algorithm. This newly proposed controller is more efficient because it copes with oscillations and different operating points. In this strategy, the gains of the PID controller are optimized using the proposed technique. The nonlinear problem is formulated as an optimization problem over wide ranges of operating conditions using the MSFL algorithm. The simulation results demonstrate the effectiveness, good robustness and validity of the proposed method through performance indices such as ITAE and FD under wide ranges of operating conditions, in comparison with TS and GSA techniques. The single-machine infinite bus system and the New England 10-unit, 39-bus standard power system are employed to illustrate the performance of the proposed method.
Keywords: fuzzy PID, MSFL, multi-machine, low frequency oscillation
Procedia PDF Downloads 430
4061 A Semi-supervised Classification Approach for Trend Following Investment Strategy
Authors: Rodrigo Arnaldo Scarpel
Abstract:
Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following, one must respond to market movements that have recently happened and that are currently happening, rather than to what will happen. The optimum in a trend following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify fluctuations of short, mid and long terms and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules that follow the trend following philosophy. Recently, some works have applied machine learning techniques for trade rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the usage of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated. Semi-supervised learning is used for model training when only part of the data is labeled, and semi-supervised classification aims to train a classifier from both the labeled and unlabeled data such that it is better than a supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information was employed, including the open, high, low and closing values and volume from January 1, 2000 to December 31, 2022, of the São Paulo Exchange Composite index (IBOVESPA). Over this time period, consistent changes in price, upwards or downwards, were visually identified for assigning labels, leaving the rest of the days (when there is not a consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core idea is to use unlabeled data to generate pseudo-labels for supervised training. To evaluate the achieved results, the annualized return and excess return and the Sortino and Sharpe indicators were considered. Over the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation
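A minimal sketch of the pseudo-label strategy described above: fit on the few visually labelled trend days, then let confident predictions on unlabelled days join the training set via self-training. Features and data are synthetic stand-ins for the technical-indicator inputs computed from IBOVESPA quotes; the classifier choice is an assumption.

```python
# Pseudo-label (self-training) trend classification for signal generation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(5000, 6))                   # e.g. MA slopes, MACD, RSI per day
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = up-trend, 0 = down-trend
y_semi = y.copy()
y_semi[rng.random(5000) > 0.1] = -1              # keep labels on ~10% of days; -1 = unlabelled

model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200),
                               threshold=0.9).fit(X, y_semi)  # pseudo-label confident days
signal = model.predict(X[-1:])                   # 1 -> buy signal, 0 -> sell signal
print("today's signal:", "buy" if signal[0] == 1 else "sell")
```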
Procedia PDF Downloads 89