1335 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act
Authors: Maria Jędrzejczak, Patryk Pieniążek
Abstract:
The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution” driven by massive data resources linked to powerful algorithms and powerful computing capacity. This is closely linked to technological developments in artificial intelligence, which has prompted analysis covering the legal environment as well as the economic, social, and ethical impact. The discussion on the regulation of artificial intelligence is among the most serious currently held at both European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years, and it would be impossible for artificial intelligence to function without processing large amounts of data, both personal and non-personal. The main driving forces behind the current development of artificial intelligence are advances in computing and the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly those that rely on model training. The use of computers and artificial intelligence technology increases the speed and efficiency of the actions taken, but it also creates security risks of unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence therefore requires analysis of its impact on the regulation on personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for processing personal data in artificial intelligence systems.
The adopted axis of consideration is a preliminary assessment of two issues: 1) what principles of data protection should be applied when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches in such systems is regulated. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded. Changes may also be required in the provisions assigning liability for breaches of personal data processed in artificial intelligence systems. The research process concerns the identification of areas of personal data protection that are particularly important, and may require re-regulation, due to the introduction of the proposed legal regulation on artificial intelligence. The main question the authors want to answer is how the European Union's regulation against data protection breaches in artificial intelligence systems is taking shape. The answer to this question will include examples to illustrate the practical implications of these legal regulations.
Keywords: data protection law, personal data, AI law, personal data breach
Procedia PDF Downloads 65
1334 Nondestructive Electrochemical Testing Method for Prestressed Concrete Structures
Authors: Tomoko Fukuyama, Osamu Senbu
Abstract:
Prestressed concrete is widely used in infrastructure such as roads and bridges. However, poor grout filling and PC steel corrosion are currently major issues in prestressed concrete structures. One obstacle to nondestructive corrosion detection of PC steel is the plastic pipe that covers the steel: the insulative property of the pipe makes nondestructive diagnosis difficult, so a practical technology to detect these defects is necessary for the maintenance of infrastructure. The goal of the research is to develop an electrochemical technique that enables internal defects to be detected nondestructively from the surface of prestressed concrete. Ideally, measurements should be conducted from the surface of structural members so that the diagnosis is nondestructive. In the present experiment, a prestressed concrete member was simplified as a layered specimen to simulate a current path between an input and an output electrode on the member surface. Specimens layered from mortar and the constituent materials of prestressed concrete (steel, polyethylene, stainless steel, or galvanized steel plates) were subjected to alternating current impedance measurement. The magnitude of the applied electric field was 0.01 V or 1 V, and the frequency range was from 10⁶ Hz to 10⁻² Hz. The frequency spectra of impedance, which relate to charge reactions activated by an electric field, were measured to clarify the effects of the material configurations and properties. In the civil engineering field, the Nyquist diagram is popular for analyzing impedance, and the shape of the plot is a good way to grasp electric relaxation. However, it is not well suited to showing the influence of the measurement frequency, which is the reciprocal of reaction time. Hence, the Bode diagram is also applied to describe charge reactions in the present paper.
From the experimental results, the alternating current impedance method appears applicable to measurements on insulative materials and, eventually, to prestressed concrete diagnosis. At the same time, the frequency spectra of impedance reflect differences in the material configuration. This is because charge mobility reflects the variety of substances, and the measuring frequency of the electric field determines the migration length of the charges under its influence. However, the method could not distinguish differences in material thickness, which suggests the difficulty of identifying the amount of an air void or a layer of corrosion product in prestressed concrete diagnosis by this technique.
Keywords: capacitance, conductance, prestressed concrete, susceptance
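The frequency-swept impedance measurement described above can be illustrated with a minimal sketch. The circuit model and component values here are assumptions chosen for illustration (a simplified Randles cell: series resistance plus a parallel resistor-capacitor pair), not the layered mortar specimens of the study; the sweep covers the same 10⁶ Hz to 10⁻² Hz range and prints the Bode data (magnitude and phase):

```python
import cmath
import math

def impedance_randles(f, r_s=100.0, r_ct=10_000.0, c_dl=1e-6):
    """Impedance of a simplified Randles cell: R_s in series with (R_ct || C_dl).
    Component values are illustrative assumptions, not measured data."""
    omega = 2 * math.pi * f
    z_c = 1 / (1j * omega * c_dl)          # capacitor impedance 1/(jωC)
    z_par = (r_ct * z_c) / (r_ct + z_c)    # parallel combination R_ct || C_dl
    return r_s + z_par

# Bode data over the frequency range used in the study (10^6 Hz down to 10^-2 Hz)
frequencies = [10 ** e for e in range(6, -3, -1)]
for f in frequencies:
    z = impedance_randles(f)
    mag = abs(z)
    phase_deg = math.degrees(cmath.phase(z))
    print(f"{f:>10.2e} Hz  |Z| = {mag:10.1f} ohm  phase = {phase_deg:7.2f} deg")
```

At low frequency the capacitor blocks and |Z| approaches R_s + R_ct; at high frequency it shorts and |Z| falls towards R_s, which is the relaxation behaviour a Nyquist or Bode plot makes visible.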
Procedia PDF Downloads 413
1333 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information, and it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime: algorithms based on artificial intelligence (AI) have proved highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used such data as evidence in court to convict a person. This research paper aims to develop a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to jointly address particular tasks and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented with the Java Agent Development Framework in Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs, and there was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510 while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
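The core task of a hash-set agent like the one timed above can be sketched in a few lines: hash each evidence file and flag any digest found in a known-file hash set. This is a hedged illustration of the general technique, not MADIK's implementation; the file names and contents are invented, not taken from the Lone Wolf dataset.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """SHA-256 digest of a byte string, as lowercase hex."""
    return hashlib.sha256(data).hexdigest()

def hash_set_agent(files: dict[str, bytes], known_bad: set[str]) -> list[str]:
    """Return names of files whose SHA-256 digest appears in the known-bad set."""
    return [name for name, data in files.items() if sha256_of(data) in known_bad]

# Toy evidence: two benign files and one whose hash is pre-registered as "bad".
evidence = {
    "report.txt": b"quarterly report",
    "tool.exe": b"malicious payload",
    "notes.md": b"meeting notes",
}
known_bad = {sha256_of(b"malicious payload")}
print(hash_set_agent(evidence, known_bad))  # flags tool.exe
```

In practice such agents compare against published hash sets (e.g. of known system or contraband files) so that an investigator can triage an image without opening every file.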
Procedia PDF Downloads 212
1332 Visitor Management in the National Parks: Recreational Carrying Capacity Assessment of Çıralı Coast, Turkey
Authors: Tendü H. Göktuğ, Gönül T. İçemer, Bülent Deniz
Abstract:
National parks, which are rich in natural and cultural resource values protected in the context of sustainable development, are among the recreation areas whose demand increases with each passing day. Increasing or unplanned recreational use negatively affects both the resource values and visitor satisfaction. The intent of national park management is to protect the natural and cultural resource values and to provide visitors with a quality recreational experience as well. In this context, current studies to improve tourism and recreation planning and visitor management have focused on recreational carrying capacity analysis. The aim of this study is to analyze the recreational carrying capacity of Çıralı Coast in the Bey Mountains Coastal National Park, to compare the analysis results with the current usage pattern, and to develop alternative management strategies. In the first phase of the study, the annual and daily visitation, the geographic, bio-physical, and managerial characteristics of the park, and the types of recreational usage and recreational areas were analyzed. In addition, ecological observations were carried out in order to determine recreation-based pressures on the ecosystems. On-site questionnaires were administered to a sample of 284 respondents in August 2015 and 2016 to collect data concerning demographics and visit characteristics. In the second phase of the study, the coastal area was separated into four usage zones, and the methodology proposed by Cifuentes (1992) was used for the capacity analyses. This method enables the calculation of physical, real, and effective carrying capacities by combining environmental, ecological, climatic, and managerial parameters in a formulation. The numbers estimated at the three levels of carrying capacity were compared to the current numbers of the national park's visitors.
In the study, it was determined that the current recreational uses in the north of the beach caused ecological pressures, and that the current numbers in the south of the beach were much higher than the estimated numbers of visitors. Based on these results, management strategies were defined and appropriate management tools were developed in accordance with them. The authors are grateful for the financial support of this project by The Scientific and Technological Research Council of Turkey (No: 114O344).
Keywords: Çıralı Coast, national parks, recreational carrying capacity, visitor management
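The Cifuentes (1992) chain of physical (PCC), real (RCC), and effective (ECC) carrying capacity mentioned above can be sketched as follows. All parameter values are illustrative assumptions, not the Çıralı data; the formulation reduces PCC by one correction factor per limiting condition and then by the available fraction of management capacity.

```python
def physical_cc(area_m2: float, m2_per_visitor: float, rotation_factor: float) -> float:
    """PCC = (usable area / space required per visitor) x daily rotation factor."""
    return (area_m2 / m2_per_visitor) * rotation_factor

def real_cc(pcc: float, correction_factors: list[float]) -> float:
    """RCC = PCC x prod(1 - Cf) over correction factors Cf in [0, 1],
    e.g. for rainfall, erosion, or temporary closures."""
    rcc = pcc
    for cf in correction_factors:
        rcc *= (1.0 - cf)
    return rcc

def effective_cc(rcc: float, management_capacity: float) -> float:
    """ECC = RCC x the fraction of management capacity actually available."""
    return rcc * management_capacity

# Hypothetical beach zone: 20,000 m2, 5 m2 per visitor, 3 visit rotations per day.
pcc = physical_cc(area_m2=20_000, m2_per_visitor=5, rotation_factor=3)  # 12,000 visits/day
rcc = real_cc(pcc, correction_factors=[0.20, 0.10])                     # about 8,640
ecc = effective_cc(rcc, management_capacity=0.5)                        # about 4,320
print(pcc, rcc, ecc)
```

Comparing ECC-level estimates against observed visitor counts, zone by zone, is what lets the study flag the southern beach as over capacity.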
Procedia PDF Downloads 274
1331 A Framework of Virtualized Software Controller for Smart Manufacturing
Authors: Pin Xiu Chen, Shang Liang Chen
Abstract:
A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules and then deploying them to a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. This study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework.
After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery through 5G communication. The communication time intervals at each stage are calculated using the C++ chrono library to measure the time difference for each command transmission. The relevant test results will be organized and displayed in the full text.
Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing
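The study times each transmission stage with the C++ chrono library; the same per-stage measurement pattern can be sketched in Python with `time.perf_counter`. The stage names and the simulated delays below are hypothetical placeholders, not measured 5G latencies:

```python
import time

def timed(stage_name: str, work) -> float:
    """Run one stage, print its duration, and return the elapsed seconds."""
    start = time.perf_counter()
    work()
    elapsed = time.perf_counter() - start
    print(f"{stage_name}: {elapsed * 1000:.2f} ms")
    return elapsed

# Simulated pipeline stages (placeholder delays standing in for real work).
stages = {
    "upload_to_edge": lambda: time.sleep(0.002),
    "interpolation": lambda: time.sleep(0.005),
    "command_downlink": lambda: time.sleep(0.002),
}
total = sum(timed(name, work) for name, work in stages.items())
print(f"round trip: {total * 1000:.2f} ms")
```

Summing the per-stage intervals gives the round-trip command latency that the framework's performance tests report.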
Procedia PDF Downloads 82
1330 The Role of the Child's Previous Inventory in Verb Overgeneralization in Spanish Child Language: A Case Study
Authors: Mary Rosa Espinosa-Ochoa
Abstract:
The study of overgeneralization in inflectional morphology provides evidence for understanding how a child's mind works when applying linguistic patterns in a novel way. High-frequency inflectional forms in the input cause inappropriate use in contexts related to lower-frequency forms. Children learn verbs as lexical items and new forms develop only gradually, around their second year: most of the utterances that children produce are closely related to what they have previously produced. Spanish has a complex verbal system that inflects for person, mood, and tense. Approximately 200 verbs are irregular, and bare roots always require an inflected form, which represents a challenge for the memory. The aim of this research is to investigate i) what kinds of overgeneralization errors children make in verb production, ii) to what extent these errors are related to verb forms previously produced, and iii) whether the overgeneralized verb components are also frequent in children’s linguistic inventory. It consists of a high-density longitudinal study of a middle-class girl (1;11,24-2;02,24) from Mexico City, whose utterances were recorded almost daily for three months to compile a unique corpus in the Spanish language. Of the 358 types of inflected verbs produced by the child, 9.11% are overgeneralizations. Not only are inflected forms (verbal and pronominal clitics) overgeneralized, but also verbal roots. Each of the forms can be traced to previous utterances, and they show that the child is detecting morphological patterns. Neither verbal roots nor inflected forms are associated with high frequency patterns in her own speech. 
For example, the child alternates the bare roots of an irregular verb, cáye-te* and cáiga-te* (“fall down”), to express the imperative of the verb cá-e-te (fall down.IMPERATIVE-PRONOMINAL.CLITIC), although cay-ó (PAST.PERF.3SG) is the most frequent form in her previous complete inventory, and the combined frequency of caer (INF), cae (PRES.INDICATIVE.3SG), and caes (PRES.INDICATIVE.2SG) is the same as that of caiga (PRES.SUBJ.1SG and 3SG). These results provide evidence that a) two forms of the same verb compete in the child's memory, and b) although the child uses her own inventory to create new forms, these forms are not necessarily frequent in her memory storage, which means that her mind is more sensitive to external stimuli. Language acquisition is a developing process, given the sensitivity of the human mind to linguistic interaction with the outside world.
Keywords: inflection, morphology, child language acquisition, Spanish
Procedia PDF Downloads 101
1329 Enhancing Tower Crane Safety: A UAV-based Intelligent Inspection Approach
Authors: Xin Jiao, Xin Zhang, Jian Fan, Zhenwei Cai, Yiming Xu
Abstract:
Tower cranes play a crucial role in the construction industry, facilitating the vertical and horizontal movement of materials and aiding in building construction, especially for high-rise structures. However, tower crane accidents can lead to severe consequences, highlighting the importance of effective safety management and inspection. This paper presents an innovative approach to tower crane inspection utilizing Unmanned Aerial Vehicles (UAVs) and an intelligent inspection app system. The system leverages UAVs equipped with high-definition cameras to conduct efficient and comprehensive inspections, reducing manual labor, inspection time, and risk. By integrating advanced technologies such as Real-Time Kinematic (RTK) positioning and digital image processing, the system enables precise route planning and the collection of safety-hazard images. A case study conducted on a construction site demonstrates the practicality and effectiveness of the proposed method, showcasing its potential to enhance tower crane safety. On-site testing of UAV intelligent inspections reveals key findings: tower crane hazards can be inspected efficiently within 30 minutes, with full-identification coverage rates of 76.3%, 64.8%, and 76.2% for major, significant, and general hazards, respectively, and preliminary-identification coverage rates of 18.5%, 27.2%, and 19%, respectively. Notably, UAVs effectively identify the various tower crane hazards, except for those requiring auditory detection. The limitations of this study involve two aspects. First, during the initial inspection, manual drone piloting is required to mark tower crane points; subsequent automated flight inspections reuse the marked route. Second, images captured by the drone require manual identification and review, which can be time-consuming for equipment management personnel, particularly when dealing with a large volume of images.
Subsequent research efforts will focus on AI training and recognition of safety-hazard images, as well as the automatic generation of inspection reports and corrective management based on the recognition results. Development in this area is in progress, and outcomes will be released in due course.
Keywords: tower crane, inspection, unmanned aerial vehicle (UAV), intelligent inspection app system, safety management
Procedia PDF Downloads 42
1328 Insectivorous Medicinal Plant Drosera Ecology and its Biodiversity Conservation through Tissue Culture and Sustainable Biotechnology
Authors: Sushil Pradhan
Abstract:
Biotechnology contributes to sustainable development in several ways, such as biofertilizer production, biopesticide production, management of environmental pollution, tissue culture, and biodiversity conservation in vitro, in vivo, and in situ. The insectivorous medicinal plant Drosera burmannii Vahl belongs to the family Droseraceae under the order Caryophyllales (Dicotyledoneae, Angiospermae), which has 31 (thirty-one) living genera and 194 species, besides 7 (seven) extinct (fossil) genera. Locally it is known as “Patkanduri” in Odia; its Hindi name is “Mukhajali” and its English name is “Sundew”. The earliest species of Drosera, Drosera indica L (Indian Sundew), was first reported in 1753 by Carolus Linnaeus. The latest species, Drosera ultramafica from Malaysia, was reported by Fleisch A, Robinson AS, McPherson S, Heinrich V, Gironella E, and Madulida DA (2011). More than 50% of Drosera species have been reported from Australia, and next to Australia is South Africa. India harbours only three species: D. indica L, Drosera burmannii Vahl, and D. peltata L. From Odisha, only D. burmannii Vahl is being reported, for the first time, from the district of Subarnapur near Sonepur (Arjunpur Reserve Forest Area). The Drosera plant is autotrophic, but to supplement its nitrogen (N2) requirement it also adopts a heterotrophic (insectivorous/carnivorous) mode of nutrition. The plant is mostly red in colour and about 20-30 cm in height, with beautiful pink or white pentamerous flowers. Plants grow luxuriantly from November to February in shady and moist places near small bodies of running water. Medicinally it is a popular herb in the locality, used by the local doctors (Kabiraj and Baidya) to treat cold and cough in children during the rainy season.
In the present field investigation, an attempt has been made to understand the unique reproductive phase and life cycle of the plant, and thereby to plan for its conservation and propagation through various techniques of tissue culture and biotechnology. More importantly, besides morphological and anatomical studies, a cytological investigation is being carried out to determine the number of chromosomes in the cell and its genomics, as there is no such report as yet for Drosera burmannii Vahl. The ecological significance and biodiversity conservation of Drosera, with special reference to energy, environmental, and chemical engineering, are discussed in the paper.
Keywords: insectivorous, medicinal, drosera, biotechnology, chromosome, genome
Procedia PDF Downloads 383
1327 A Study Investigating Word Association Behaviour in People with Acquired Language and Communication Disorders
Authors: Angela Maria Fenu
Abstract:
The aim of this study was to better characterize the nature of word association responses in people with aphasia. The participants selected for the experimental group were 4 individuals with mild Broca's aphasia. The control group consisted of 51 cognitively intact age- and gender-matched individuals. The participants were asked to perform a word association task in which they had to say the first word they thought of when hearing each cue. The cue words (n = 16) were Italian translations of the set of English cue words from a published study. The participants in the experimental group were administered the word association test every two weeks over the two-month period in which they received speech-language therapy. A combination of analytical approaches was used to measure the data. To analyse the patterns of word association responses in both groups, the nature of the relationship between cue and response was examined: responses were divided into five categories of association. To investigate the similarity between aphasic and non-aphasic subjects, the stereotypy of responses was examined. While certain stimulus words (nouns, adjectives) elicited responses from Broca's aphasics that tended to resemble those made by non-aphasic subjects, others (adverbs, verbs) tended to elicit responses different from those given by normal subjects. This suggests that some mechanisms underlying certain types of associations are degraded in aphasic individuals, while others display little evidence of disruption. The high number of paradigmatic associations given in response to a noun or an adjective might imply that the mechanisms, largely semantic, underlying paradigmatic associations are relatively preserved in Broca's aphasia, but it might also mean that some words are more easily processed depending on their grammatical class (nouns, adjectives). The most significant variation was noticed when the grammatical class of the cue word was an adverb.
Unlike the normal individuals, the experimental subjects gave the most idiosyncratic associations, which are often produced when the attempt to give a paradigmatic response fails. In turn, the failure to retrieve paradigmatic responses when the cue is an adverb might suggest that Broca's aphasics are more sensitive to this grammatical class. The findings from this study suggest that research on word associations in people with aphasia can yield important data concerning the specific lexical retrieval impairments that characterize the different types of aphasia and the treatments that might positively influence the kinds of word association responses affected by language disruption.
Keywords: aphasia therapy, clinical linguistics, word-association behaviour, mental lexicon
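One common way to quantify the response stereotypy examined above is, for each cue, the share of respondents who gave the modal (most frequent) response. This is a hedged sketch of that measure; the Italian cue words and responses below are invented for illustration, not the study's data.

```python
from collections import Counter

def stereotypy(responses_by_cue: dict[str, list[str]]) -> dict[str, float]:
    """For each cue, the proportion of respondents giving the modal response."""
    index = {}
    for cue, responses in responses_by_cue.items():
        counts = Counter(responses)
        modal_count = counts.most_common(1)[0][1]
        index[cue] = modal_count / len(responses)
    return index

# Invented example: a noun cue drawing a stereotyped paradigmatic response,
# and an adverb cue drawing scattered, idiosyncratic responses.
control_group = {
    "tavolo": ["sedia", "sedia", "sedia", "legno"],
    "lentamente": ["piano", "veloce", "tempo", "auto"],
}
print(stereotypy(control_group))  # {'tavolo': 0.75, 'lentamente': 0.25}
```

A high index marks a cue whose associations are shared across subjects; comparing the index between the aphasic and control groups, cue by cue, operationalises the similarity question the study asks.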
Procedia PDF Downloads 88
1326 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
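A minimal sketch of the Random Forest cost-overrun model described above, trained on synthetic activity-level data. The feature names and the data-generating process are assumptions for illustration (not the case-study dataset); the overrun is deliberately driven mainly by scope changes and material delays so that the feature-importance estimates mirror the drivers the study reports.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
scope_changes = rng.integers(0, 5, n)      # number of scope-of-work changes
material_delay = rng.uniform(0, 30, n)     # material delivery delay (days)
crew_size = rng.integers(5, 50, n)         # a weakly relevant feature
noise = rng.normal(0, 2, n)

# Synthetic target: overrun % driven mainly by scope changes and delays.
overrun_pct = 3.0 * scope_changes + 0.4 * material_delay + 0.02 * crew_size + noise

X = np.column_stack([scope_changes, material_delay, crew_size])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, overrun_pct)

for name, imp in zip(["scope_changes", "material_delay", "crew_size"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

The `feature_importances_` attribute is what lets the forest double as a diagnostic tool, ranking cost drivers rather than only predicting the overrun.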
Procedia PDF Downloads 57
1325 Industry Symbiosis and Waste Glass Upgrading: A Feasibility Study in Liverpool Towards Circular Economy
Authors: Han-Mei Chen, Rongxin Zhou, Taige Wang
Abstract:
Glass is widely used in everyday life, from glass bottles for beverages to architectural glass for various forms of glazing. Although the mainstream of used glass is recycled in the UK, the single-use-then-recycle procedure results in a lot of waste, as intact glass is smashed, re-melted, and remanufactured. These processes bring massive energy consumption and a huge loss of high embodied energy and economic value compared to re-use, which is better aligned with a 'zero carbon' target. As a tourism city, Liverpool has higher glass bottle consumption than most less leisure-focused cities. It is therefore vital for Liverpool to find an upgrading approach for single-use glass bottles with low carbon output. This project aims to assess the feasibility of an industrial symbiosis and upgrading framework for glass and to investigate ways of achieving it. It is significant to Liverpool's future industrial strategy, since it provides an opportunity to target post-COVID economic recovery through industrial symbiosis and upgraded waste management in Liverpool, in response to the climate emergency. In addition, it will influence local government policy on glass bottle reuse and recycling in North West England and serve as good practice to be recommended to other areas of the UK. First, a critical literature review was conducted of glass waste strategies in the UK and of industrial symbiosis practices worldwide. Second, mapping, data collection, and analysis have shown the current life cycle chain and the strong links of glass reuse and upgrading potentials, via site visits to 16 local waste recycling centres. The results of this research demonstrate the influence of key factors on the development of a circular industrial symbiosis business model for beverage glass bottles. The current waste management procedures of the glass bottle industry, its business model, supply chain, and material flow have been reviewed.
The various potential opportunities for up-valuing glass bottles have been investigated towards an industrial symbiosis in Liverpool. Finally, an up-valuing business model has been developed for an industrial symbiosis framework of glass in Liverpool. For glass bottles, there are two possibilities: 1) focusing on upgrading processes towards re-use rather than single-use and recycling, and 2) focusing on 'smart' re-use and recycling, leading to optimised values in other sectors and creating a wider industrial symbiosis for a multi-level and circular economy.
Keywords: glass bottles, industry symbiosis, smart re-use, waste upgrading
Procedia PDF Downloads 107
1324 Methodologies for Deriving Semantic Technical Information Using an Unstructured Patent Text Data
Authors: Jaehyung An, Sungjoo Lee
Abstract:
Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advances, so patent analysis has been widely used for the identification of technological trends and the formulation of technology strategies. However, identifying technological information from patent data entails limitations such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analysis based on keyword techniques. Such methods can capture technological implications in patent documents and extract keywords that indicate the important content. However, they use only simple counting of keyword frequency, so they cannot take into account the semantic relationships between keywords or semantic information such as how technologies are used in their technology area and how they affect other technologies. To automatically analyze the unstructured technological information in patents and extract its semantic content, the text should be transformed into an abstracted form that includes the key technological concepts. The specific sentence structure 'SAO' (subject, action, object) has emerged to represent these key concepts and can be extracted with a natural language processor (NLP). An SAO structure can be organized in a problem-solution format if the action-object (AO) pair states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that can extract SAO structures through technical-element extraction rules. Although sentence structures in patent text have a unique format, prior studies have depended on general NLP tools intended for common documents such as newspapers, research papers, and Twitter mentions, so they cannot take into account the specific sentence structure types of patent documents.
To overcome this limitation, we identified the unique forms of patent sentences and defined the SAO structures found in patent text data. There are four types of technical elements: technology adoption purpose, application area, tool for technology, and technical components. Each of these four sentence types has its own specific word structure, determined by the location or sequence of the parts of speech in the sentence. Finally, we developed algorithms for extracting SAOs; the results offer insight into the technology innovation process by providing different perspectives on technology.
Keywords: NLP, patent analysis, SAO, semantic analysis
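The SAO idea can be illustrated with a minimal rule-based sketch. The paper's patent-specific extraction rules are not given in the abstract, so the pattern below (nearest noun before a verb as subject, nearest noun after it as object) is an assumed, simplified stand-in, and the tagged sentence is invented for illustration.

```python
# Minimal rule-based SAO (subject, action, object) extraction sketch.
# The POS pattern used here is an illustrative assumption, not the
# authors' actual patent-specific extraction rules.

def extract_sao(tagged_tokens):
    """Return (subject, action, object) triples from a POS-tagged sentence,
    given as a list of (word, tag) pairs with simplified tags."""
    triples = []
    for i, (word, tag) in enumerate(tagged_tokens):
        if tag != "VERB":
            continue
        # Assumed rule: nearest preceding noun = subject, nearest following noun = object.
        subject = next((w for w, t in reversed(tagged_tokens[:i]) if t == "NOUN"), None)
        obj = next((w for w, t in tagged_tokens[i + 1:] if t == "NOUN"), None)
        if subject is not None and obj is not None:
            triples.append((subject, word, obj))
    return triples

# A toy patent-style clause: "membrane filters particles"
sentence = [("membrane", "NOUN"), ("filters", "VERB"), ("particles", "NOUN")]
print(extract_sao(sentence))  # [('membrane', 'filters', 'particles')]
```

In the problem-solution reading described above, the AO pair ("filters", "particles") would state the problem addressed and the subject ("membrane") the solution.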
Procedia PDF Downloads 262
1323 The Morphogenesis of an Informal Settlement: An Examination of Street Networks through the Informal Development Stages Framework
Authors: Judith Margaret Tymon
Abstract:
As cities struggle to incorporate informal settlements into the fabric of urban areas, the focus has often been on the provision of housing. This study explores the underlying structure of street networks, with the goal of understanding the morphogenesis of informal settlements through the lens of the access network. As the stages of development progress from infill to consolidation and, eventually, to a planned in-situ settlement, the access networks retain the form of the core segments; however, a majority of street patterns are adapted to a grid design to support infrastructure in the final upgraded phase. A case study is presented to examine the street network in the informal settlement of Gobabis, Namibia, as it progresses from its initial stages to a planned, in-situ, and permanently upgraded development. The Informal Development Stages framework of foundation, infill, and consolidation, as developed by Dr. Jota Samper, is utilized to examine the evolution of street networks. Data is gathered from historical Google Earth satellite images for the period between 2003 and 2022. The results demonstrate that during the foundation through infill stages, incremental changes follow similar patterns, with pathways extended, lengthened, and densified as housing is created and the settlement grows. In the final stage of consolidation, the resulting street layout is transformed to support the installation of infrastructure; however, some elements of the original street patterns remain. The core pathways remain intact to accommodate the installation of infrastructure and the creation of housing plots, defining the shape of the settlement and providing the basis of the urban form. The adaptations, growth, and consolidation of the street network are critical to the eventual formation of the spatial layout of the settlement.
This study will include a comparative analysis of findings with those of recent research performed by Kamalipour, Dovey, and others regarding incremental urbanism within informal settlements. Further comparisons will also include studies of street networks of well-established urban centers that have shown links between the morphogenesis of access networks and the eventual spatial layout of the city. The findings of the study can be used to guide and inform strategies for in-situ upgrading and can contribute to the sustainable development of informal settlements.
Keywords: Gobabis Namibia, incremental urbanism, informal development stages, informal settlements, street networks
Procedia PDF Downloads 65
1322 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties
Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret
Abstract:
Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and the estimation of taxa relative abundance. Small-subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the full power of this marker is realized only when it is used at full length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers, and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. On metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false-positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study of coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes had been limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted two species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, as well as three uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process.
RiboTaxa is user-friendly and rapid, allows the description of microbiota structure from any environment, and produces results that are easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.
Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker
Procedia PDF Downloads 121
1321 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit
Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin
Abstract:
Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair sits at the center, with a radian gap between them. When the particle moves slower than light, the black hole is gravitational (positive gravity), and the white hole is smaller than the black hole. On the downstream side, the black hole outside the gap grows until the white holes disappear, which is the emptiness paradox. On the upstream side, when moving faster than light, white holes form time tunnels, with the black holes becoming smaller; as the motion becomes faster and goes further, the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level circuit as communication transmission carriers, from which high data-transmission capability and capacity can be obtained. The black-white hole pair can be generated by the two-level system circuit when the speed of a particle in the circuit equals the speed of light. The black hole forms when the particle speed increases from below to the speed of light, while the white hole is established when the particle comes down from faster than light. They are bound as the entangled pair of signal and idler, ⟨Signal│Idler⟩, together with the virtual ones for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune to cloning by thieves. The process starts by creating wave-particle behavior: when its speed equals that of light, the black hole is in the middle of the entangled pair, which forms the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier.
A time tunnel occurs when the wave-particle speed is faster than light, at which point the entangled pair collapses. The transmitted information sits safely in the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, expressed in frequency (energy) form, comes down and enters the entangled gap, where this time the white hole is established. The information, with its required destination, is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.
Keywords: cloning free, time machine, teleportation, two-level system
Procedia PDF Downloads 75
1320 Sustainable Nanoengineering of Copper Oxide: Harnessing Its Antimicrobial and Anticancer Capabilities
Authors: Yemane Tadesse Gebreslassie, Fisseha Guesh Gebremeskel
Abstract:
Nanotechnology has made remarkable advancements in recent years, revolutionizing various scientific fields, industries, and research institutions through the utilization of metal and metal oxide nanoparticles. Among these nanoparticles, copper oxide nanoparticles (CuO NPs) have garnered significant attention due to their versatile properties and wide-ranging applications, particularly as effective antimicrobial and anticancer agents. CuO NPs can be synthesized using different methods, including physical, chemical, and biological approaches. However, conventional chemical and physical approaches are expensive, resource-intensive, and involve the use of hazardous chemicals, which can pose risks to human health and the environment. In contrast, biological synthesis provides a sustainable and cost-effective alternative by eliminating chemical pollutants and allowing for the production of CuO NPs of tailored sizes and shapes. This comprehensive review focuses on the green synthesis of CuO NPs using various biological resources, such as plants, microorganisms, and other biological derivatives. Current knowledge and recent trends in green synthesis methods for CuO NPs are discussed, with a specific emphasis on their biomedical applications, particularly in combating cancer and microbial infections. This review highlights the significant potential of CuO NPs in addressing these diseases. By capitalizing on the advantages of biological synthesis, such as environmental safety and the ability to customize nanoparticle characteristics, CuO NPs have emerged as promising therapeutic agents for a wide range of conditions. This review presents compelling findings demonstrating the remarkable achievements of biologically synthesized CuO NPs as therapeutic agents. Their unique properties and mechanisms enable them to act effectively against cancer cells and various harmful microbial infections.
CuO NPs exhibit potent anticancer activity through diverse mechanisms, including induction of apoptosis, inhibition of angiogenesis, and modulation of signaling pathways. Additionally, their antimicrobial activity manifests through various mechanisms, such as disrupting microbial membranes, generating reactive oxygen species, and interfering with microbial enzymes. This review offers valuable insights into the substantial potential of biologically synthesized CuO NPs as an alternative approach for future therapeutic interventions against cancer and microbial infections.
Keywords: copper oxide nanoparticles, green synthesis, nanotechnology, microbial infection
Procedia PDF Downloads 64
1319 Green Wave Control Strategy for Optimal Energy Consumption by Model Predictive Control in Electric Vehicles
Authors: Furkan Ozkan, M. Selcuk Arslan, Hatice Mercan
Abstract:
Electric vehicles (EVs) are becoming increasingly popular as a sustainable alternative to traditional combustion-engine vehicles. However, to fully realize the potential of EVs in reducing environmental impact and energy consumption, efficient control strategies are essential. This study explores the application of green wave control using model predictive control (MPC) for electric vehicles, coupled with energy consumption modeling using neural networks. The use of MPC allows real-time optimization of the vehicle's energy consumption while considering dynamic traffic conditions. By leveraging neural networks for energy consumption modeling, the EV's performance can be further enhanced through accurate predictions and adaptive control. The integration of these advanced control and modeling techniques aims to maximize energy efficiency and range while navigating urban traffic scenarios. The findings of this research offer valuable insights into the potential of green wave control for electric vehicles and demonstrate the significance of integrating MPC and neural network modeling for optimizing energy consumption. This work contributes to the advancement of sustainable transportation systems and the widespread adoption of electric vehicles. To evaluate the effectiveness of the green wave control strategy in real-world urban environments, extensive simulations were conducted using a high-fidelity vehicle model and realistic traffic scenarios. The results indicate that the integration of model predictive control and energy consumption modeling with neural networks had a significant impact on the energy efficiency and range of electric vehicles. Through the use of MPC, the electric vehicle was able to adapt its speed and acceleration profile in real time to optimize energy consumption while maintaining travel time objectives.
The neural network-based energy consumption modeling provided accurate predictions, enabling the vehicle to anticipate and respond to variations in traffic flow, further enhancing energy efficiency and range. Furthermore, the study revealed that the green wave control strategy not only reduced energy consumption but also improved the overall driving experience by minimizing abrupt acceleration and deceleration, leading to a smoother and more comfortable ride for passengers. These results demonstrate the potential for green wave control to revolutionize urban transportation by enhancing the performance of electric vehicles and contributing to a more sustainable and efficient mobility ecosystem.
Keywords: electric vehicles, energy efficiency, green wave control, model predictive control, neural networks
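As a greatly simplified illustration of the receding-horizon idea behind green wave control, the sketch below picks, at each planning step, the speed that minimizes an energy cost while still reaching the next signal during its green window. The brute-force search and quadratic energy proxy are assumptions made for illustration; the paper's actual MPC formulation and neural-network energy model are not specified in the abstract.

```python
# Toy "green wave" speed planner: a one-step stand-in for the MPC described
# above. The energy model and candidate-speed search are illustrative assumptions.

def plan_speed(dist_to_light, green_start, green_end, speeds=range(5, 21)):
    """Return the speed (m/s) minimising a quadratic energy proxy while
    arriving at the next signal inside its green window [green_start, green_end]
    seconds from now, or None if no candidate speed reaches the window."""
    feasible = [v for v in speeds
                if green_start <= dist_to_light / v <= green_end]
    if not feasible:
        return None
    # Lower speed -> lower losses in this crude quadratic proxy.
    return min(feasible, key=lambda v: v * v)

# 200 m to the next light, which is green from t = 10 s to t = 25 s:
print(plan_speed(200, 10, 25))  # → 8 (the slowest feasible speed)
```

In a real MPC scheme this choice would be re-optimized at every time step over a full prediction horizon as traffic information updates, rather than solved once.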
Procedia PDF Downloads 54
1318 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction
Authors: Mohammad Ghahramani, Fahimeh Saei Manesh
Abstract:
Winning a soccer game depends on thorough, deep analysis of the ongoing match. Giant gambling companies, likewise, are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called "Analyst Masters." First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is the development of new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format versus time, including halftime. Local binary patterns (LBP) are then employed to extract features from the image. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our "8-minute rule" implies that if Team A scores a goal and can maintain the result for at least 8 minutes, then a stable match will end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We benefit from gradient boosted trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors' and punters' behavior and its statistical data, to issue the prediction. The proposed method was trained on 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches.
Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and exposes the inefficiency of the betting market. Top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
Keywords: soccer, analytics, machine learning, database
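Local binary patterns have a standard definition that can be sketched independently of the paper: each pixel's eight neighbours are thresholded against the centre value and read off as an 8-bit code, and histograms of these codes then serve as texture features. The 3×3 example below shows the textbook operator only; how the authors encode match statistics and odds as images before applying LBP is not detailed in the abstract.

```python
# Textbook LBP code for a single 3x3 patch (neighbours read clockwise from
# the top-left). This is the generic operator, not the paper's full pipeline.

def lbp_code(patch):
    """Return the 8-bit LBP code of the centre pixel of a 3x3 patch."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:              # neighbour at least as bright as the centre
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # → 241
```

Sliding this operator over an image and histogramming the resulting codes yields the kind of feature vector that a GBT/decision-tree stage, as described above, could consume.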
Procedia PDF Downloads 238
1317 Influence of CO₂ on the Curing of Permeable Concrete
Authors: A. M. Merino-Lechuga, A. González-Caro, D. Suescum-Morales, E. Fernández-Ledesma, J. R. Jiménez, J. M. Fernández-Rodriguez
Abstract:
Since the mid-19th century, economic and industrial growth has been exponential. This has led to an increase in pollution due to rising greenhouse gas (GHG) emissions and the accumulation of waste, pointing to an increasingly imminent scarcity of raw materials and natural resources. Carbon dioxide (CO₂) is one of the primary greenhouse gases, accounting for up to 55% of GHG emissions. The manufacturing of construction materials generates approximately 73% of CO₂ emissions, with Portland cement production contributing 41% of this figure. Hence, there is scientific and social alarm regarding the carbon footprint of construction materials and their influence on climate change. Carbonation of concrete is a natural process whereby CO₂ from the environment penetrates the material, primarily through pores and microcracks. Once inside, carbon dioxide reacts with calcium hydroxide (Ca(OH)2) and/or CSH, yielding calcium carbonates (CaCO3) and silica gel. Consequently, construction materials act as carbon sinks. This research investigated the effect of accelerated carbonation on the physical, mechanical, and chemical properties of two types of non-structural vibrated concrete pavers (conventional and draining) made from natural aggregates and two types of recycled aggregates from construction and demolition waste (CDW). Natural aggregates were replaced by recycled aggregates using a volumetric substitution method, and the CO₂ capture capacity was calculated. Two curing environments were used: a carbonation chamber with 5% CO₂ and a standard climatic chamber with the atmospheric CO₂ concentration. Additionally, the effect of curing times of 1, 3, 7, 14, and 28 days on concrete properties was analyzed. Accelerated carbonation increased the apparent dry density, reduced water-accessible porosity, improved compressive strength, and shortened the curing time needed to achieve greater mechanical strength.
The maximum CO₂ capture ratio (52.52 kg/t) was achieved with the use of recycled concrete aggregate in the draining paver. Accelerated carbonation conditions led to a 525% increase in carbon capture compared to curing under atmospheric conditions. Accelerated carbonation of cement-based products containing recycled aggregates from construction and demolition waste is a promising technology for CO₂ capture and utilization, offering a means to mitigate the effects of climate change and promote the new paradigm of the circular economy.
Keywords: accelerated carbonation, CO₂ curing, CO₂ uptake, construction and demolition waste, circular economy
Procedia PDF Downloads 65
1316 Sustainable Concepts Applied in the Pre-Columbian Andean Architecture in Southern Ecuador
Authors: Diego Espinoza-Piedra, David Duran
Abstract:
All architectural and land-use processes are framed in a cultural, social, and geographical context. The present study analyzes Andean culture before the Spanish conquest in southern Ecuador, in the province of Azuay. This area has been inhabited for more than 10,000 years. The Canari and Inca cultures occupied Azuay up to the arrival of the Spanish conquerors. The Inca culture was settled throughout the Andes Mountains, while the Canari culture was established in the south of Ecuador, in the present-day provinces of Azuay and Canar. In contrast with its history and archeology, to the best of our knowledge, the architecture of this area has not yet been studied, because of the lack of surviving architectural structures. Consequently, the present research reviewed land use and culture to support architectural interpretations. The two main architectural objects in these cultures were dwellings and public buildings. In the first case, housing was conceived as temporary: it had to stand only as long as its inhabitants lived. Therefore, houses were built when a couple got married, and the whole community started the construction through the so-called 'minga', or collective work. The construction materials were tree branches, reeds, agave, earth, and straw, so that when the owners aged and died, the house was easily dismantled and demolished, and its materials became part of the land for agriculture. This cycle was repeated indefinitely. In the second case, the buildings that we can call public have received erroneous interpretations: they have been defined as temples, but according to our conclusions, they were places for temporary accommodation and storage of objects and products and, in some special cases, even astronomical observatories. These public buildings were settled along the important road system called 'Capac-Nam', currently declared a World Cultural Heritage site by UNESCO. The buildings had different scales at regular distances.
They were also established in special or strategic places, which together constituted a system of observatories. These observatories allowed the cycles or calendars (solar or lunar) necessary for agricultural production, as well as other natural phenomena, to be determined. The few physical structures that survive today are mostly preserved only at the level of foundations or wall fragments. Therefore, this study was carried out after first identifying the history and culture of the inhabitants of this Andean region.
Keywords: Andean, pre-Columbian architecture, Southern Ecuador, sustainable
Procedia PDF Downloads 127
1315 Winkler Springs for Embedded Beams Subjected to S-Waves
Authors: Franco Primo Soffietti, Diego Fernando Turello, Federico Pinto
Abstract:
Shear waves that propagate through the ground impose deformations that must be taken into account in the design and assessment of buried longitudinal structures such as tunnels, pipelines, and piles. Conventional engineering approaches for seismic evaluation often rely on Euler-Bernoulli beam models supported by a Winkler foundation. This approach, however, falls short in capturing the distortions induced when the structure is subjected to shear waves. To overcome these limitations, the present work proposes an analytical solution that considers a Timoshenko beam and includes transverse and rotational springs. The present research derives the ground springs as closed-form analytical solutions of the equations of elasticity that include the seismic wavelength. These proposed springs extend the applicability of previous plane-strain models. By considering variations in displacements along the longitudinal direction, the presented approach ensures the springs do not approach zero at low frequencies. This characteristic makes them suitable for assessing pseudo-static cases, which typically govern structural forces in kinematic interaction analyses.
The results obtained, validated against the existing literature and a 3D finite element model, reveal several key insights: i) the cutoff frequency significantly influences the transverse and rotational springs; ii) neglecting displacement variations along the structure axis (i.e., assuming plane-strain deformation) results in unrealistically low transverse springs, particularly for wavelengths shorter than the structure length; iii) disregarding lateral displacement components in the rotational springs and neglecting variations along the structure axis leads to inaccurately low spring values, misrepresenting the interaction phenomena; iv) the transverse springs exhibit a notable drop at the resonance frequency, followed by increasing damping as frequency rises; v) the rotational springs show minor frequency-dependent variations, with radiation damping occurring beyond the resonance frequency, starting from negative values. This comprehensive analysis sheds light on the complex behavior of embedded longitudinal structures subjected to shear waves and provides valuable insights for seismic assessment.
Keywords: shear waves, Timoshenko beams, Winkler springs, soil-structure interaction
Procedia PDF Downloads 61
1314 Determination of Identification and Antibiotic Resistance Rates of Serratia marcescens and Providencia Spp. from Various Clinical Specimens by Using Both the Conventional and Automated (VITEK2) Methods
Authors: Recep Keşli, Gülşah Aşık, Cengiz Demir, Onur Türkyılmaz
Abstract:
Objective: Serratia species are aerobic, motile Gram-negative rods. The species Serratia marcescens (S. marcescens) causes both opportunistic and nosocomial infections. The genus Providencia comprises Gram-negative, urease-producing bacilli responsible for a wide range of human infections. Although most Providencia infections involve the urinary tract, they are also associated with gastroenteritis, wound infections, and bacteremia. The aim of this study was to evaluate the antimicrobial resistance rates of S. marcescens and Providencia spp. strains isolated from various clinical materials obtained from patients in intensive care units (ICUs) and inpatient clinics. Methods: A total of 35 S. marcescens and Providencia spp. strains isolated from various clinical samples admitted to the Medical Microbiology Laboratory, ANS Research and Practice Hospital, Afyon Kocatepe University, between October 2013 and September 2015 were included in the study. Identification of the bacteria was performed by conventional methods, supplemented by the VITEK 2 system (bioMerieux, Marcy l'Etoile, France). Antibacterial resistance tests were performed using the Kirby-Bauer disc diffusion method (Oxoid, Hampshire, England), following the recommendations of the CLSI. Results: The distribution of clinical samples was as follows: upper and lower respiratory tract samples, 26 (74.2%); wound specimens, 6 (17.1%); blood cultures, 3 (8.5%). Of the 35 S. marcescens and Providencia spp. strains, 28 (80%) were isolated from clinical samples sent from the ICU. The resistance rates of S. marcescens strains against trimethoprim-sulfamethoxazole, piperacillin-tazobactam, imipenem, gentamicin, ciprofloxacin, ceftazidime, cefepime, and amikacin were 8.5%, 22.8%, 11.4%, 2.8%, 17.1%, 40%, 28.5%, and 5.7%, respectively. The resistance rates of Providencia spp.
strains against trimethoprim-sulfamethoxazole, piperacillin-tazobactam, imipenem, gentamicin, ciprofloxacin, ceftazidime, cefepime, and amikacin were 10.2%, 33.3%, 18.7%, 8.7%, 13.2%, 38.6%, 26.7%, and 11.8%, respectively. Conclusion: S. marcescens is usually resistant to ampicillin, amoxicillin, amoxicillin/clavulanate, ampicillin/sulbactam, cefuroxime, cephamycins, nitrofurantoin, and colistin. The most effective antibiotic against the S. marcescens strains was gentamicin (2.8% resistance), while the highest resistance rate was against ceftazidime (40%). For Providencia spp., the lowest and highest resistance rates were against gentamicin and ceftazidime, at 8.7% and 38.6%, respectively.
Keywords: Serratia marcescens, Providencia spp., antibiotic resistance, intensive care unit
Procedia PDF Downloads 244
1313 High Strength, High Toughness Polyhydroxybutyrate-Co-Valerate Based Biocomposites
Authors: S. Z. A. Zaidi, A. Crosky
Abstract:
Biocomposites are a field that has gained much scientific attention due to the current substantial consumption of non-renewable resources and the environmentally harmful disposal methods required for traditional polymer composites. Research on natural fiber reinforced polyhydroxyalkanoates (PHAs) has gained considerable momentum over the past decade. There is little work on PHAs reinforced with unidirectional (UD) natural fibers and little work on using epoxidized natural rubber (ENR) as a toughening agent for PHA-based biocomposites. In this work, we prepared polyhydroxybutyrate-co-valerate (PHBV) biocomposites reinforced with UD 30 wt.% flax fibers and evaluated the use of ENR with 50% epoxidation (ENR50) as a toughening agent for PHBV biocomposites. Quasi-unidirectional flax/PHBV composites were prepared by hand layup and powder impregnation, followed by compression molding. The toughening agents, polybutylene adipate-co-terephthalate (PBAT) and ENR50, were cryogenically ground into powder and mechanically mixed with the main PHBV matrix to maintain the powder impregnation process. The tensile, flexural, and impact properties of the biocomposites were measured, and the morphology of the composites was examined using optical microscopy (OM) and scanning electron microscopy (SEM). The UD biocomposites showed exceptionally high mechanical properties compared to results obtained previously where only short fibers had been used. The improved tensile and flexural properties were attributed to the continuous nature of the fiber reinforcement and the increased proportion of fibers in the loading direction. The improved impact properties were attributed to a larger surface area for fiber-matrix debonding and for subsequent sliding and fiber pull-out mechanisms to act on, allowing more energy to be absorbed.
Coating cryogenically ground ENR50 particles with PHBV powder successfully inhibits the self-healing nature of ENR50, preventing the particles from coalescing and overcoming problems in mechanical mixing, compounding, and molding. Cryogenic grinding, followed by powder impregnation and subsequent compression molding, is an effective route to the production of high-mechanical-property biocomposites based on renewable resources for high-obsolescence applications such as plastic casings for consumer electronics.
Keywords: natural fibers, natural rubber, polyhydroxyalkanoates, unidirectional
Procedia PDF Downloads 290
1312 Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions
Authors: Carmel Sofer, Dan Vilenchik, Ron Dotsch, Galia Avidan
Abstract:
Studies investigating emotional facial expressions typically reveal consensus among observers regarding the meaning of basic expressions, whose number ranges from 6 to 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Mechanisms in which a relatively small number of basic components is expanded into a much larger number of possible combinations of meanings exist in other human communication modalities, such as spoken language and music. In these modalities, letters and notes, which serve as the basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expression. In the current study, participants were presented in each trial with sequences of two images containing facial expressions, in combinations sampled from the eight static basic expressions (64 in total; 8×8). In each trial, participants were required to judge, using a single word, the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation, GloVe) employed in the field of natural language processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states in addition to the eight classic basic expressions. An interaction between the first and the second expression in each sequence indicated that each facial expression modulated the effect of the other, leading to a different interpretation ascribed to the sequence as a whole.
These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions. Rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, the present research suggests that using word embedding in social perception studies can be a powerful, accurate, and efficient tool to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative in Ben-Gurion University of the Negev.
Keywords: GloVe, face perception, facial expression perception, facial expression production, machine learning, word embedding, word2vec
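The weighted-average account of sequence meaning can be sketched in a few lines. This is a minimal illustration only: the study used pre-trained GloVe vectors and a much larger vocabulary, whereas the 3-dimensional vectors and the blend label "alarmed" below are purely hypothetical.

```python
import numpy as np

# Toy stand-ins for GloVe vectors (assumption: the study used pre-trained
# Global Vectors; these 3-d vectors and the label "alarmed" are illustrative).
EMB = {
    "angry":     np.array([1.0, 0.0, 0.0]),
    "fearful":   np.array([0.0, 1.0, 0.0]),
    "surprised": np.array([0.0, 0.8, 0.6]),
    "alarmed":   np.array([0.5, 0.75, 0.3]),  # hypothetical blend label
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sequence_meaning(first, second, w_first=0.5):
    """Model the perceived meaning of a two-expression sequence as a
    weighted average of the single-expression vectors, then report the
    nearest vocabulary label by cosine similarity."""
    blend = w_first * EMB[first] + (1.0 - w_first) * EMB[second]
    return max(EMB, key=lambda word: cosine(EMB[word], blend))

print(sequence_meaning("angry", "fearful"))
```

With these toy vectors, the angry-then-fearful sequence lands closest to the intermediate label rather than either constituent, mirroring the reported finding that sequences yield new emotional states beyond the basic eight.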
Procedia PDF Downloads 176
1311 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies and are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor's hypothesis that the convection velocity equals the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales exhibit significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between turbulence length scales and aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
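The single-point estimation route described above (autocorrelation of the velocity record converted to a length via Taylor's hypothesis) can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' processing pipeline: field analyses typically also detrend, window, and quality-control the records, and the synthetic signal here merely mimics correlated wind fluctuations.

```python
import numpy as np

def integral_length_scale(u, dt):
    """Estimate the longitudinal integral length scale from a single-point
    velocity record: integrate the normalized autocorrelation of the
    fluctuating component up to its first zero crossing to obtain the
    integral time scale T, then apply Taylor's hypothesis L = U * T."""
    u = np.asarray(u, dtype=float)
    u_mean = u.mean()
    fluct = u - u_mean
    # One-sided autocorrelation, normalized so acf[0] == 1
    acf = np.correlate(fluct, fluct, mode="full")[len(u) - 1:]
    acf = acf / acf[0]
    # Integrate (rectangle rule) up to the first zero crossing
    zero = np.argmax(acf < 0) if np.any(acf < 0) else len(acf)
    T = float(np.sum(acf[:zero]) * dt)  # integral time scale [s]
    return u_mean * T                   # integral length scale [m]

# Synthetic record: 8 m/s mean wind with fluctuations correlated over ~5 s
rng = np.random.default_rng(0)
fluct = np.convolve(rng.normal(size=20000), np.ones(50) / 50, mode="same")
u = 8.0 + fluct
L = integral_length_scale(u, dt=0.1)
print(f"integral length scale ~ {L:.1f} m")
```

The same autocorrelation-plus-Taylor construction underlies the single-point ESDU 85020 comparisons, while the two-point ESDU 86010 route replaces the autocorrelation with cross-correlations between separated probes.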
Procedia PDF Downloads 124
1310 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
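The Random Forest route described above (predicting overruns and reading off feature importances to identify cost drivers) can be sketched with scikit-learn. The study's dataset is not public, so the feature names and the synthetic overrun formula below are hypothetical stand-ins, not the authors' data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical project records; feature names and the overrun formula
# are illustrative assumptions, not the study's actual dataset.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(0, 10, n),    # scope_changes (count)
    rng.uniform(0, 30, n),     # material_delay_days
    rng.uniform(0.5, 5.0, n),  # project_size_musd
])
# Synthetic cost overrun (%) driven mainly by scope changes and delays
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Feature importances point at the dominant cost drivers
names = ["scope_changes", "material_delay_days", "project_size_musd"]
for name, imp in sorted(zip(names, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.2f}")
```

The importance ranking is what allows the approach to surface risk factors such as scope changes and material delays alongside the raw overrun prediction.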
Procedia PDF Downloads 40
1309 Functionalized Spherical Aluminosilicates in Biomedical-Grade Composites
Authors: Damian Stanislaw Nakonieczny, Grazyna Simha Martynkova, Marianna Hundakova, G. Kratosová, Karla Cech Barabaszova
Abstract:
The main aim of the research was to functionalize the surface of spherical aluminosilicates in the form of so-called cenospheres. Cenospheres are light ceramic particles with a density between 0.45 and 0.85 g·cm⁻³ that can be obtained by separation from fly ash from coal combustion. However, their occurrence is limited to about 1% by weight of dry ash, mainly derived from anthracite; hence they are a rare and desirable material. Cenospheres are characterized by complete chemical inertness, a Mohs hardness of about 6, and a completely smooth surface. The main idea was to prepare the surface by chemical etching, using among others hydrofluoric acid (HF), hydrogen peroxide, and Caro's acid, followed by silanization with (3-aminopropyl)triethoxysilane (APTES) and tetraethyl orthosilicate (TEOS), to obtain maximum surface development and functionalization and thereby improve the chemical and mechanical connection with biomedically used polymers, i.e., poly(methyl methacrylate) (PMMA) and polyetheretherketone (PEEK). These polymers are used medically mainly as materials for fixed and removable dental prostheses and for PEEK spinal implants. The problem with their use is the decrease in mechanical properties over time and bacterial and fungal infections during implantation and use of dentures. Hence, the use of a ceramic filler that will significantly improve the mechanical properties, improve the fluidity of the polymer during shape formation, and, in the future, be able to carry bacteriostatic substances such as silver and zinc ions seems promising.
In order to evaluate our laboratory work, several instrumental studies were performed: chemical composition and morphology by scanning electron microscopy with an energy-dispersive X-ray probe (SEM/EDX), determination of characteristic functional groups by Fourier transform infrared spectroscopy (FTIR), phase composition by X-ray diffraction (XRD), thermal analysis by thermogravimetric analysis/differential thermal analysis (TGA/DTA), as well as assessment of the adsorption isotherm and surface development by the Brunauer-Emmett-Teller (BET) method. The surface was evaluated for the future application of additional bacteriostatic and fungistatic layers. Based on the experimental work, it was found that the elaborated methods can be suitable for the functionalization of the surface of cenosphere ceramics, and in the future such material can be suitable as a bacteriostatic filler for biomedical polymers, i.e., PEEK or PMMA.
Keywords: bioceramics, composites, functionalization, surface development
Procedia PDF Downloads 120
1308 Effect of Planting Date on Quantitative and Qualitative Characteristics of Different Bread Wheat and Durum Cultivars
Authors: Mahdi Nasiri Tabrizi, A. Dadkhah, M. Khirkhah
Abstract:
In order to study the effect of planting date on yield, yield components, and quality traits in bread and durum wheat varieties, a field split-plot experiment based on a completely randomized design with three replications was conducted at the Agricultural and Natural Resources Research Center of Razavi Khorasan, located in the city of Mashhad, during 2013-2014. The main factor consisted of five sowing dates (first of October, fifteenth of December, first of March, tenth of March, twentieth of March) and the sub-factor consisted of different bread wheat cultivars (Bahar, Pishgam, Pishtaz, Mihan, Falat, and Karim) and two durum wheat cultivars (Dena and Dehdasht). According to the analysis of variance, the effect of planting date was significant on all examined traits (grain yield, biological yield, harvest index, number of grains per spike, thousand-kernel weight, number of spikes per square meter, plant height, number of days to heading, number of days to maturity, duration of the grain filling period, percentage of wet gluten, percentage of dry gluten, gluten index, and percentage of protein). With delayed planting, the majority of traits decreased significantly, except the quality traits (percentage of wet gluten, percentage of dry gluten, and percentage of protein). Means comparison showed that, among planting dates, the highest grain yield and biological yield were related to the first planting date (October), with mean production of 5.6 and 17.1 tons per hectare respectively, while the highest bread quality (gluten index, mean 85) and protein percentage (mean 13%) were related to the fifth planting date. The effect of genotype was also significant on all traits. The highest grain yield among the studied wheat genotypes was related to the Dehdasht cultivar, with an average production of 4.4 tons per hectare. The highest protein percentage and bread quality (gluten index) were related to the Dehdasht cultivar, with 13.4%, and the Falat cultivar, with an index of 90, respectively.
The interaction between cultivar and planting date was significant on all traits, and different varieties showed different trends for these traits. The highest grain yield was related to the first planting date (October) and the Falat cultivar, with an average production of 6.7 tons per hectare, although this grain yield did not show a significant difference from the Pishtaz and Mihan cultivars. The highest gluten index (bread quality index) and protein percentage belonged to the third planting date, with the Karim cultivar at 7.98 and the Dena cultivar at 7.14%, respectively.
Keywords: yield component, yield, planting date, cultivar, quality traits, wheat
Procedia PDF Downloads 430
1307 The Impact of Hormone Suppressive Therapy on Quality of Life of Patients with Nodular Goiter
Authors: Emil Iskandarov, Nazrin Agayeva
Abstract:
Background: The effectiveness of hormone suppressive therapy (HST) in patients with nodular goiter (NG) is controversial. The aim of this study was to identify the impact of long-term HST on the Quality of Life (QoL) of patients with NG. Material and Methods: A retrospective analysis of 146 patients with NG treated with HST showed that in 38.4% of cases, HST was not effective: nodules increased in size and, moreover, new nodules developed. A statistical procedure identified the predictors of resistant nodules: only one nodule in the left lobe; nodule size >17 mm; calcification within the nodule. 174 patients with NG in whom these predictors of resistant nodules were established were informed about the results of the previous research, and surgery was suggested. Eighty-eight patients (the basic group) agreed to surgery, and thyroidectomy was performed. 86 patients (the control group) declined the suggestion and wished to receive HST. Control group patients were examined 3, 6, and 12 months after starting HST; HST was not effective, and these patients, due to developing symptoms, were operated on. Patients in both groups were followed up 3, 6, and 12 months after thyroidectomy. Quality of Life was assessed with the SF-36 survey form and compared between groups. The statistical analysis was performed with the non-parametric Mann-Whitney U test and with the Student t-test. P values <0.05 were considered statistically significant. Results and Discussion: QoL of patients in the basic and control groups 3 months after surgery was almost the same. However, emotional problems severely interfered with control group patients' normal social activities with family, friends, and neighbors.
The causes were related to the non-effective HST treatment before surgery: the stress of having to take drugs on time every day for a long time; blood tests for thyroid hormone levels; needle biopsies of nodules for cancer screening; and regular ultrasound investigations, which showed that the nodules had not diminished in size. Changing the treatment method after 1 year of non-effective HST and delayed surgery negatively impacted patients' QoL. Social role functioning and mental health in the control group were also impaired, and the difference from the results in the basic group was statistically significant (p<0.05). Conclusion: Predictors such as only one nodule, nodule size of more than 17 mm, and the existence of calcification within the nodule are able to forecast resistant nodules. HST in patients with resistant nodules is non-effective, and surgery is suggested in patients with resistant nodules in the thyroid gland. Long-term HST has a negative impact on patients' QoL after surgery.
Keywords: thyroid gland, nodule, hormone suppressive therapy, quality of life
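The between-group comparison described above (SF-36 scale scores compared with the Mann-Whitney U test at the 0.05 level) can be sketched as follows. The paper's raw data are not published, so the score distributions below are hypothetical numbers chosen only to illustrate the test.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical SF-36 emotional-role scores (0-100 scale); these numbers
# are illustrative assumptions, not the study's actual data.
rng = np.random.default_rng(1)
basic = np.clip(rng.normal(75, 12, 88), 0, 100)    # thyroidectomy first
control = np.clip(rng.normal(62, 14, 86), 0, 100)  # 1 year of HST first

stat, p = mannwhitneyu(basic, control, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4g}")
print("significant at 0.05" if p < 0.05 else "not significant")
```

The non-parametric U test is a sensible default here because SF-36 scale scores are bounded and often skewed, so normality cannot be assumed.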
Procedia PDF Downloads 129
1306 Utilization of Rice Husk Ash with Clay to Produce Lightweight Coarse Aggregates for Concrete
Authors: Shegufta Zahan, Muhammad A. Zahin, Muhammad M. Hossain, Raquib Ahsan
Abstract:
Rice Husk Ash (RHA) is one of the agricultural waste byproducts widely available in the world and contains a large amount of silica. In Bangladesh, stones cannot be used as coarse aggregate in infrastructure works, as they are not available and need to be imported from abroad. As a result, bricks are mostly used as coarse aggregates in concrete, as they are cheaper and easily produced locally. Clay is the raw material for producing brick. Due to rapid urban growth and the industrial revolution, demand for brick is increasing, which has led to a decrease in topsoil. This study aims to produce lightweight block aggregates with sufficient strength utilizing RHA at low cost and to use them as an ingredient of concrete. RHA, because of its pozzolanic behavior, can be utilized to produce better quality block aggregates at lower cost, replacing clay content in the bricks. The whole study can be divided into three parts. In the first part, characterization tests on RHA and clay were performed to determine their properties. Six different types of RHA from different mills were characterized by XRD and SEM analysis, and their fineness was determined by conducting a fineness test. The XRD results confirmed the amorphous state of the RHA. The characterization test for clay identified the sample as "silty clay" with a specific gravity of 2.59 and 14% optimum moisture content. In the second part, blocks were produced with the six different types of RHA in different combinations by volume with clay. The mixtures were manually compacted in molds before being oven dried at 120 °C for 7 days. After that, the dried blocks were placed in a furnace at 1200 °C to produce the final blocks. A loss on ignition test, apparent density test, crushing strength test, efflorescence test, and absorption test were conducted on the blocks to compare their performance with that of bricks. For 40% RHA, the crushing strength was found to be 60 MPa, whereas the crushing strength of brick was observed to be 48.1 MPa.
In the third part, the crushed blocks were used as coarse aggregate in concrete cylinders and compared with brick concrete cylinders. Specimens were cured for 7 days and 28 days. The highest compressive strength of block cylinders was 26.1 MPa for 7 days of curing and 34 MPa for 28 days of curing. On the other hand, for brick cylinders, the compressive strengths at 7 days and 28 days of curing were 20 MPa and 30 MPa, respectively. These research findings can help reduce the increasing demand for topsoil and also turn a waste product into a valuable one.
Keywords: characterization, furnace, pozzolanic behavior, rice husk ash
Procedia PDF Downloads 107