Search results for: SURF (Speeded-Up Robust Features)
4103 Stability Analysis of a Low Power Wind Turbine for the Simultaneous Generation of Energy through Two Electric Generators
Authors: Daniel Icaza, Federico Córdova, Chiristian Castro, Fernando Icaza, Juan Portoviejo
Abstract:
In this article, the mathematical model is presented, and simulations were carried out using specialized software such as MATLAB before the construction of a 900-W wind turbine. The study was conducted with the intention of taking advantage of the rotation of the wind generator's blades, which, after passing through a gear-based speed amplification stage, mechanically drives two electric generators of similar characteristics. Each generator produces a maximum of 6 V DC, and connecting the two in series yields 12 V DC, which is later stored in batteries and used when the user requires it. Laboratory tests were made to verify the level of power generation produced based on the wind speed at the entrance of the blades.
Keywords: smart grids, wind turbine, modeling, renewable energy, robust control
Procedia PDF Downloads 232
4102 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology
Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James
Abstract:
Landslides are the major geo-environmental problem of the Himalaya because of its high ridges, steep slopes, deep valleys, and complex system of streams. They are mainly triggered by rainfall and earthquakes, causing severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, situated in the Lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8 imagery, and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, rainfall intensity, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer, and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used in the process of terrain factor extraction. Lithological features, LULC, drainage pattern, lineaments, and structural features are extracted using digital image processing techniques. Colour, tone, topography, and stream drainage pattern from the imageries are used to analyse geological features. The slope, aspect, and relative relief maps are created using Cartosat DEM data. The DEM data are also used for the detailed drainage analysis, which includes TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the several causative factors is obtained from experience.
In this method, after multiplying the influence factor with the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing
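The weighted overlay computation described above can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical class ratings, weights, and zone breakpoints (the actual study combines fourteen co-registered factor rasters):

```python
import numpy as np

# Hypothetical ratings (1 = low hazard, 5 = high) for three of the
# fourteen causative factors, on a tiny 3x3 raster grid.
slope_rating    = np.array([[5, 4, 2], [3, 3, 1], [2, 1, 1]])
rainfall_rating = np.array([[4, 4, 3], [3, 2, 2], [2, 2, 1]])
lulc_rating     = np.array([[3, 2, 2], [2, 2, 1], [1, 1, 1]])

# Assumed influence weights (fractions of total importance, summing to 1),
# which in the paper come from expert experience.
weights = {"slope": 0.5, "rainfall": 0.3, "lulc": 0.2}

# Weighted overlay: landslide hazard index = sum of weight * rating per cell.
lhi = (weights["slope"] * slope_rating
       + weights["rainfall"] * rainfall_rating
       + weights["lulc"] * lulc_rating)

# Reclassify the continuous index into hazard zones for the LHZ map.
zones = np.digitize(lhi, bins=[2.0, 3.5])  # 0 = low, 1 = moderate, 2 = high
print(zones)
```

In a GIS workflow, the same cell-by-cell arithmetic is applied to full raster layers; the reclassified `zones` array is what gets rendered as the LHZ map.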
Procedia PDF Downloads 133
4101 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants
Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey
Abstract:
The next-generation power systems are equipped with abundantly available free renewable energy resources (RES). During their low-cost operation, the price of electricity drops significantly, and sometimes it even becomes negative. It may therefore seem advisable not to operate traditional power plants (e.g., coal power plants) during such periods in order to reduce losses. In fact, this is not a cost-effective solution, because these power plants incur shutdown and startup costs. Moreover, they require a certain time to shut down and also need a sufficient pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and to increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state-of-the-art coal power plant; 2) proposing an optimal control strategy for a power plant by exploiting different flexibility features. These flexibility features include improving the ramp rate of the power plant, reducing the startup time, and lowering the minimum load. The control strategy is solved as a mixed-integer linear program (MILP), ensuring an optimal solution to the profit maximization problem. Extensive comparisons are made between pre- and post-retrofit coal power plants having the same efficiencies under different electricity price scenarios. It is concluded that if the power plant must remain in the market (providing services), more flexibility translates into a direct economic advantage for the plant operator.
Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model
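The commitment trade-off described above, riding through negative-price hours versus paying a restart, can be illustrated with a toy model. All prices and costs here are hypothetical, and exhaustive search over a tiny horizon stands in for the MILP solver the paper uses:

```python
from itertools import product

# Assumed hourly electricity prices (EUR/MWh); the negative values model
# hours of abundant renewable generation.
prices = [40, 35, -10, -15, 30, 45]

P_MAX = 100          # plant output when committed (MW), assumed
FUEL_COST = 20       # marginal generation cost (EUR/MWh), assumed
STARTUP_COST = 2000  # cost of one start (EUR), assumed

def profit(schedule):
    """Profit of an on/off commitment schedule over the horizon."""
    total, prev_on = 0.0, 0
    for on, price in zip(schedule, prices):
        if on:
            total += (price - FUEL_COST) * P_MAX
            if not prev_on:          # 0 -> 1 transition incurs a start
                total -= STARTUP_COST
        prev_on = on
    return total

# Tiny horizon, so brute force over all 2^6 schedules is feasible.
best = max(product([0, 1], repeat=len(prices)), key=profit)
print(best, profit(best))
```

With these numbers the optimum shuts down for the two negative-price hours and restarts; if `STARTUP_COST` exceeded the 6,500 EUR lost by generating through those hours, the optimum would flip to staying online, which is exactly the trade-off the abstract describes.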
Procedia PDF Downloads 143
4100 Natural Patterns for Sustainable Cooling in the Architecture of Residential Buildings in Iran (Hot and Dry Climate)
Authors: Elnaz Abbasian, Mohsen Faizi
Abstract:
In its thousand-year development, architecture has gained valuable patterns. Iran's desert regions possess developed patterns of traditional architecture and outstanding skeletal features. Unfortunately, increasing population and urbanization growth in the past decade, as well as the lack of harmony with the environment's texture, have destroyed such permanent concepts in the building's skeleton, causing a great deal of energy waste in modern architecture. The important question is how the cooling patterns of Iran's traditional architecture can be used in a new way in the modern architecture of residential buildings. This research is library-based and documental: it looks at sustainable development, analyzes the features of Iranian architecture in the hot and dry climate in terms of sustainability as well as historical patterns, and builds a model for the real environment. Through a methodological analysis of the past, it intends to suggest a new pattern for residential buildings' cooling in Iran's hot and dry climate which is in full accordance with the ecology of the design and at the same time possesses the architectural indices of the past. In the process of cities' physical development, ecological measures, in proportion to the desert's natural background and climate conditions, have kept the natural fences, preventing buildings from facing climate adversities. Designing and constructing buildings with this viewpoint can reduce the energy needed for maintaining and regulating environmental conditions and, with the use of appropriate building technology, help minimize the consumption of fossil fuels while preserving the permanent patterns of desert buildings' architecture.
Keywords: sustainability concepts, sustainable development, energy climate architecture, fossil fuel, hot and dry climate, patterns of traditional sustainability for residential buildings, modern pattern of cooling
Procedia PDF Downloads 308
4099 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of three Python scripts that could all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells.
Its results were also comparable to manually analyzed results but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
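The thresholding-and-labeling core of such a pipeline can be sketched as follows. The paper uses OpenCV; this minimal stand-in uses NumPy and SciPy's `ndimage` on a synthetic frame with two hypothetical "cells", but the steps (binarize, label connected regions, take the mean fluorescence per region) are the same:

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale "calcium imaging" frame: dark background with two
# bright blobs standing in for fluorescing cell bodies.
frame = np.zeros((64, 64))
frame[10:16, 10:16] = 200.0   # cell 1
frame[40:50, 30:38] = 150.0   # cell 2
frame += np.random.default_rng(0).normal(0, 5, frame.shape)  # sensor noise

# (1) Binary thresholding, separating cells from non-cells.
binary = frame > 100

# (2) Label connected components: each label is a candidate cell contour.
labels, n_cells = ndimage.label(binary)

# (3) Mean fluorescence inside each detected region.
mean_fluorescence = ndimage.mean(frame, labels=labels,
                                 index=range(1, n_cells + 1))
print(n_cells, mean_fluorescence)
```

Running this per frame of a recording yields one mean-fluorescence time series per detected cell, which is the signal the transient-detection step then analyzes.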
Procedia PDF Downloads 82
4098 Use Cases Analysis of Free Space Optical Communication System
Authors: Kassem Saab, Fritzen Bart, Yves-Marie Seveque
Abstract:
The deployment of Free Space Optical Communications (FSOC) systems requires the development of robust and reliable Optical Ground Stations (OGS) that can be easily installed and operated. To this end, the Engineering Department of Airbus Defence and Space is actively working on the development of innovative and compact OGS solutions that can be deployed in various environments and provide high-quality connectivity under different atmospheric conditions. This article presents an overview of our recent developments in this field, including an evaluation study of different use cases of the FSOC with respect to different atmospheric conditions. The goal is to provide OGS solutions that are both simple and highly effective, allowing for the deployment of high-speed communication networks in a wide range of scenarios.
Keywords: end to end optical communication, laser propagation, optical ground station, turbulence
Procedia PDF Downloads 94
4097 Designing of Content Management Systems (CMS) for Web Development
Authors: Abdul Basit Kiani, Maryam Kiani
Abstract:
Content Management Systems (CMS) have transformed the landscape of web development by providing an accessible and efficient platform for creating and managing digital content. This abstract explores the key features and benefits of CMS in web development, highlighting its impact on website creation and maintenance. CMS offers a user-friendly interface that empowers individuals to create, edit, and publish content without requiring extensive technical knowledge. With customizable templates and themes, users can personalize the design and layout of their websites, ensuring a visually appealing online presence. Furthermore, CMS facilitates efficient content organization through categorization and tagging, enabling visitors to navigate and search for information effortlessly. It also supports version control, allowing users to track and manage revisions effectively. Scalability is a notable advantage of CMS, as it offers a wide range of plugins and extensions to integrate additional features into websites. From e-commerce functionality to social media integration, CMS adapts to evolving business needs. Additionally, CMS enhances collaborative workflows by allowing multiple user roles and permissions. This enables teams to collaborate effectively on content creation and management, streamlining processes and ensuring smooth coordination. In conclusion, CMS serves as a powerful tool in web development, simplifying content creation, customization, organization, scalability, and collaboration. With CMS, individuals and businesses can create dynamic and engaging websites, establishing a strong online presence with ease.
Keywords: web development, content management systems, information technology, programming
Procedia PDF Downloads 85
4096 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas
Authors: Weijing Zhou, Yuting Lei, Francis Nolan
Abstract:
In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to the negative transfer of the native language or local dialect. To what extent do Mandarin and local dialects affect English phonological acquisition for Chinese learners of English as a foreign language (EFL)? Little evidence, however, has been found via empirical research in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of the local dialect and Mandarin in Chinese EFL learners' acquisition of L2 English consonants. Besides Mandarin, the sole national language in China, Suzhou Dialect was selected as the target local dialect because of its phonology, which is distinct from that of Mandarin. The experimental group consisted of 30 junior English majors at Yangzhou University, who were born and lived in Suzhou, had acquired Suzhou Dialect since early childhood, and were able to communicate freely and fluently with each other in Suzhou Dialect, Mandarin, and English. The consonantal target segments were all the consonants of English, Mandarin, and Suzhou Dialect in typical carrier words embedded in the carrier sentence Say again. The control group consisted of two Suzhou Dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetic laboratories at Yangzhou University, Soochow University, and Cambridge University, respectively, then transcribed, segmented, and analyzed acoustically via Praat software, and finally analyzed statistically via Excel and SPSS software.
The main findings are as follows: First, in terms of correct acquisition rates (CARs) of all the consonants, Mandarin ranked top (92.83%), English second (74.81%), and Suzhou Dialect last (70.35%), and significant differences were found only between the CARs of Mandarin and English and between the CARs of Mandarin and Suzhou Dialect, demonstrating that Mandarin was overwhelmingly more robust than English or Suzhou Dialect in the subjects' multilingual phonological ecology. Second, in terms of typical acoustic features, the average durations of all the consonants, plus the voice onset times (VOTs) of plosives, fricatives, and affricates in the three languages, were much longer than those of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou Dialect standard speakers; the formants of English nasals and approximants were significantly different from those of Mandarin and Suzhou Dialect, illustrating the inconsistent acoustic variations between the three languages. Third, in terms of typical pronunciation variations or errors, there were significant interlingual interactions between the three consonant systems, in which Mandarin consonants were absolutely dominant, accounting for the strong transfer from L1 Mandarin to L2 English instead of from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had been knowingly exposed to Mandarin since nursery school and were strictly required to speak Mandarin throughout the formal education periods from primary school to university.
Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas
Procedia PDF Downloads 99
4095 Identification of Hub Genes in the Development of Atherosclerosis
Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that can obstruct blood flow and trigger various cardiovascular diseases, such as heart attack and stroke. The underlying molecular mechanisms still remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease's molecular mechanisms. Through the analysis of microarray data, we examined the gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCard database.
The discriminative power of the six hub genes was assessed with a robust classifier achieving an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated a clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) being identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics
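The AUC reported above has a simple rank-based interpretation: it is the probability that a randomly chosen disease sample scores higher than a randomly chosen control. A minimal NumPy sketch, with hypothetical classifier scores standing in for the hub-gene model's output:

```python
import numpy as np

def auc_mann_whitney(scores_disease, scores_control):
    """AUC as the Mann-Whitney U statistic normalised by the number of
    disease/control pairs; ties count half."""
    d = np.asarray(scores_disease)[:, None]
    c = np.asarray(scores_control)[None, :]
    wins = (d > c).sum() + 0.5 * (d == c).sum()
    return wins / (d.size * c.size)

# Hypothetical scores from a six-hub-gene classifier: higher score should
# mean "more likely atherosclerosis".
disease = [0.9, 0.8, 0.75, 0.6, 0.55]
control = [0.7, 0.5, 0.4, 0.3, 0.2]
print(auc_mann_whitney(disease, control))
```

An AUC of 0.5 means the scores carry no information; the 0.873 in the study means a disease sample outranks a control sample about 87% of the time.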
Procedia PDF Downloads 67
4094 Neutron Irradiated Austenitic Stainless Steels: An Applied Methodology for Nanoindentation and Transmission Electron Microscopy Studies
Authors: P. Bublíkova, P. Halodova, H. K. Namburi, J. Stodolna, J. Duchon, O. Libera
Abstract:
Neutron-radiation-induced microstructural changes cause degradation of mechanical properties and reduce the lifetime of reactor internals during nuclear power plant operation. Investigating the effects of neutron irradiation on the mechanical properties of the irradiated material (hardening, embrittlement) is challenging and time-consuming. Although the fast neutron spectrum has the major influence on microstructural properties, the thermal neutron effect is widely investigated owing to the Irradiation-Assisted Stress Corrosion Cracking first observed in BWR stainless steels. In this study, 300-series austenitic stainless steels used as material for NPP internals were examined after neutron irradiation to ~15 dpa. Although several experimental publications are available on determining the mechanical properties of ion-irradiated materials by nanoindentation, less is available on neutron-irradiated materials at high dpa tested in hot cells. In this work, we present a particular methodology developed to determine the mechanical properties of neutron-irradiated steels by the nanoindentation technique. Furthermore, radiation-induced damage in the specimens was investigated by High-Resolution Transmission Electron Microscopy (HR-TEM), which showed the defect features, particularly Frank loops, cavity microstructure, radiation-induced precipitates, and radiation-induced segregation. The results of the nanoindentation measurements and the associated nanoscale defect features showed the effect of irradiation-induced hardening. We also propose methodologies for optimized sample preparation for nanoindentation and microstructural studies.
Keywords: nanoindentation, thermal neutrons, radiation hardening, transmission electron microscopy
Procedia PDF Downloads 158
4093 The Formation of Mutual Understanding in Conversation: An Embodied Approach
Authors: Haruo Okabayashi
Abstract:
Mutual understanding in conversation is very important for human relations. This study investigates the mental function of the formation of mutual understanding between two people in conversation using the embodied approach. Forty people participated in this study and were divided into pairs randomly. Four conversation situations between the two (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported "I did not understand my partner" show a collapsed shape, which means the fluctuation of their rhythm is too small to match their partner's rhythm, and their cross-correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both largest Lyapunov exponents (LLEs) tend to resonate, too. In human history, human beings, as weak mammals, may have needed to be with others in order to live; that is, they have brought about resonating characteristics, which is called self-organization. However, this resonant feature sometimes collapses, depending on the lifestyle that the person formed after birth. It is difficult for people who do not have a lifestyle of mutual gaze to resonate their biological signal waves with others'. These people show features such as anxiety, fatigue, and a tendency toward confusion. Mutual understanding is thought to be formed as a result of cooperation between the self-organization features of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. Through this research, it is found that the formation of mutual understanding is expressed by the rhythm of a biological signal showing a nonlinear relationship.
Keywords: embodied approach, finger plethysmogram, mutual understanding, nonlinear phenomenon
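The cross-correlation measure used above to quantify how well two partners' rhythms "resonate" can be sketched as follows. This is a minimal NumPy illustration on synthetic sine rhythms (the hypothetical 1.2 Hz and 2.0 Hz frequencies merely stand in for two plethysmogram pulse rhythms):

```python
import numpy as np

def max_norm_xcorr(x, y, max_lag):
    """Maximum normalised cross-correlation over lags in [-max_lag, max_lag],
    a simple proxy for how well two physiological rhythms resonate."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.mean(x[lag:] * y[:n - lag])
        else:
            r = np.mean(x[:n + lag] * y[-lag:])
        best = max(best, r)
    return best

t = np.linspace(0, 10, 2000)                # 10 s at 200 Hz, as in the study
a = np.sin(2 * np.pi * 1.2 * t)             # speaker A's rhythm
b = np.sin(2 * np.pi * 1.2 * t + 0.4)       # speaker B, same rhythm, phase-shifted
c = np.sin(2 * np.pi * 2.0 * t)             # a non-resonating rhythm
print(max_norm_xcorr(a, b, 200), max_norm_xcorr(a, c, 200))
```

Matched rhythms score near 1 even when phase-shifted, while mismatched rhythms score near 0, which is the pattern the study reports for pairs with and without mutual understanding.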
Procedia PDF Downloads 267
4092 The Vision-Based Parallel Robot Control
Abstract:
In this paper, we describe the control strategy of a high-speed parallel robot system with an EtherCAT network. This work deals with a parallel robot system with centralized control on a real-time operating system such as Windows with TwinCAT 3. Most of the control scheme and algorithm is implemented on the master platform on the PC, while the input and output interfaces are ported to the slave side. The data is transferred within a maximum of 20 microseconds with 1000 bytes. EtherCAT is a very high-speed and stable industrial network. The control strategy with EtherCAT is very useful and robust in an Ethernet network environment. The developed parallel robot is controlled by a pre-designed nonlinear controller for a 6G/0.43 cycle time of pick-and-place motion tracking. The experiment shows the good design and validation of the controller.
Keywords: parallel robot control, EtherCAT, nonlinear control, parallel robot inverse kinematics
Procedia PDF Downloads 571
4091 Artificial Neural Networks and Geographic Information Systems for Coastal Erosion Prediction
Authors: Angeliki Peponi, Paulo Morgado, Jorge Trindade
Abstract:
Artificial Neural Networks (ANNs) and Geographic Information Systems (GIS) are applied as a robust tool for modeling and forecasting erosion changes in Costa da Caparica, Lisbon, Portugal, for 2021. ANNs present noteworthy advantages compared with other methods used for prediction and decision making in urban coastal areas. A multilayer perceptron type of ANN was used. Sensitivity analysis was conducted on the natural and social forces and dynamic relations in the dune-beach system of the study area. Variations in the network's parameters were performed in order to select the optimum topology of the network. The developed methodology appears fitted to reality; however, further steps would make it better suited.
Keywords: artificial neural networks, backpropagation, coastal urban zones, erosion prediction
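The multilayer perceptron trained by backpropagation, the keyword pairing above, can be shown in miniature. This NumPy sketch fits a one-hidden-layer network to a toy nonlinear target; the single input and the target function are purely hypothetical stand-ins for the study's erosion predictors:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: one predictor mapped to a nonlinear "shoreline change" target.
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=(200, 1))

# One hidden layer of 8 tanh units: a minimal multilayer perceptron.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
loss_before = np.mean((pred - y) ** 2)

for _ in range(5000):                       # plain batch backpropagation
    h, pred = forward(X)
    grad_out = 2 * (pred - y) / len(X)      # dMSE/dpred
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # tanh' = 1 - tanh^2
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, pred = forward(X)
loss_after = np.mean((pred - y) ** 2)
print(loss_before, loss_after)
```

Varying the hidden-layer size and learning rate here corresponds to the "variations in network parameters" the abstract describes for selecting the optimum topology.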
Procedia PDF Downloads 392
4090 Robust Design of Electroosmosis Driven Self-Circulating Micromixer for Biological Applications
Authors: Bahram Talebjedi, Emily Earl, Mina Hoorfar
Abstract:
One of the issues that arises with microscale lab-on-a-chip technology is that the laminar flow within the microchannels limits the mixing of fluids. To combat this, micromixers have been introduced as a means to try and incorporate turbulence into the flow to better aid the mixing process. This study presents an electroosmotic micromixer that balances vortex generation and degeneration with the inlet flow velocity to greatly increase the mixing efficiency. A comprehensive parametric study was performed to evaluate the role of the relevant parameters on the mixing efficiency. It was observed that the suggested micromixer is perfectly suited for biological applications due to its low pressure drop (below 10 Pa) and low shear rate. The proposed micromixer with optimized working parameters is able to attain a mixing efficiency of 95% in a span of 0.5 seconds using a frequency of 10 Hz, a voltage of 0.7 V, and an inlet velocity of 0.366 mm/s.
Keywords: microfluidics, active mixer, pulsed AC electroosmosis flow, micromixer
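A mixing efficiency like the 95% quoted above is commonly computed from the spread of concentration samples across the outlet. This is a sketch of one standard variance-based mixing index, with hypothetical sample values (the paper does not state which exact index it uses):

```python
import numpy as np

def mixing_efficiency(c, c_mean=0.5):
    """Variance-based mixing index: 1 - sigma/sigma_max, where sigma is the
    std of concentration samples across the outlet and sigma_max is the std
    of fully segregated streams (all samples 0 or 1)."""
    sigma = np.sqrt(np.mean((c - c_mean) ** 2))
    sigma_max = np.sqrt(c_mean * (1 - c_mean))  # = 0.5 for a 50/50 feed
    return 1 - sigma / sigma_max

# Hypothetical outlet concentration profiles (fraction of species A).
unmixed = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)  # two parallel streams
well_mixed = np.array([0.48, 0.52, 0.50, 0.49, 0.51, 0.50, 0.47, 0.53])
print(mixing_efficiency(unmixed), mixing_efficiency(well_mixed))
```

Fully segregated laminar streams score 0; a profile hovering near the 0.5 mean scores close to 1, which is what the pulsed electroosmotic vortices are driving toward.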
Procedia PDF Downloads 138
4089 Variability of the Speaker's Verbal and Non-Verbal Behaviour in the Process of Changing Social Roles in the English Marketing Discourse
Authors: Yuliia Skrynnik
Abstract:
This research focuses on the interaction of verbal, non-verbal, and super-verbal communicative components used by the speaker changing social roles in the marketing discourse. The changing/performing of social roles is implemented through communicative strategies and tactics, whose structural, semantic, and linguo-pragmatic means are characterized by specific features and differ depending on whether the role of a supplier or that of a customer is performed. Communication within the marketing discourse is characterized by a symmetrical role relation between communicative opponents. The strategy of realizing a supplier's social role and the strategy of realizing a customer's role influence the discursive personality's linguistic repertoire in the marketing discourse. This study takes into account that one person can be both a supplier and a customer under different circumstances, thus exploring the individual who can perform both roles. Cooperative and non-cooperative tactics are the instruments for the implementation of these strategies. In the marketing discourse, the verbal and non-verbal behaviour of the speaker performing a customer's social role is highly informative for speakers who perform the role of a supplier. The research methods include discourse, context-situational, pragmalinguistic, and pragmasemantic analyses, as well as the method of non-verbal component analysis.
The methodology of the study includes 5 steps: 1) defining the configurations of speakers' social roles on the selected material; 2) establishing the type of the discourse (marketing discourse); 3) describing the specific features of a discursive personality as a subject of the communication in the process of social role realization; 4) selecting the strategies and tactics which direct the interaction in different role configurations; 5) characterizing the structural, semantic, and pragmatic features of the realization of strategies and tactics, including the analysis of the interaction between verbal and non-verbal components of communication. In the marketing discourse, non-verbal behaviour is usually spontaneous rather than purposeful. Thus, adequately decoding a partner's non-verbal behaviour provides more opportunities both for the supplier and the customer. Super-verbal characteristics in the marketing discourse are crucial in defining the opponent's social status and social role at the initial stage of interaction. The research provides a scenario of stereotypical situations of the play of a supplier and a customer. The performed analysis has perspectives for further research connected with the study of the discursive variability of speakers' verbal and non-verbal behaviour, considering the intercultural factor influencing the process of performing social roles in the marketing discourse, and with the formation of methods for constructing scenarios of non-stereotypical situations of social role realization/change in the marketing discourse.
Keywords: discursive personality, marketing discourse, non-verbal component of communication, social role, strategy, super-verbal component of communication, tactic, verbal component of communication
Procedia PDF Downloads 122
4088 Prediction of Covid-19 Cases and Current Situation of Italy and Its Different Regions Using Machine Learning Algorithm
Authors: Shafait Hussain Ali
Abstract:
Since its outbreak in China, the Covid-19 disease has been caused by the coronavirus SARS-CoV-2. Italy was the first Western country to be severely affected, and the first country to take drastic measures to control the disease. At the start of December 2019, the sudden outbreak of the coronavirus disease was caused by a new coronavirus (SARS-CoV-2) causing acute respiratory syndrome in the Chinese city of Wuhan. The World Health Organization declared the epidemic a public health emergency of international concern on January 30, 2020. By February 14, 2020, 49,053 laboratory-confirmed cases and 1,481 deaths had been reported worldwide. The threat of the disease has forced most governments to implement various control measures. It therefore becomes necessary to analyze the Italian data very carefully, in particular to investigate the present situation and to present the numbers of infected persons, in the form of positive cases, deaths, hospitalized patients, and other features, clearly and simply. The model used should clearly show the real facts and figures and be understandable to every reader, who can gain some real benefit from reading it. The model must include all the features (total positive cases, current positive cases, hospitalized patients, deaths, recovered-people frequency rates) that explain and clarify this wide range of facts in a very simple form, helpful to the administration of that country.
Keywords: machine learning tools and techniques, RapidMiner tool, Naive Bayes algorithm, predictions
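The Naive Bayes classifier named in the keywords can be sketched from scratch. The study works in RapidMiner; this is a minimal NumPy implementation of the Gaussian variant on hypothetical regional features (the feature names, numbers, and severity labels below are illustrative, not the study's data):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances,
    prediction by maximum log-posterior under feature independence."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log N(x | mu, var) summed over the (assumed independent) features
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None, :, :] - self.mu[:, None, :]) ** 2
                     / self.var[:, None, :]).sum(axis=2)
        return self.classes[np.argmax(ll + self.log_prior[:, None], axis=0)]

# Hypothetical features per region: [current positive cases per 100k,
# hospitalized patients per 100k].
X = np.array([[5, 1], [8, 2], [6, 1], [120, 30], [150, 40], [110, 25]], float)
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = low-severity region, 1 = high-severity

model = GaussianNB().fit(X, y)
print(model.predict(np.array([[7.0, 1.5], [130.0, 35.0]])))
```

Despite the strong independence assumption, this rule is fast and interpretable, which fits the abstract's goal of a model whose output is readable by non-specialists.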
Procedia PDF Downloads 1074087 Characteristic Study on Conventional and Soliton Based Transmission System
Authors: Bhupeshwaran Mani, S. Radha, A. Jawahar, A. Sivasubramanian
Abstract:
Here, we study the characteristic features of conventional (on-off keying) and soliton-based transmission systems. We consider a 20 Gbps transmission system implemented with conventional single-mode fiber (C-SMF) to examine the role of the Gaussian pulse, which is characteristic of conventional propagation, and the hyperbolic-secant pulse, which is characteristic of soliton propagation. We note the influence of these pulses with respect to different dispersion lengths and soliton periods in the conventional and soliton systems, respectively, and evaluate the system performance in terms of the quality factor. From the analysis, we show that the soliton pulse gives more consistent performance, even over long distances without dispersion compensation, than the conventional system, as it is robust to dispersion. For a transmission length of 200 km, the soliton system yielded Q = 33.958, while the conventional system was totally exhausted with Q = 0.Keywords: dispersion length, return-to-zero (RZ), soliton, soliton period, Q-factor
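The abstract evaluates system performance in terms of the quality factor Q. As a hedged sketch of the standard textbook definition (not code from the paper itself), Q is computed from the mark/space level statistics of the received eye diagram, and under a Gaussian-noise assumption it maps to a bit error rate via the complementary error function:

```python
import math

def q_factor(mu1, sigma1, mu0, sigma0):
    """Eye-diagram quality factor from mark (1) and space (0) level statistics."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Gaussian-noise estimate of the bit error rate from Q."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Hypothetical received levels: mean 1.0 / 0.0, equal noise on both rails
q = q_factor(mu1=1.0, sigma1=0.07, mu0=0.0, sigma0=0.07)
print(round(q, 3))      # 7.143
print(ber_from_q(6.0))  # about 1e-9, the usual benchmark for Q of 6
```

At the paper's reported Q = 33.958 this estimate gives a vanishingly small error rate, consistent with the claim of robust soliton performance over 200 km.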
Procedia PDF Downloads 3464086 Comparative Analysis between Wired and Wireless Technologies in Communications: A Review
Authors: Jafaru Ibrahim, Tonga Agadi Danladi, Haruna Sani
Abstract:
Many telecommunications companies are looking for new ways to maximize their investment in communication networks while ensuring reliable and secure information transmission. There is a variety of communication medium solutions; the two most popular in use are wireless technology and wired options, such as copper and fiber-optic cable. Wired networks proved their potential in earlier days, but nowadays wireless communication has emerged as a robust, highly capable, and preferred communication technique. Each of these types of communication medium has its advantages and disadvantages according to its technological characteristics. Wired and wireless networking have different hardware requirements, ranges, mobility, reliability, and benefits. The aim of this paper is to compare the wired and wireless media on the basis of various parameters such as usability, cost, efficiency, flexibility, coverage, reliability, mobility, speed, and security.Keywords: cost, mobility, reliability, speed, security, wired, wireless
Procedia PDF Downloads 4704085 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects
Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town
Abstract:
The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to achieve better decision-making in developing mining production and maintaining safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques used to improve decision-making; leveraging some of the most complex techniques in data science, it is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may face limitations. This paper studies the capability of using Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. This dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rockfall analyses, rock mass characterization, and drone data.
This further enhances the dashboard's usefulness in future projects, including the operation, development, closure, and rehabilitation phases, and minimizes the need to use multiple software programs within a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.Keywords: geotechnical data analysis, power BI, visualization, decision-making, mining industry
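The dashboard described above runs Python scripts inside Power BI to derive and display geotechnical metrics. As a library-free sketch of the kind of aggregation such a script might perform (the field names and values here are hypothetical, not taken from the project data):

```python
from collections import defaultdict

def summarize_rqd(records):
    """Average rock quality designation (RQD, %) per borehole -- the kind of
    derived metric a dashboard tile might display. Field names are hypothetical."""
    by_borehole = defaultdict(list)
    for rec in records:
        by_borehole[rec["borehole"]].append(rec["rqd"])
    return {bh: sum(vals) / len(vals) for bh, vals in by_borehole.items()}

# Invented borehole log entries
logs = [
    {"borehole": "BH-01", "depth_m": 5.0, "rqd": 70},
    {"borehole": "BH-01", "depth_m": 10.0, "rqd": 80},
    {"borehole": "BH-02", "depth_m": 5.0, "rqd": 40},
]
print(summarize_rqd(logs))  # {'BH-01': 75.0, 'BH-02': 40.0}
```

In an actual Power BI Python visual the input would arrive as a dataframe supplied by the report rather than a hand-built list, and the result would feed a plot rather than a print statement.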
Procedia PDF Downloads 924084 Specific Language Impairment in Kannada: Evidence From a Morphologically Complex Language
Authors: Shivani Tiwari, Prathibha Karanth, B. Rajashekhar
Abstract:
Impairments of syntactic morphology are often considered central in children with Specific Language Impairment (SLI). In English and related languages, deficits of tense-related grammatical morphology can serve as a clinical marker of SLI. Yet, cross-linguistic studies on SLI in the recent past suggest that the nature and severity of morphosyntactic deficits in children with SLI vary with the language being investigated. Therefore, in the present study, we investigated the morphosyntactic deficits in a group of children with SLI who speak Kannada, a morphologically complex Dravidian language spoken in the Indian subcontinent. A group of 15 children with SLI participated in this study. Two more groups of typically developing children (15 each), matched to the children with SLI for language and age, respectively, were included as control participants. All participants were assessed for morphosyntactic comprehension and expression using a standardized language test and a spontaneous speech task. Results of the study showed that children with SLI differed significantly from the age-matched but not the language-matched control group on tasks of both comprehension and expression of morphosyntax. This finding is, however, in contrast with reports of English-speaking children with SLI, who are reported to be poorer than younger MLU-matched children on tasks of morphosyntax. The observed difference between impairments of morphosyntax in Kannada-speaking and English-speaking children with SLI is explained on the basis of the morphological richness theory. The theory predicts that children with SLI perform relatively better in a morphologically rich language due to the frequent and consistent occurrence of the features that mark its morphology. The authors, therefore, conclude that language-specific features do influence the manifestation of the disorder in children with SLI.Keywords: specific language impairment, morphosyntax, Kannada, manifestation
Procedia PDF Downloads 2444083 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg
Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma
Abstract:
The recent surge in residential infrastructure development around the metropolitan areas of South Africa has necessitated thorough geotechnical assessments prior to site development to ensure human and infrastructure safety. This paper appraises the success of applying multi-method geophysical techniques for the delineation of sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic, and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at the town of Venterspost, west of Johannesburg. A combination of different geophysical techniques, and the integration of their results, proved useful in delineating the lithologic succession around the sinkhole locality and determining the geotechnical characteristics of each layer in terms of its contribution to the development of sinkholes, subsidence, and cavities in the vicinity of the site. Study results have also assisted in determining the possible depth extent of the currently existing sinkhole and the locations of sites where other similar karstic features and sinkholes could form. Results of the ERT, VES, and MASW surveys have uncovered dolomitic bedrock at varying depths around the site, exhibiting high resistivity values in the range 2500-8000 ohm.m and corresponding high velocities in the range 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered, chert-poor dolomite layer, with resistivities in the range 250-2400 ohm.m and velocities ranging from 500-600 m/s, within which the large sinkhole was found to have collapsed/caved in. A compiled 2.5D high-resolution shear wave velocity (Vs) map of the study area was created using 2D profiles of MASW data, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site.
3D magnetic models of the site highlighted the regions of possible subsurface interconnection between the currently existing large sinkhole and the other subsidence feature at the site. A number of depth slices were used to detail the conditions near the sinkhole as depth increases. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the currently existing sinkhole.Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES
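The resistivity ranges quoted above come from ERT and VES measurements, which convert injected current and measured voltage into apparent resistivity through an array-specific geometric factor. As a hedged, textbook-level sketch (the Wenner-array formula, not necessarily the array configuration used in this survey):

```python
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """Apparent resistivity (ohm.m) for a Wenner array:
    rho_a = 2 * pi * a * (dV / I), with electrode spacing a."""
    return 2 * math.pi * spacing_m * delta_v / current_a

# Hypothetical reading: a = 10 m, dV = 1 V, I = 50 mA
rho = wenner_apparent_resistivity(10.0, 1.0, 0.05)
print(round(rho, 1))  # 1256.6 ohm.m, within the weathered-dolomite range quoted above
```

Inversion of many such readings over different spacings is what yields the layered resistivity model described in the abstract.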
Procedia PDF Downloads 804082 Theoretical Investigations and Simulation of Electromagnetic Ion Cyclotron Waves in the Earth’s Magnetosphere Through Magnetospheric Multiscale Mission
Authors: A. A. Abid
Abstract:
Wave-particle interactions are considered paramount in the transmission of energy in collisionless space plasmas, where electromagnetic fields confine the movement of charged particles. The three essential populations of the inner magnetosphere are the cold plasmaspheric plasma, the ring current, and the high-energy radiation belt particles. The transition regions amid such populations initiate wave-particle interactions among distinct plasmas, and a characteristic wave mode observed in the magnetosphere is the electromagnetic ion cyclotron (EMIC) wave. These waves can interact resonantly with numerous particle species; the accompanying heating of plasma particles is still under debate. In this work, we pay particular attention to how EMIC waves impact plasma species, specifically how they affect the heating of electrons and ions during storms and substorms in the magnetosphere. Using the Magnetospheric Multiscale (MMS) mission and electromagnetic hybrid simulation, this project will investigate the energy transfer mechanisms (e.g., Landau interactions, bounce resonance interactions, cyclotron resonance interactions, etc.) between EMIC waves and cold-warm plasma populations. Other features, such as the production of EMIC waves and the importance of cold plasma particles in EMIC wave-particle interactions, will also be worth exploring.Keywords: MMS, magnetosphere, wave-particle interaction, non-Maxwellian distribution
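Among the energy transfer mechanisms named above is the cyclotron resonance interaction. As a hedged, simplified illustration (non-relativistic, field-aligned propagation; not the project's hybrid-simulation model), the resonance condition omega - k_par * v_par = n * Omega picks out the parallel velocity of particles able to exchange energy with the wave:

```python
def resonant_parallel_velocity(omega, k_par, n, gyrofrequency):
    """Parallel velocity satisfying the cyclotron resonance condition
    omega - k_par * v_par = n * Omega (non-relativistic, n = harmonic number)."""
    return (omega - n * gyrofrequency) / k_par

# Hypothetical normalized EMIC-band numbers: wave at half the proton gyrofrequency
omega = 0.5      # wave frequency, normalized to the proton gyrofrequency
k_par = 1.0e-3   # parallel wavenumber (hypothetical units)
v_res = resonant_parallel_velocity(omega, k_par, n=1, gyrofrequency=1.0)
print(v_res)  # -500.0: first-harmonic resonant protons counter-stream the wave
```

The negative sign reflects the familiar result that for a sub-gyrofrequency EMIC wave, first-harmonic cyclotron-resonant ions travel opposite to the wave along the field line.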
Procedia PDF Downloads 624081 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issues faced by people nowadays. Skin cancer (SC) is one among them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes considerable time and can yield inaccurate results. At an early stage, skin cancer detection is a challenging task, yet the disease easily spreads to the whole body and leads to an increase in the mortality rate; skin cancer is curable when it is detected at an early stage. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is largely based on disease features such as shape, size, color, and symmetry. Since similar characteristics are present in many skin diseases, selecting important features from skin cancer dataset images is a challenging issue. Hence, skin cancer diagnostic accuracy can be improved by an automated skin cancer detection and classification framework, which also addresses the scarcity of human experts. Recently, deep learning techniques such as the convolutional neural network (CNN), deep belief network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases, along with the mitigation of computational complexity and time consumption.Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
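The survey evaluates classifiers with precision, recall, accuracy, sensitivity, specificity, and F-measure. As a hedged reference sketch of how these metrics follow from a binary confusion matrix (the counts below are invented, not taken from any surveyed paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # also called sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy,
            "f_measure": f_measure}

# Invented example: 80 true positives, 20 false positives,
# 10 false negatives, 90 true negatives
m = classification_metrics(tp=80, fp=20, fn=10, tn=90)
print(round(m["accuracy"], 3))   # 0.85
print(round(m["f_measure"], 3))  # 0.842
```

For multi-class lesion classification these quantities are typically computed one-vs-rest per class and then macro- or micro-averaged.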
Procedia PDF Downloads 1294080 Investigating Cloud Forensics: Challenges, Tools, and Practical Case Studies
Authors: Noha Badkook, Maryam Alsubaie, Samaher Dawood, Enas Khairallah
Abstract:
Cloud computing has introduced transformative benefits in data storage and accessibility while posing unique forensic challenges. This paper explores cloud forensics, focusing on investigating and analyzing evidence from cloud environments to address issues such as unauthorized data access, manipulation, and breaches. The research highlights the practical use of open-source forensic tools like Autopsy and Bulk Extractor in real-world scenarios, including unauthorized data sharing via Google Drive and the misuse of personal cloud storage for sensitive information leaks. This work underscores the growing importance of robust forensic procedures and accessible tools in ensuring data security and accountability in cloud ecosystems.Keywords: cloud forensics, tools, challenges, autopsy, bulk extractor
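Bulk Extractor, mentioned above, works by scanning raw media for recognizable features (email addresses, URLs, card numbers) together with their byte offsets, without relying on the file system. As a hedged toy illustration of that idea only, not the tool's actual scanner code:

```python
import re

# Simplified email pattern over raw bytes (real scanners are far more careful)
EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(blob):
    """Scan a raw byte buffer for email-address features with their offsets,
    a toy version of the feature extraction Bulk Extractor performs at scale."""
    return [(m.start(), m.group().decode()) for m in EMAIL.finditer(blob)]

# Invented fragment of a disk image with binary noise around the features
disk_image = b"\x00\x00alice@example.com\xff\xfflog: bob@corp.test\x00"
print(extract_emails(disk_image))
# [(2, 'alice@example.com'), (26, 'bob@corp.test')]
```

Recording offsets alongside features is what lets an examiner trace a hit back to its location on the original evidence image.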
Procedia PDF Downloads 04079 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria
Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov
Abstract:
This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model
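The SCM validation above relies on conditional independence test (CIT) criteria. As a hedged sketch of one common criterion, a Pearson chi-square statistic summed over strata of the conditioning variable (not necessarily the authors' exact algorithm):

```python
from collections import Counter, defaultdict

def chi2_stat(pairs):
    """Pearson chi-square statistic for the contingency table of (x, y) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    mx = Counter(x for x, _ in pairs)
    my = Counter(y for _, y in pairs)
    stat = 0.0
    for x in mx:
        for y in my:
            expected = mx[x] * my[y] / n
            stat += (joint[(x, y)] - expected) ** 2 / expected
    return stat

def conditional_chi2(triples):
    """Chi-square statistic for X independent of Y given Z, summed over strata of Z."""
    strata = defaultdict(list)
    for x, y, z in triples:
        strata[z].append((x, y))
    return sum(chi2_stat(p) for p in strata.values())

# Hypothetical admission records: (feature X, admission status Y, stratum Z).
# In the first set, X and Y occur in perfect product form within each stratum,
# so the statistic is 0; in the second, X and Y are perfectly dependent.
independent = [(x, y, z) for z in (0, 1) for x in (0, 1) for y in (0, 1)] * 3
dependent = [(0, 0, 0)] * 5 + [(1, 1, 0)] * 5
print(conditional_chi2(independent))  # 0.0
print(conditional_chi2(dependent))    # 10.0
```

In a real CIT the statistic would be compared against the chi-square distribution with the pooled degrees of freedom to accept or reject each conditional-independence constraint implied by the SCM.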
Procedia PDF Downloads 644078 Micro-Meso 3D FE Damage Modelling of Woven Carbon Fibre Reinforced Plastic Composite under Quasi-Static Bending
Authors: Aamir Mubashar, Ibrahim Fiaz
Abstract:
This research presents a three-dimensional finite element modelling strategy to simulate damage in a quasi-static three-point bending analysis of a woven twill 2/2 type carbon fibre reinforced plastic (CFRP) composite at the micro-meso level using the cohesive zone modelling technique. A meso-scale finite element model comprising a number of plies was developed in the commercial finite element code Abaqus/Explicit. The interfaces between the plies were explicitly modelled using cohesive zone elements to allow for debonding by crack initiation and propagation. The load-deflection response of the CFRP within the quasi-static range was obtained and compared with data existing in the literature. This provided validation of the model at the global scale. The outputs from the global model were then used to develop a simulation model capturing micro-meso scale material features. The sub-model consisted of a refined-mesh representative volume element (RVE) modelled in the TexGen software, which was later embedded with cohesive elements in the finite element software environment. The results obtained from the developed strategy were successful in predicting the overall load-deflection response and the damage in the global model and sub-model at the flexure limit of the specimen. A detailed analysis of the effects of the micro-scale features was carried out.Keywords: woven composites, multi-scale modelling, cohesive zone, finite element model
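The cohesive zone elements described above obey a traction-separation law at the ply interfaces. As a hedged, generic sketch of the common bilinear form (the parameter values are invented, not the paper's material model):

```python
def bilinear_traction(delta, k0, delta0, delta_f):
    """Bilinear cohesive traction-separation law:
    linear loading at stiffness k0 up to damage initiation at delta0,
    linear softening to complete failure at delta_f, zero traction beyond."""
    t_max = k0 * delta0                 # peak traction at damage initiation
    if delta <= delta0:
        return k0 * delta               # undamaged, linear-elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening branch
    return 0.0                          # fully debonded

# Hypothetical parameters: k0 = 1e4 MPa/mm, delta0 = 0.01 mm, delta_f = 0.03 mm
print(bilinear_traction(0.005, 1e4, 0.01, 0.03))  # 50.0 (loading branch)
print(bilinear_traction(0.020, 1e4, 0.01, 0.03))  # 50.0 (softening branch)
print(bilinear_traction(0.050, 1e4, 0.01, 0.03))  # 0.0 (failed)
```

The area under this curve is the fracture energy of the interface; in Abaqus/Explicit the equivalent behaviour is specified through the cohesive element's damage initiation and evolution definitions rather than coded directly.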
Procedia PDF Downloads 1384077 Highly Transparent, Hydrophobic and Self-Cleaning ZnO-Durazane Based Hybrid Organic-Inorganic Coatings
Authors: Abderrahmane Hamdi, Julie Chalon, Benoit Dodin, Philippe Champagne
Abstract:
In this report, we present a simple route to realizing robust, hydrophobic, and highly transparent coatings using an organic polysilazane (durazane) and zinc oxide (ZnO) nanoparticles. These coatings were deposited by spraying the mixture solution onto glass slides. The properties of the films were characterized by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FT-IR), UV-vis-NIR spectrophotometry, and the water contact angle method. This sprayable polymer mixed with ZnO nanoparticles shows high transparency to visible light (> 90%), a hydrophobic character (CA > 90°), and good mechanical and chemical stability. The coating also demonstrates excellent self-cleaning properties, which makes it a promising candidate for commercial use.Keywords: coatings, durability, hydrophobicity, organic polysilazane, self-cleaning, transparency, zinc oxide nanoparticles
Procedia PDF Downloads 1704076 Bank Concentration and Industry Structure: Evidence from China
Authors: Jingjing Ye, Cijun Fan, Yan Dong
Abstract:
The development of the financial sector plays an important role in shaping industrial structure. However, evidence on the micro-level channels through which this relation manifests itself remains relatively sparse, particularly for developing countries. In this paper, we compile an industry-by-city dataset based on manufacturing firms and registered banks in 287 Chinese cities from 1998 to 2008. Based on a difference-in-difference approach, we find that a highly concentrated banking sector decreases the competitiveness of firms in each manufacturing industry. There are two main reasons: i) bank accessibility successfully fosters firm expansion within each industry, but only for sufficiently large enterprises; ii) state-owned enterprises are favored by the banking industry in China. The results are robust after considering alternative concentration and external finance dependence measures.Keywords: bank concentration, China, difference-in-difference, industry structure
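The identification strategy above is a difference-in-difference comparison. As a hedged reminder of the basic estimator (with invented numbers, not the paper's firm-level data), the treatment effect is the change in the treated group minus the change in the control group:

```python
def mean(values):
    return sum(values) / len(values)

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: (post - pre change for treated) minus (post - pre change for controls)."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical competitiveness index for firms in high- vs low-concentration cities
effect = diff_in_diff(treat_pre=[10, 12], treat_post=[8, 10],
                      ctrl_pre=[10, 12], ctrl_post=[10, 12])
print(effect)  # -2.0: concentration is associated with lower competitiveness
```

The paper's actual estimation is regression-based with controls and fixed effects over the industry-by-city panel; this arithmetic form only illustrates the core comparison.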
Procedia PDF Downloads 3884075 Impact of the Fourth Industrial Revolution on Food Security in South Africa
Authors: Fiyinfoluwa Giwa, Nicholas Ngepah
Abstract:
This paper investigates the relationship between the Fourth Industrial Revolution and food security in South Africa. An ordinary least squares (OLS) regression was estimated on quarterly data from 2012 Q1 to 2021 Q4. The study used artificial intelligence investment and the food production index as the measures of the Fourth Industrial Revolution and food security, respectively. Findings reveal a significant and positive coefficient of 0.2887, signifying a robust statistical relationship between AI adoption and the food production index. As a policy recommendation, this paper recommends the introduction of incentives for farmers and agricultural enterprises to adopt AI technologies, and the expansion of digital connectivity and access to technology in rural areas.Keywords: Fourth Industrial Revolution, food security, artificial intelligence investment, food production index, ordinary least square
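The reported coefficient of 0.2887 comes from an OLS fit. As a hedged sketch of the simple-regression formulas (the data below are invented to reproduce that slope by construction; they are not the study's series):

```python
def ols(x, y):
    """Simple OLS: slope = Sxy / Sxx, intercept = mean(y) - slope * mean(x)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Invented AI-investment index vs food production index with slope 0.2887 built in
ai = [1.0, 2.0, 3.0, 4.0]
food = [1.0 + 0.2887 * v for v in ai]
slope, intercept = ols(ai, food)
print(round(slope, 4))  # 0.2887
```

A published estimate would of course come with standard errors and diagnostics; this fragment shows only where a coefficient like 0.2887 arises.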
Procedia PDF Downloads 754074 Improved Imaging and Tracking Algorithm for Maneuvering Extended UAVs Using High-Resolution ISAR Radar System
Authors: Mohamed Barbary, Mohamed H. Abd El-Azeem
Abstract:
Maneuvering extended object tracking (M-EOT) using high-resolution inverse synthetic aperture radar (ISAR) observations has been gaining momentum recently. This work presents a new, robust implementation of the multiple-model (MM) multi-Bernoulli (MB) filter for M-EOT, where the M-EOT's ISAR observations are characterized using a skewed (SK), non-symmetric normal distribution. To cope with possible abrupt changes in kinematic state, extension, and observation distribution when an extended object maneuvers, a multiple-model technique is presented based on an MB track-before-detect (TBD) filter supported by an SK sub-random matrix model (RMM) or a sub-ellipses framework. Simulation results demonstrate the remarkable impact of this approach.Keywords: maneuvering extended objects, ISAR, skewed normal distribution, sub-RMM, MM-MB-TBD filter
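The filter above characterizes ISAR observations with a skewed normal distribution. As a hedged, one-dimensional illustration of the Azzalini skew-normal density f(x) = 2 * phi(x) * Phi(alpha * x) (the paper's SK sub-RMM is a richer, multivariate construct, so this is only the basic ingredient):

```python
import math

def norm_pdf(x):
    """Standard normal density phi(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    """Standard normal CDF Phi(x) via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def skew_normal_pdf(x, alpha):
    """Azzalini skew-normal density: 2 * phi(x) * Phi(alpha * x).
    alpha = 0 recovers the symmetric standard normal."""
    return 2.0 * norm_pdf(x) * norm_cdf(alpha * x)

# alpha > 0 shifts probability mass to the right of the mode
print(skew_normal_pdf(1.0, 3.0) > skew_normal_pdf(-1.0, 3.0))  # True
```

The single shape parameter alpha gives the likelihood the asymmetry needed to model one-sided scatterer distributions in high-resolution ISAR returns.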
Procedia PDF Downloads 76