Search results for: crow search algorithm
644 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective use of each EEG measure commonly used in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
Procedia PDF Downloads 111
643 Telemedicine Services in Ophthalmology: A Review of Studies
Authors: Nasim Hashemi, Abbas Sheikhtaheri
Abstract:
Telemedicine is the use of telecommunication and information technologies to provide health care services to people in distant rural communities where such services would often not be consistently available. Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Thus, teleophthalmology can overcome geographical barriers and improve the quality, access, and affordability of eye health care services. Since teleophthalmology has been widely applied in recent years, the aim of this study was to determine the different applications of teleophthalmology in the world. To this end, three bibliographic databases (Medline, ScienceDirect, Scopus) were comprehensively searched with these keywords: eye care, eye health care, primary eye care, diagnosis, detection, and screening of different eye diseases in conjunction with telemedicine, telehealth, teleophthalmology, e-services, and information technology. All types of papers were included in the study with no time restriction. The searches covered publications up to 2015. Finally, 70 articles were surveyed. We classified the results based on the 'type of eye problems covered' and 'the type of telemedicine services'. Based on the review, from the perspective of health care levels, there are three levels of eye care: primary, secondary, and tertiary. From the perspective of eye care services, the main application of teleophthalmology in primary eye care was related to the diagnosis of different eye diseases such as diabetic retinopathy, macular edema, strabismus, and age-related macular degeneration. The main application of teleophthalmology in secondary and tertiary eye care was related to the screening of eye problems, i.e., diabetic retinopathy, astigmatism, and glaucoma screening. Teleconsultation between health care providers and ophthalmologists, as well as education and training sessions for patients, were other types of teleophthalmology in the world. Real-time, store-and-forward, and hybrid methods were the main forms of communication from the perspective of 'teleophthalmology mode', chosen according to the IT infrastructure between sending and receiving centers. From the specialists' perspective, early detection of serious age-related ophthalmic disease in the population, screening of eye disease processes, consultation in emergency cases, and comprehensive eye examination were the most important benefits of teleophthalmology. The cost-effectiveness of teleophthalmology projects, resulting from reduced transportation and accommodation costs, access to affordable eye care services, and access to specialist opinions, was also a main advantage of teleophthalmology for patients. Teleophthalmology brings valuable secondary and tertiary care to remote areas. Applying teleophthalmology for detection, treatment, and screening purposes, and expanding its use in new applications such as eye surgery, will therefore be a key tool to promote public health and integrate eye care into primary health care.
Keywords: applications, telehealth, telemedicine, teleophthalmology
Procedia PDF Downloads 374
642 An Energy-Balanced Clustering Method on Wireless Sensor Networks
Authors: Yu-Ting Tsai, Chiun-Chieh Hsu, Yu-Chun Chu
Abstract:
In recent years, due to the development of wireless network technology, many researchers have devoted themselves to the study of wireless sensor networks. Wireless sensor network applications mainly use the sensor nodes to collect the required information and send it back to the users. Since the sensed area is difficult to reach, there are many restrictions on the design of the sensor nodes, the most important being the limited energy of the nodes. Because of this limited energy, researchers have proposed a number of ways to reduce energy consumption and balance the load of sensor nodes in order to increase the network lifetime. In this paper, we propose the Energy-Balanced Clustering method with Auxiliary Members on Wireless Sensor Networks (EBCAM), based on cluster routing. The main purpose is to balance the energy consumption over the sensed area and even out the distribution of dead nodes, in order to avoid the excessive energy consumption caused by increasing transmission distances. In addition, we use the residual energy and average energy consumption of the nodes within a cluster to choose the cluster heads, use the multi-hop transmission method to deliver the data, and dynamically adjust the transmission radius according to the load conditions. We also use the auxiliary cluster members to change the delivery path according to the residual energy of the cluster head, in order to reduce its load. Finally, we compare the proposed method with related algorithms via simulated experiments and analyze the results. They reveal that the proposed method outperforms the other algorithms in the number of rounds used and the average energy consumption.
Keywords: auxiliary nodes, cluster, load balance, routing algorithm, wireless sensor network
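The energy-aware cluster-head selection described above can be illustrated with a short sketch. This is a minimal illustration of the idea, not the authors' EBCAM implementation; the node values and the scoring rule are assumptions made for the example.

```python
import random

# Minimal sketch: score each node by residual energy against its recent
# consumption, and pick the best-scoring node as cluster head.
random.seed(1)
cluster = [{"id": i,
            "residual": random.uniform(0.2, 1.0),    # remaining energy (J), assumed
            "consumed": random.uniform(0.01, 0.05)}  # energy used last round (J), assumed
           for i in range(10)]

avg_consumed = sum(n["consumed"] for n in cluster) / len(cluster)

def score(node):
    # Favor nodes with high residual energy and at-most-average consumption.
    return node["residual"] / max(node["consumed"], avg_consumed)

head = max(cluster, key=score)
print(f"cluster head -> node {head['id']} (residual {head['residual']:.2f} J)")
```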
Procedia PDF Downloads 274
641 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data
Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao
Abstract:
Accurate calculation of wellbore pressure is of great significance for preventing wellbore risk during drilling. Traditional mechanism models need many iterative solving procedures in the calculation process, which reduces calculation efficiency and makes it difficult to meet the demands of dynamic wellbore pressure control. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, which significantly improves the calculation efficiency and accuracy of wellbore pressure. However, due to the 'black box' nature of intelligent algorithms, existing intelligent wellbore pressure models struggle to perform outside the scope of their training data and overreact to data noise, often resulting in abnormal calculation results. In this study, the multi-phase flow mechanism is embedded into the objective function of a neural network model as a constraint condition, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of pressure measurement while drilling (MPD) data. The constraint of the multi-phase flow mechanism makes the predictions of the neural network model more consistent with the distribution law of wellbore pressure, which overcomes the black-box attribute of the neural network model to some extent. In particular, the accuracy on the independent test data set is further improved, and abnormal calculated values largely disappear. This method is a prediction approach driven jointly by MPD data and the multi-phase flow mechanism, and it is the main way to predict wellbore pressure accurately and efficiently in the future.
Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive
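A mechanism-constrained objective of this general shape can be sketched as below. This is a hedged illustration only: a simple hydrostatic gradient stands in for the paper's full multi-phase flow mechanism, and the weighting and data are invented for the example.

```python
import numpy as np

# Sketch: data-fit loss plus a penalty when predictions violate a simplified
# pressure-gradient law (hydrostatic term as a stand-in for the full mechanism).
RHO, G = 1200.0, 9.81  # assumed mud density (kg/m^3) and gravity (m/s^2)

def physics_residual(depth_m, pred_pa):
    # For the stand-in mechanism, dP/dz should stay close to rho * g.
    return np.gradient(pred_pa, depth_m) - RHO * G

def constrained_loss(pred, measured, depth, lam=1e-3):
    data_term = np.mean((pred - measured) ** 2)
    phys_term = np.mean(physics_residual(depth, pred) ** 2)
    return data_term + lam * phys_term  # lam trades data fit against mechanism

depth = np.linspace(1000.0, 3000.0, 50)             # m
measured = RHO * G * depth                          # synthetic "true" pressure (Pa)
pred = measured + np.random.default_rng(0).normal(0, 1e4, depth.size)
print(f"constrained loss = {constrained_loss(pred, measured, depth):.3e}")
```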
Procedia PDF Downloads 174
640 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud
Authors: Mokopane Charles Marakalala
Abstract:
Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions resulted in a sharp increase in cybercrime since the beginning of the lockdown, causing huge losses to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud at SABRIC. SABRIC has faced the challenge of successful mobile fraud: cybercriminals can hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals are constantly scouring the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to carry out their regular bank-related transactions quickly and conveniently. SABRIC has regularly highlighted incidents of mobile fraud, corruption, and maladministration; customers who fail to secure their online banking are vulnerable to falling prey to scams such as mobile fraud. Criminals have made use of digital platforms since the development of technology. In 2017, 13 438 instances involving banking apps, internet banking, and mobile banking caused the sector to suffer gross losses of more than R250,000,000. The parties involved are left pointing fingers at one another while the fraudster makes off with the money. A non-probability (purposive) sampling method was used to select participants; data were collected through telephone calls and virtual interviews. The results indicate that there is a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends that awareness strategies for bank staff be strengthened through the provision of requisite and adequate training.
Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting
Procedia PDF Downloads 101
639 Economic Impact and Benefits of Integrating Augmented Reality Technology in the Healthcare Industry: A Systematic Review
Authors: Brenda Thean I. Lim, Safurah Jaafar
Abstract:
Augmented reality (AR) in the healthcare industry has been gaining popularity in recent years, principally in the areas of medical education, patient care, and digital health solutions. One of the drivers in deciding to invest in AR technology is the potential economic benefit it could bring for patients and healthcare providers, including the pharmaceutical and medical technology sectors. The literature shows that AR technologies have a track record of achievements in improving medical education and patient health outcomes. However, little has been published on the economic impact of AR in healthcare, a very resource-intensive industry. This systematic review was performed on studies focused on the benefits and impact of AR in healthcare, appraising whether they met the established quality criteria, so as to identify relevant publications for an in-depth analysis of economic impact assessment. The literature search was conducted using multiple databases such as PubMed, Cochrane, Science Direct, and Nature. Inclusion criteria included research papers on AR implementation in healthcare, from education to diagnosis and treatment. Only papers written in English were selected. Studies on AR prototypes were excluded. Although many articles have addressed the benefits of AR in the healthcare industry in the areas of medical education, treatment and diagnosis, and dental medicine, very few publications identified the specific economic impact of the technology within the healthcare industry. Thirteen publications were included in the analysis based on the inclusion criteria. Of the 13 studies, none comprised a systematically comprehensive cost impact evaluation. An outline of a cost-effectiveness and cost-benefit framework was made based on an AR article from another industry as a reference. This systematic review found that while AR technology is advancing rapidly and industries are starting to adopt it in their respective sectors, the technology and its advancements in healthcare are still in their early stages. There is still plenty of room for further advancement and integration of AR into different sectors within the healthcare industry. Future studies will require more comprehensive economic analyses and costing evaluations to enable economic decisions for or against implementing AR technology in healthcare. This systematic review concluded that the current literature lacks detailed examination and conduct of economic impact and benefit analyses. A recommendation for future research is to include details of the initial investment and operational costs of the AR infrastructure in healthcare settings, while comparing the intervention to its conventional counterparts or alternatives, so as to provide a comprehensive comparison of impact, benefit, and cost differences.
Keywords: augmented reality, benefit, economic impact, healthcare, patient care
Procedia PDF Downloads 207
638 Methodology and Credibility of Unmanned Aerial Vehicle-Based Cadastral Mapping
Authors: Ajibola Isola, Shattri Mansor, Ojogbane Sani, Olugbemi Tope
Abstract:
The cadastral map is the rationale behind city management planning and development. For years, cadastral maps have been produced by ground and photogrammetry platforms. Recent advances in photogrammetry and remote sensing sensors have spurred the use of Unmanned Aerial Vehicle (UAV) systems for cadastral mapping. Despite the time savings and multi-dimensional cost-effectiveness of the UAV platform, issues related to cadastral map accuracy hinder the wide applicability of UAV cadastral mapping. This study presents an approach for generating UAV cadastral maps and assessing their credibility. Different sets of red, green, and blue (RGB) photos were obtained from a Tarot 680 hexacopter UAV platform flown over the Universiti Putra Malaysia campus sports complex at altitudes of 70 m, 100 m, and 250 m. Before flying the UAV, twenty-eight ground control points were evenly established in the study area with a real-time kinematic differential global positioning system. The second phase of the study utilized an image-matching algorithm for photo alignment, wherein the camera calibration parameters and ten of the established ground control points were used to estimate the inner, relative, and absolute orientations of the photos. The resulting orthoimages were exported to ArcGIS software for digitization. Visual, tabular, and graphical assessments of the resulting cadastral maps showed different levels of accuracy. The results of the study present a step-by-step approach for generating UAV cadastral maps and show that the cadastral map acquired at 70 m altitude produced the best results.
Keywords: aerial mapping, orthomosaic, cadastral map, flying altitude, image processing
Procedia PDF Downloads 81
637 Comfort Evaluation of Summer Knitted Clothes of Tencel and Cotton Fabrics
Authors: Mona Mohamed Shawkt Ragab, Heba Mohamed Darwish
Abstract:
Context: The comfort properties of garments are crucial for the wearer, and with the increasing demand for cotton fabric, there is a need to explore alternative fabrics that can offer similar or superior comfort properties. This study focuses on comparing the comfort properties of Tencel/cotton single jersey fabric and cotton single jersey fabric, with the aim of identifying fabrics that are more suitable for summer clothes. Research Aim: The aim of this study is to evaluate the comfort properties of Tencel/cotton single jersey fabric and cotton single jersey fabric, with the goal of identifying fabrics that can serve as alternatives to cotton, considering their comfort properties for summer clothing. Methodology: An experimental, analytical approach was employed. Two circular knitting machines were used to produce the fabrics, one with a gauge of 24 and the other with a gauge of 28. Both fabrics were knitted with three different loop lengths (3.05 mm, 2.9 mm, and 2.6 mm) to obtain loose, medium, and tight fabrics for evaluation. Various comfort properties, including air permeability, water vapor permeability, wickability, and thermal resistance, were measured for both fabric types. Findings: The study found a significant difference in comfort properties between Tencel/cotton single jersey fabric and cotton single jersey fabric. Tencel/cotton fabric exhibited higher air permeability, water vapor permeability, and wickability compared to cotton fabric. These findings suggest that Tencel fabric is more suitable for summer clothes due to its superior ventilation and absorption properties. Theoretical Importance: This study contributes to the exploration of alternatives to cotton by evaluating their comfort properties. By identifying fabrics that offer better comfort properties than cotton, particularly in terms of water usage, the study provides valuable insights into sustainable fabric choices for the fashion industry. Data Collection and Analysis Procedures: The comfort properties of the fabrics were measured using appropriate testing methods. Paired-comparison t-tests were conducted to determine the significant differences between Tencel/cotton fabric and cotton fabric in the measured properties. Correlation coefficients were also calculated to examine the relationships between the factors under study. Question Addressed: The study addresses the question of whether Tencel/cotton single jersey fabric can serve as an alternative to cotton fabric for summer clothes, considering their comfort properties. Conclusion: The study concludes that Tencel/cotton single jersey fabric offers superior comfort properties compared to cotton single jersey fabric, making it a suitable alternative for summer clothes. The findings also highlight the importance of considering fabric properties, such as air permeability, water vapor permeability, and wickability, when selecting materials for garments to enhance wearer comfort. This research contributes to the search for sustainable alternatives to cotton and provides valuable insights for the fashion industry in making informed fabric choices.
Keywords: comfort properties, cotton fabric, tencel fabric, single jersey
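The paired-comparison analysis described above can be sketched in a few lines. The readings below are invented placeholders, not the study's measurements; only the statistical procedure (paired t-test plus correlation) mirrors the abstract.

```python
from scipy import stats

# Placeholder air-permeability readings for matched fabric specimens (assumed
# values): each pair shares the same loop length and machine gauge.
tencel_cotton = [152.0, 148.5, 161.2, 155.4, 149.9, 158.3]
cotton        = [131.1, 128.4, 140.2, 133.8, 129.5, 137.0]

t_stat, p_value = stats.ttest_rel(tencel_cotton, cotton)   # paired t-test
r, p_corr = stats.pearsonr(tencel_cotton, cotton)          # correlation

print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```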
Procedia PDF Downloads 74
636 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century
Authors: Stephen L. Roberts
Abstract:
This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) the Program for Monitoring Emerging Diseases (ProMED-mail); 2) the Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities, to continually harness, and forecast from, ever-expanding sets of digital, open-source data potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security, producing subtle yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.
Keywords: algorithms, global health, pandemic, surveillance
Procedia PDF Downloads 184
635 Initializing E-Classroom in a Multigrade School in the Philippines
Authors: Karl Erickson I. Ebora
Abstract:
Science and technology are two inseparable terms which bring wonders to all aspects of life, such as education, medicine, food production, and even the environment. In education, technology has become an integral part, as it brings many benefits to the teaching-learning process. However, in the Philippines, being one of the developing countries, resources are scarce, and not all schools enjoy the fruits brought by technology. Much of this ordeal affects multigrade instruction. These schools are often the last priority in resource allocation since they have a limited number of students. In fact, it is not surprising that these schools do not have even a single computer unit, much less a computer laboratory. This paper sought to present a plan for how public schools would receive their e-classrooms. Specifically, this paper sought to answer questions on the level of the school's readiness in terms of facilities and equipment; the attitude of the respondents towards the use of the e-classroom; the level of teachers' familiarity with different e-classroom software; and the interventions undertaken by the school to make it e-classroom ready. After gathering and analysing the necessary data, this paper came to the following conclusions. In terms of facilities and equipment, Guisguis Talon Elementary School (Main), though a multigrade school, is ready to receive an e-classroom. The respondents show a positive disposition towards technology use in teaching, strongly agreeing that technology plays an essential role in the teaching-learning process. Also, they strongly agree that technology is a good motivator; it makes teaching and learning more interesting and effective; it makes teaching easy; and it enhances students' learning. Additionally, teacher-respondents in Guisguis Talon Elementary School (Main) show familiarity with software. They are very familiar with MS Word, MS Excel, MS PowerPoint, and internet and email. Moreover, they are very familiar with basic e-classroom computer operations and basic application software. They are very familiar with MS Office and can do simple editing and formatting; they can access and save information from CDs/DVDs, external hard drives, USB drives, and the like; and they can browse different search engines and educational sites effectively, and download and upload files. Likewise, respondents strongly agree with the interventions undertaken by the school to make it e-classroom ready. They strongly agree that funding and support are needed by the school; that stakeholders should be encouraged to consider donating equipment; that the school and community should try to mobilize their resources in order to help the school; that the teachers should be provided with training in order for them to be technologically competent; and that principals and administrators should motivate their teachers to undergo continuous professional development.
Keywords: e-classroom, multi-grade school, DCP, classroom computers
Procedia PDF Downloads 199
634 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body
Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi
Abstract:
The issue of high blood sugar levels, which may develop into diabetes mellitus, is now becoming a rampant disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wristwatch, to alert those living with high blood glucose to the danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to bring the errors to the barest minimum. The internal circuitry of the device comprises compatible hardware that matches the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow colors are used as the outputs of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that a neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
Keywords: Accu-Chek, diabetes, neural network, pattern recognition
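A Python stand-in for the network described above is sketched below (the paper codes its network in Java). The training data and the parity-style labeling rule are invented for the example; only the 8-input, 15-hidden-neuron layout and the 1000-epoch budget follow the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assumed placeholder data: 8 binary inputs per sample, three illustrative
# output classes standing in for the red/yellow/green LED patterns.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 8))
y = X.sum(axis=1) % 3            # XOR/parity-flavoured rule, assumed for the demo

clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```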
Procedia PDF Downloads 146
633 Role of Imaging in Predicting the Receptor Positivity Status in Lung Adenocarcinoma: A Chapter in Radiogenomics
Authors: Sonal Sethi, Mukesh Yadav, Abhimanyu Gupta
Abstract:
The upcoming field of radiogenomics has the potential to upgrade the role of imaging in lung cancer management by the noninvasive characterization of tumor histology and the genetic microenvironment. Receptor genotyping, such as epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) genotyping, is critical in lung adenocarcinoma for treatment. As conventional identification of receptor positivity is an invasive procedure, we analyzed the features on non-invasive computed tomography (CT) that predict receptor positivity in lung adenocarcinoma. Retrospectively, we performed a comprehensive study of 77 proven lung adenocarcinoma patients with CT images, EGFR and ALK receptor genotyping, and clinical information. In total, 22/77 patients were receptor-positive (15 had only the EGFR mutation, 6 had the ALK mutation, and 1 had both EGFR and ALK mutations). Various morphological characteristics and metastatic distributions on CT were analyzed along with the clinical information. Univariate and multivariable logistic regression analyses were used. On multivariable logistic regression analysis, we found that spiculated margin, lymphangitic spread, air bronchogram, pleural effusion, and distant metastasis had significant predictive value for receptor mutation status. On univariate analysis, air bronchogram and pleural effusion had significant individual predictive value. Conclusions: Receptor-positive lung cancer has characteristic imaging features compared with non-receptor-positive lung adenocarcinoma. Since CT is routinely used in lung cancer diagnosis, we can predict receptor positivity by a noninvasive technique and follow a more aggressive algorithm for the evaluation of distant metastases as well as for treatment.
Keywords: lung cancer, multidisciplinary cancer care, oncologic imaging, radiobiology
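The multivariable logistic-regression step can be sketched as below. The feature matrix is synthetic (generated from an assumed rule so the example has signal to find); it is not the study's 77-patient dataset, and the coefficients carry no clinical meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["spiculated_margin", "lymphangitic_spread", "air_bronchogram",
            "pleural_effusion", "distant_metastasis"]
X = rng.integers(0, 2, size=(77, len(features)))       # binary CT findings

# Assumed generative rule so the toy labels correlate with the features.
logit = X @ np.array([0.8, 0.6, 1.0, 0.9, 0.7]) - 2.0
y = (rng.random(77) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>20}: odds ratio ~ {np.exp(coef):.2f}")
```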
Procedia PDF Downloads 136
632 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing application in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. The distortion is non-linear, particularly in a complex image acquisition system. Thus, the distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate in a complex image acquisition system using conventional models in cases where microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior process to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure for this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of both the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D
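The core idea, a smooth 2D mapping fitted from calibration correspondences that sends distorted coordinates back to ideal pin-hole positions, can be sketched with SciPy's bivariate splines. The radial-distortion model below is an assumption used only to generate example data; it is not the paper's distortion model.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Calibration-grid correspondences: ideal positions and their distorted images.
xi, yi = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
xi, yi = xi.ravel(), yi.ravel()
r2 = xi**2 + yi**2
k = 0.05                                    # assumed radial distortion strength
xd, yd = xi * (1 + k * r2), yi * (1 + k * r2)

# One spline per output coordinate: (x_dist, y_dist) -> (x_ideal, y_ideal).
sx = SmoothBivariateSpline(xd, yd, xi, kx=3, ky=3)
sy = SmoothBivariateSpline(xd, yd, yi, kx=3, ky=3)

# Apply the correction on the distorted grid and check the residual.
x_corr, y_corr = sx.ev(xd, yd), sy.ev(xd, yd)
print("max residual after correction:",
      max(np.abs(x_corr - xi).max(), np.abs(y_corr - yi).max()))
```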
Procedia PDF Downloads 498
631 Development of National Scale Hydropower Resource Assessment Scheme Using SWAT and Geospatial Techniques
Authors: Rowane May A. Fesalbon, Greyland C. Agno, Jodel L. Cuasay, Dindo A. Malonzo, Ma. Rosario Concepcion O. Ang
Abstract:
The Department of Energy of the Republic of the Philippines estimates that the country's energy reserves for 2015 are dwindling, as observed in the rotating power outages in several localities. To aid in the energy crisis, a national hydropower resource assessment scheme was developed. Hydropower is a resource that is derived from flowing water and a difference in elevation. It is a renewable energy resource that is deemed abundant in the Philippines, an archipelagic country rich in bodies of water and water resources. The objective of this study is to develop a methodology for a national hydropower resource assessment using hydrologic modeling and geospatial techniques, in order to generate resource maps for the future reference and use of the government and other stakeholders. The methodology developed for this purpose is focused on two models: the implementation of the Soil and Water Assessment Tool (SWAT) for the river discharge, and the use of geospatial techniques to analyze the topography, obtain the head, and generate the theoretical hydropower potential sites. The methodology is tightly coupled with Geographic Information Systems to maximize the use of geodatabases and the spatial significance of the determined sites. The hydrologic model used in this workflow is SWAT integrated in the GIS software ArcGIS. The head is determined by a developed algorithm that utilizes a Synthetic Aperture Radar (SAR)-derived digital elevation model (DEM) with a resolution of 10 meters. The initial results of the developed workflow indicate hydropower potential in the river reaches ranging from pico (less than 5 kW) to mini (1-3 MW) theoretical potential.
Keywords: ArcSWAT, renewable energy, hydrologic model, hydropower, GIS
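Once discharge and head are known for a reach, the theoretical potential follows from P = ρgQH. The sketch below applies that formula to invented site values; the discharges and heads are placeholders, not SWAT outputs, and any class bound beyond those quoted in the abstract is an assumption.

```python
RHO, G = 1000.0, 9.81   # water density (kg/m^3) and gravity (m/s^2)

sites = [  # (reach id, discharge Q in m^3/s, head H in m) -- assumed values
    ("reach-01", 0.02, 12.0),
    ("reach-02", 1.50, 35.0),
    ("reach-03", 8.00, 40.0),
]

def classify(p_kw):
    # Rough bands; only pico (<5 kW) and mini (1-3 MW) come from the abstract.
    if p_kw < 5:
        return "pico"
    if p_kw < 1000:
        return "micro/small (assumed band)"
    return "mini"

for reach, q, h in sites:
    p_kw = RHO * G * q * h / 1000.0   # P = rho*g*Q*H, expressed in kW
    print(f"{reach}: {p_kw:8.1f} kW -> {classify(p_kw)}")
```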
Procedia PDF Downloads 313
630 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms
Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager
Abstract:
This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to facilitate this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with its diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained not only to predict but also to elucidate the mechanical and thermal behavior of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.
Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties
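The forward-prediction step, and the inverse search that sits on top of it, can be sketched as below. A rule-of-mixtures (Voigt) formula stands in for the study's homogenization data, and a brute-force candidate search stands in for the genetic algorithm, purely to keep the sketch short; none of the numbers are from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
vf = rng.uniform(0.05, 0.4, 500)       # volume fractions (range from the abstract)
contrast = rng.uniform(10, 200, 500)   # inclusion/matrix contrast (range from the abstract)
E_eff = (1 - vf) + vf * contrast       # Voigt upper bound, stand-in for simulations

# Forward model: microstructure descriptors -> effective property.
X = np.column_stack([vf, contrast])
model = RandomForestRegressor(random_state=0).fit(X, E_eff)

# Inverse step (GA stand-in): pick the candidate whose predicted response
# is closest to a target effective property.
target = 20.0
cands = np.column_stack([rng.uniform(0.05, 0.4, 2000), rng.uniform(10, 200, 2000)])
best = cands[np.argmin(np.abs(model.predict(cands) - target))]
print(f"candidate for E_eff ~ {target}: vf = {best[0]:.3f}, contrast = {best[1]:.1f}")
```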
Procedia PDF Downloads 54
629 O-LEACH: The Problem of Orphan Nodes in the LEACH Routing Protocol for Wireless Sensor Networks
Authors: Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi
Abstract:
The optimum use of coverage in wireless sensor networks (WSNs) is very important. The LEACH protocol, Low Energy Adaptive Clustering Hierarchy, presents a hierarchical clustering algorithm for wireless sensor networks. LEACH is a protocol that allows the formation of distributed clusters. In each cluster, LEACH randomly selects some sensor nodes called cluster heads (CHs). The selection of CHs is made with a probabilistic calculation. It is supposed that each non-CH node joins a cluster and becomes a cluster member. Nevertheless, some CHs can be concentrated in a specific part of the network, so several sensor nodes cannot reach any CH. To solve this problem, we created O-LEACH, an orphan-node protocol whose role is to reduce the number of sensor nodes that do not belong to a cluster. A cluster member called a gateway receives messages from neighboring orphan nodes. The gateway informs the CH that there are neighboring nodes that do not belong to any group. The gateway, acting as a secondary head (CH'), attaches the orphan nodes to the cluster and then collects their data. O-LEACH enables a new method of cluster formation, leading to a long lifetime and minimal energy consumption. Orphan nodes possess enough energy and seek to be covered by the network. The principal novel contribution of the proposed work is the O-LEACH protocol, which provides coverage of the whole network with a minimum number of orphan nodes and very high connectivity rates. As a result, the WSN application receives data from the entire network, including orphan nodes. The proper functioning of the application therefore requires intelligent management of the resources present within each network sensor. The simulation results show that O-LEACH performs better than LEACH in terms of coverage, connectivity rate, energy, and scalability.
Keywords: WSNs, routing, LEACH, O-LEACH, orphan nodes, sub-cluster, gateway, CH'
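The probabilistic CH election that O-LEACH inherits from LEACH uses the standard threshold T(n) = p / (1 - p·(r mod 1/p)) for nodes that have not served recently. The sketch below illustrates that election; the orphan/gateway step is only summarized in a trailing comment, since its full logic depends on the network topology.

```python
import random

P = 0.1  # desired fraction of cluster heads per round

def threshold(r, served_recently):
    # Standard LEACH threshold: nodes that served as CH in the current
    # epoch (1/P rounds) are excluded until the epoch ends.
    if served_recently:
        return 0.0
    return P / (1 - P * (r % int(1 / P)))

random.seed(3)
nodes = [{"id": i, "served_recently": False} for i in range(20)]
r = 4  # current round
heads = [n["id"] for n in nodes
         if random.random() < threshold(r, n["served_recently"])]
print(f"round {r}: cluster heads = {heads}")
# In O-LEACH, any node left without a reachable CH would then attach to a
# gateway member (CH'), which forwards its data to the cluster head.
```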
Procedia PDF Downloads 371
628 Strategic Interventions to Combat Socio-economic Impacts of Drought in Thar - A Case Study of Nagarparkar
Authors: Anila Hayat
Abstract:
Pakistan is one of the developing countries that contribute least to emissions but have the most vulnerable environmental conditions. Pakistan is ranked 8th among the countries most affected by climate change on the Climate Risk Index 1992-2011. Pakistan is facing severe water shortages and flooding as a result of changes in rainfall patterns, specifically in the least developed areas such as Tharparkar. Nagarparkar, located in Tharparkar and once an attractive tourist spot because of its tropical desert climate, has now been facing severe drought conditions for the last few decades. This study investigates the present socio-economic situation of local communities, the major impacts of droughts and their underlying causes, and the current mitigation strategies adopted by local communities. The study uses both secondary (quantitative) and primary (qualitative) methods to understand the impacts on, and explore the causes affecting, the socio-economic life of the local communities of the study area. The relevant data were collected through household surveys using structured questionnaires, focus groups, and in-depth interviews with key personnel from local and international NGOs, to explore the sensitivity of impacts and adaptation to droughts in the study area. This investigation is limited to four rural communities of union council Pilu of Nagarparkar district, comprising the Bheel, BhojaBhoon, Mohd Rahan Ji Dhani, and Yaqub Ji Dhani villages. The results indicate that drought has caused significant economic and social hardships for the local communities, as more than 60% of the overall population is dependent on rainfall, which has been disturbed by irregular patterns. The decline in crop yields has forced the local community to migrate to nearby areas in search of livelihood opportunities. Communities have not undertaken any appropriate adaptive actions to counteract the adverse effects of drought; they are completely dependent on support from the government and external aid for survival. Respondents also reported that poverty is a major cause of their vulnerability to drought. Population increase, limited livelihood opportunities, the caste system, lack of interest from the government sector, and unawareness shaped their vulnerability to drought and other social issues. Based on the findings of this study, it is recommended that the local authorities create awareness about drought hazards and improve the resilience of communities against drought. It is further suggested to develop, introduce, and implement water harvesting practices at the community level and to promote drought-resistant crops.
Keywords: migration, vulnerability, awareness, drought
Procedia PDF Downloads 132
627 Portrayal of Kolkata (the Former Capital of India) in the ‘Kolkata Trilogy’ - A Comparative Study of the Films by Mrinal Sen and Satyajit Ray
Authors: Ronit Chakraborty
Abstract:
Kolkata, formerly known as Calcutta, is the capital of West Bengal state and the former capital of British India (1772-1911). Located on the Hugli river (one of the main channels of the Ganges), the city is the heart of the state and forms a base for commerce, transport, and manufacture. The large and vibrant city thrives amidst the economic, social, and political issues arising from the pages of history up to contemporary times. Its unique nature, grandeur, public debates at tea stalls, and of course its charismatic scenic beauty and heritage keep the city under scrutiny on all horizons, across the world. Movies in India are a big source of knowledge and can be used as a powerful tool for political mobilization and for indirectly communicating with voters, since cinema can serve as propaganda given its wide public reach. History proves that films produced in India have been apt at portraying public concerns deeply and versatilely through their content. Such is the portrayal of India's first capital, Kolkata, and its underlying truths laid out in the trilogies of two directors of international fame, Mrinal Sen and Satyajit Ray, through their magnum opuses, the 'Kolkata trilogies': Mrinal Sen's Interview (1971), Calcutta 71 (1972), and Padatik (The Guerilla Fighter) (1973); and Satyajit Ray's Pratidwandi (The Adversary) (1970), Seemabaddha (Company Limited) (1971), and Jana Aranya (1976). These films picture the contemporary trends, issues, and crises of Kolkata arising amidst the political set-up, through both the positive and negative variables of the day-to-day happenings of the city. The movies are set amidst the turmoil the nation was going through around Indira Gandhi's declaration of Emergency, and the general sense of disillusionment that prevailed during that time. Ray was not affiliated with any political party, and his films largely spoke to the contemporary conditions prevailing in society. Mrinal Sen, being a Marxist, was in constant search, through his trilogy, of the bitter truths that society had to offer, viewing the prevailing darkness through his lens. The research paper attempts to draw a wide-ranging comparative study of the overall description of the city of Kolkata as portrayed by Sen and Ray in their respective trilogies. Using the visual content analysis method, the researcher has explored the six movies of both trilogies and analysed the differences as well as the similarities, in order to understand India's first capital city, Kolkata, in its various dimensions.
Keywords: Kolkata, trilogy, Satyajit Ray, Mrinal Sen, films, comparative study
Procedia PDF Downloads 257
626 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh
Authors: A. A. Sadia, A. Emdad, E. Hossain
Abstract:
The increasing importance of mushrooms as a source of nutrition and health benefits, and even as a potential cancer treatment, has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation and presents an approach to address these challenges, providing tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, this study evaluates the performance of three clustering algorithms (KMeans, OPTICS, and BIRCH) based on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results yield essential information, pinpointing the optimal temperature range of 13°C-22°C, the unfavorable temperature threshold of 28°C and above, and the ideal relative humidity range of 75-85%, together with the suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application was developed to support mushroom farmers in making well-informed decisions about their cultivation practices. This platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web-application
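The three-way algorithm comparison can be sketched with scikit-learn, scoring each clustering with a single quality metric. The two-season climate blobs below are synthetic placeholders, not the study's data, and the silhouette score is just one of the 'various quality metrics' the abstract mentions.

```python
import numpy as np
from sklearn.cluster import Birch, KMeans, OPTICS
from sklearn.metrics import silhouette_score

# Synthetic (temperature °C, relative humidity %) samples for two regimes.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal([17, 80], [2, 3], (60, 2)),    # favorable conditions
               rng.normal([30, 60], [2, 4], (60, 2))])   # unfavorable conditions

algos = [("KMeans", KMeans(n_clusters=2, n_init=10, random_state=0)),
         ("OPTICS", OPTICS(min_samples=5)),
         ("BIRCH", Birch(n_clusters=2))]

for name, algo in algos:
    labels = algo.fit_predict(X)
    mask = labels != -1                     # OPTICS labels noise points as -1
    score = silhouette_score(X[mask], labels[mask])
    print(f"{name:>6}: silhouette = {score:.3f}")
```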
Procedia PDF Downloads 68
625 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization
Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur
Abstract:
In-situ remediation is a technique which can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (Light Non-Aqueous Phase Liquid) contaminated aquifers. Benzene, toluene, ethylbenzene, and xylene are the main components of the LNAPL contaminant. Collectively, these contaminants are known as BTEX. In the in-situ bioremediation process, a set of injection and extraction wells is installed. Injection wells supply oxygen and other nutrients, which convert BTEX into carbon dioxide and water with the help of indigenous soil bacteria. On the other hand, extraction wells check the movement of the plume downstream. In this study, the optimal design of the system has been carried out using the PSO (Particle Swarm Optimization) algorithm. A comprehensive management strategy for the pumping of the injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises the determination of pumping rates, the total pumping volume, and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for injection wells during the initial management period, since it facilitates the availability of oxygen and other nutrients necessary for biodegradation; however, it is low during the third year on account of sufficient oxygen availability. This is because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.
Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization
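A minimal PSO loop of the kind used for such pumping-rate design is sketched below. The cost function is an invented placeholder for the real remediation simulation and its constraints; only the velocity/position update rule is the standard PSO machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_wells, iters = 20, 4, 100
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

def cost(rates):
    # Placeholder objective: cheap total pumping while hitting a target volume.
    return np.sum(rates**2) + 10 * abs(np.sum(rates) - 5.0)

x = rng.uniform(0, 5, (n_particles, n_wells))   # candidate pumping rates
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 5)                    # keep rates feasible
    vals = np.array([cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best rates:", np.round(gbest, 2), "cost:", round(cost(gbest), 3))
```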
Procedia PDF Downloads 445
624 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel
Authors: F. M. Pisano, M. Ciminello
Abstract:
Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies, in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called 'barely visible impact damage' (BVID), due to low/medium energy impacts, that can progressively compromise the structure's integrity. The occurrence of any local change in material properties that can degrade the structure's performance is to be monitored using so-called Structural Health Monitoring (SHM) systems, which are in charge of comparing the structure's states before and after damage occurs. SHM seeks any 'anomalous' response collected by means of sensor networks, which is then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for extracting information about the current health conditions of the structure under investigation. Visual Analytics can support the processing of monitored measurements by offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system by the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis (PCA) based algorithm.
Keywords: interactive dashboards, optical fibers, structural health monitoring, visual analytics
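A PCA-based outlier check of the kind feeding such dashboards can be sketched as follows. The sensor data are synthetic stand-ins (random baselines plus one injected anomaly), not measurements from the aircraft panel, and the 99th-percentile threshold is an assumed choice.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
healthy = rng.normal(0, 1, (200, 12))    # 12 sensors, baseline (healthy) state
test = rng.normal(0, 1, (20, 12))
test[5] += 6.0                           # simulated damage signature on one sample

# Learn the healthy subspace, then flag samples whose reconstruction error
# exceeds the healthy-state threshold.
pca = PCA(n_components=3).fit(healthy)

def recon_error(X):
    Z = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - Z, axis=1)

threshold = np.percentile(recon_error(healthy), 99)
flags = np.where(recon_error(test) > threshold)[0]
print("flagged samples:", flags)
```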
Procedia PDF Downloads 124
623 The Antimicrobial Activity of Marjoram Essential Oil Against Some Antibiotic Resistant Microbes Isolated from Hospitals
Authors: R. A. Abdel Rahman, A. E. Abdel Wahab, E. A. Goghneimy, H. F. Mohamed, E. M. Salama
Abstract:
Infectious diseases are a major cause of death worldwide. The treatment of infections continues to be problematic in modern times because of the severe side effects of some drugs and the growing resistance to antimicrobial agents. Hence, the search for newer, safer, and more potent antimicrobials is a pressing need. Herbal medicines have received much attention as a source of new antibacterial drugs since they are considered time-tested and comparatively safe both for human use and for the environment. In the present study, the antimicrobial activity of marjoram (Origanum majorana L.) essential oil on some gram-positive and gram-negative reference bacteria, as well as some hospital-resistant microbes, was tested. Marjoram oil was extracted, and the oil's chemical constituents were identified using GC/MS analysis. Staphylococcus aureus ATCC 6923, Pseudomonas aeruginosa ATCC 9027, Bacillus subtilis ATCC 6633, E. coli ATCC 8736, and two hospital-resistant microbe isolates, 16 and 21, were used. The two isolates were identified by biochemical tests and 16S rRNA as Proteus spp. and Enterococcus faecalis. The effect of different concentrations of the essential oil on bacterial growth was tested using the agar disk diffusion assay method to determine the minimum inhibitory concentrations, and using the microdilution method to determine the minimum bactericidal concentrations. Marjoram oil was found to be effective against both reference and hospital-resistant strains. Hospital strains were more resistant to marjoram oil than reference strains. P. aeruginosa growth was completely inhibited at a low concentration of oil (4 µl/ml). The other reference strains showed sensitivity to marjoram oil at concentrations ranging from 5 to 7 µl/ml. The two hospital strains showed sensitivity in media containing 10 and 15 µl/ml oil. The major components of the oil were cis-beta-terpineol (23.5%), 1,6-octadien-3-ol, 3,7-dimethyl-, 2-aminobenzoate (10.9%), alpha-terpineol (8.6%), and linalool (6.3%). Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) analyses were used to determine the differences between treated and untreated hospital strains. SEM results showed that treated cells were smaller in size than control cells. TEM data showed that cell lysis had occurred in treated cells: treated cells had ruptured cell walls and appeared empty of cytoplasm compared to control cells, which were shown to be intact with a normal volume of cytoplasm. The results indicated that marjoram oil has a positive antimicrobial effect on hospital-resistant microbes. Natural crude extracts can be perfect resources for new antimicrobial drugs.
Keywords: antimicrobial activity, essential oil, hospital resistance microbes, marjoram
Procedia PDF Downloads 446
622 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester
Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa
Abstract:
Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, the B-H loop, AC relative permeability, and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips, comprising 20 CGO and 20 HGO, 305 mm x 30 mm x 0.27 mm, from a supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for the calculation of magnetic field strength and flux density respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, in order to have repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz, and 92 kHz bandwidth, was chosen to take the measurements to minimize the influence of thermal noise. In order to reduce environmental noise, the yokes, sample, and search coil carrier were placed in a noise shielding chamber. HGO was found to have better magnetic properties in both high and low magnetisation regimes. This is because of the higher grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is better than CGO in both low and high magnetic field applications.
Keywords: flux density, electrical steel, LabVIEW, magnetization
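The two acquired channels map to H and B as the abstract describes: the shunt voltage gives the magnetising current (hence H via the primary turns and magnetic path length), and the secondary voltage integrates to B. The sketch below demonstrates that arithmetic on simulated waveforms; the geometry values echo the abstract, while the path-length choice and waveform amplitudes are assumptions.

```python
import numpy as np

N1, N2 = 100, 500              # primary / secondary turns (from the abstract)
R_SHUNT = 4.7                  # shunt resistance, ohms (from the abstract)
L_M = 0.290                    # magnetic path length (m), assumed = yoke length
AREA = 0.030 * 0.27e-3         # strip cross-section (m^2): 30 mm x 0.27 mm

fs, f = 204_800, 50            # sampling rate (Hz) and magnetising frequency (Hz)
t = np.arange(0, 1 / f, 1 / fs)

# Simulated acquisitions standing in for the two DAQ channels.
v_shunt = 0.05 * np.sin(2 * np.pi * f * t)                            # channel 1
v_sec = N2 * AREA * 1.5 * 2 * np.pi * f * np.cos(2 * np.pi * f * t)   # channel 2

H = N1 * (v_shunt / R_SHUNT) / L_M               # A/m, from Ampere's law
B = np.cumsum(v_sec) * (1 / fs) / (N2 * AREA)    # T, numeric integral of v = N*A*dB/dt
B -= B.mean()                                    # remove integration offset

print(f"peak H = {H.max():.2f} A/m, peak B = {B.max():.3f} T")
```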
Procedia PDF Downloads 291
621 Concepts of the Covid-19 Pandemic and the Implications of Vaccines for Health Security in Nigeria and Diasporas
Authors: Wisdom Robert Duruji
Abstract:
The outbreak of SARS-CoV-2 infection was recorded in January 2020 in Wuhan City, Hubei Province, China. This study examines the concepts of the COVID-19 pandemic and the implications of vaccines for health security in Nigeria and the diaspora. It challenges the widely accepted assumption that the first case of coronavirus infection in Nigeria was recorded on February 27th, 2020, in Lagos. The study utilizes a range of research methods to achieve its objectives. These include the double-layered culture technique, literature review, website knowledge, Google searches, news media information, academic journals, fieldwork, and on-site observations. These diverse methods allow for a comprehensive analysis of the concepts and implications being studied. The study finds that coronavirus infection can be asymptomatic; the antigenicity of the leukocytes (white blood cells), which produce immunogenic haptens or interferons (α, β and γ) that fight infectious parasites, may be the immune response that prevented severe virulence in healthy individuals, which is why healthy coronavirus patients in Nigeria naturally recovered two to three weeks after the onset of infection and tested negative. However, the fatality data from the Nigerian Centre for Disease Control (NCDC) are incorrect according to this study's findings; it argues that the fatalities were primarily due to underlying ailments, hunger, and malnutrition in debilitated, comorbid, or compromised patients. This study concluded that the kits and Polymerase Chain Reaction (PCR) machine currently used by the Nigerian Centre for Disease Control (NCDC) in testing and confirming COVID-19 in Nigeria are not ideal; as programmed, they do not separate the strain into its specific serotypes within the genus coronavirus and family Coronaviridae, and might have confirmed patients with febrile symptoms caused by cough, catarrh, typhoid, and malaria parasites as COVID-19 positive. Therefore, this study holds that the coronavirus species infecting people in Nigeria are opportunistic parasites that thrive in human immuno-suppressed conditions, like the herpesvirus; they cannot be eradicated by vaccines; the only virucides are interferons, immunoglobulins, and probably synthetic antiviral guanosine drugs like copegus or ribavirin. The findings emphasize that COVID-19 is not the primary pandemic disease in Nigeria; the lockdown was a mirage and not necessary; rather, the pandemic diseases in Nigeria are corruption, nepotism, hunger, and malnutrition, caused by ineptitude in governance, religious dichotomy, and ethnic conflicts.
Keywords: coronavirus, corruption, Covid-19 pandemic, lock-down, Nigeria, vaccine
Procedia PDF Downloads 68620 Study and Simulation of a Dynamic System Using Digital Twin
Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli
Abstract:
Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, the Internet of Things, Augmented Reality, Artificial Intelligence, and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point by using the least squares method. The linearized model is embedded in a Digital Twin using Automation Studio® from Famic Technologies. To validate the usage of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the Digital Twin. Furthermore, in order to develop the nonlinear model on a Digital Twin, the didactic level plant is identified using the Hammerstein method. Different steps are applied to the plant, and from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As in the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to that of the real system at different operating points. From the practical results obtained, one can conclude that using a Digital Twin to study dynamic systems is extremely useful in the industrial environment, given that it is possible to develop and tune controllers by using the virtual model of the real system.Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models
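To make the identification step concrete, the following is a minimal sketch of discrete-time least-squares identification in ARX form around a fixed operating point, in the spirit of the linear identification described above. The model orders, the helper name identify_arx, and the use of Python with NumPy are illustrative assumptions; the paper's actual workflow uses Automation Studio and its own tooling.

```python
import numpy as np

def identify_arx(u, y, na=2, nb=2):
    """Least-squares fit of
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]
    from input/output numpy arrays u, y."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        past_y = -y[k - na:k][::-1]   # [-y[k-1], ..., -y[k-na]]
        past_u = u[k - nb:k][::-1]    # [ u[k-1], ...,  u[k-nb]]
        rows.append(np.concatenate([past_y, past_u]))
    phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta[:na], theta[na:]     # AR coefficients, input coefficients

# Hypothetical usage with logged step-test data:
# a_coeffs, b_coeffs = identify_arx(u_data, y_data, na=2, nb=2)
```

A Hammerstein model, used here for the nonlinear case, would prepend a static nonlinearity (for example, a polynomial in u) to such a linear block and fit both parts from step tests across the operating range.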
Procedia PDF Downloads 148619 In Silico Study of Antiviral Drugs Against Three Important Proteins of Sars-Cov-2 Using Molecular Docking Method
Authors: Alireza Jalalvand, Maryam Saleh, Somayeh Behjat Khatouni, Zahra Bahri Najafi, Foroozan Fatahinia, Narges Ismailzadeh, Behrokh Farahmand
Abstract:
Objective: In the last two decades, coronaviruses have caused repeated outbreaks, and the recent outbreak of SARS-CoV-2 imposed a global pandemic. Despite the increasing prevalence of the disease, there are no effective drugs to treat it. Computational drug studies offer a suitable and rapid way to identify an effective drug and help treat the global pandemic. This study used molecular docking methods to examine the potential inhibition of over 50 antiviral drugs against three fundamental proteins of SARS-CoV-2. Methods: Through a literature review, three important proteins (a key protease, RNA-dependent RNA polymerase (RdRp), and spike) were selected as drug targets. Three-dimensional (3D) structures of the protease, spike, and RdRp proteins were obtained from the Protein Data Bank, and the proteins were energy-minimized. Over 50 antiviral drugs were considered as candidates for protein inhibition, and their 3D structures were obtained from drug databases. AutoDock 4.2 software was used to define the molecular docking settings and run the algorithm. Results: Five drugs, namely indinavir, lopinavir, saquinavir, nelfinavir, and remdesivir, exhibited the highest inhibitory potency against all three proteins, based on the binding energies and drug binding positions deduced from docking and hydrogen-bonding analysis. Conclusions: Among the drugs mentioned, saquinavir and lopinavir showed the highest inhibitory potency against all three proteins compared to the other drugs. They may enter laboratory-phase studies as a dual-drug treatment to inhibit SARS-CoV-2.Keywords: covid-19, drug repositioning, molecular docking, lopinavir, saquinavir
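As a purely illustrative sketch of the ranking logic implied by the results, the snippet below orders candidate drugs by their mean docking binding energy across the three targets, where more negative energies indicate stronger predicted binding. The energy values are hypothetical placeholders, not results from the study.

```python
from statistics import mean

# Hypothetical binding energies (kcal/mol) per drug against
# [protease, RdRp, spike]; more negative means stronger binding.
scores = {
    "saquinavir": [-9.1, -8.4, -8.8],
    "lopinavir":  [-8.9, -8.2, -8.5],
    "nelfinavir": [-8.3, -7.9, -8.0],
    "indinavir":  [-8.1, -7.7, -7.8],
    "remdesivir": [-7.8, -7.5, -7.9],
}

# Rank drugs by mean binding energy across the three targets.
for drug in sorted(scores, key=lambda d: mean(scores[d])):
    print(f"{drug}: mean binding energy {mean(scores[drug]):.2f} kcal/mol")
```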
Procedia PDF Downloads 88618 Geochemistry and Petrogenesis of High-K Calc-Alkaline Granitic Rocks of Song, Hawal Massif, N. E. Nigeria
Authors: Ismaila Haruna
Abstract:
The global downturn in fossil energy prices and dwindling oil reserves in Nigeria have ignited interest in the search for alternative sources of foreign income for the country. Solid minerals, particularly uranium and other base metals such as lead and zinc, have been considered potentially good options. Several occurrences of these minerals have been discovered in both the sedimentary and granitic rocks of the Hawal and Adamawa Massifs, as well as in the adjoining Benue Trough in northeastern Nigeria. However, the paucity of geochemical data and the consequent poor petrogenetic knowledge of the granitoids in this region have made exploration work difficult. Song, a small area within the Hawal Massif, was mapped, and the collected samples were chemically analysed at Activation Laboratories, Canada, by the fusion dissolution technique of Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Field mapping shows that the area is underlain by granites and diorites with pockets of gneisses and pegmatites, and that these rocks consist of microcline, quartz, plagioclase, biotite, hornblende, pyroxene, and accessory apatite, zircon, sphene, magnetite, and opaques in various proportions. Geochemical data show continuous compositional variation from diorite to granite within a silica range of 52.69 to 76.04 wt %. Plots of the data on various Harker variation diagrams show distinct evolutionary trends from diorites to granites, indicated by decreasing CaO, Fe2O3, MnO, MgO, and TiO2, and increasing K2O with increasing silica. This pattern is reflected in the trace element data, which, in general, decrease from diorite to granite with rising Rb and K. Tectonic, triangular, and other discrimination diagrams indicate high-K calc-alkaline trends, syn-collisional granite signatures, and I-type characteristics, with A/CNK of less than 1.1 (minimum of 0.58 and maximum of 0.94) and a strong potassic character (K2O/Na2O > 1). However, only the granites are slightly peraluminous, containing a high silica percentage (68.46 to 76.04 wt %), K2O of 2.71 to 6.16 wt %, and low CaO (1.88 wt % on average). Chondrite-normalised rare earth element trends indicate strongly fractionated REEs and enriched LREEs, with a slightly increasing negative Eu anomaly from the diorite to the granite. On the basis of the field and geochemical data, the granitoids are interpreted to be high-K calc-alkaline, I-type, formed as a result of hybridization between mantle-derived magma and continental source materials (probably older meta-sediments) in a syn-collisional tectonic setting.Keywords: geochemistry, granite, Hawal Massif, Nigeria, petrogenesis, song
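For readers unfamiliar with the alumina saturation index used above, the sketch below shows the standard molar A/CNK calculation from oxide weight percent, which classifies a rock as peraluminous (A/CNK > 1) or metaluminous (A/CNK < 1). The oxide values in the example are hypothetical, not data from this paper.

```python
# Molar masses (g/mol) of the oxides entering the A/CNK ratio.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def a_cnk(wt):
    """A/CNK = molar Al2O3 / (CaO + Na2O + K2O), from oxide wt% values."""
    mol = {ox: wt[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])

# Hypothetical granite analysis: prints ~1.06, i.e. slightly peraluminous.
print(round(a_cnk({"Al2O3": 14.2, "CaO": 1.9, "Na2O": 3.1, "K2O": 4.5}), 2))
```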
Procedia PDF Downloads 235617 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has come to be recognized as a serious environmental problem worldwide, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the world's total CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. In parallel, the performance-based design (PBD) methodology, based on nonlinear analysis, has been developed vigorously since the 1994 Northridge Earthquake to assess and assure the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code-based design approach cannot directly address inelastic earthquake responses or exactly assure building performance. Although CO2 emissions and the PBD approach are both rising issues in the construction industry and structural engineering, few or no studies have considered these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering the structural materials. A 4-story, 4-span reinforced concrete building was optimally designed using the non-dominated sorting genetic algorithm-II (NSGA-II) to minimize the CO2 emissions and cost of the building while satisfying a specific seismic performance objective (collapse prevention under the maximum considered earthquake) as well as prescriptive code regulations. The optimized design results showed that minimized CO2 emissions and building cost were achieved while the specified seismic performance was satisfied. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.Keywords: CO2 emissions, performance based design, optimization, sustainable design
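At the core of NSGA-II is non-dominated sorting of candidate solutions on the competing objectives, here CO2 emissions and cost. The sketch below shows a minimal version of that step; the candidate designs and objective values are hypothetical, and a full NSGA-II additionally applies crowding-distance sorting, selection, crossover, and mutation.

```python
def dominates(a, b):
    """True if design a is no worse than b in both objectives
    and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(designs):
    """Return the designs not dominated by any other (the first front)."""
    return [d for d in designs
            if not any(dominates(e, d) for e in designs if e is not d)]

# Hypothetical candidate frames as (CO2 in tonnes, cost in $1000s).
candidates = [(120.0, 310.0), (135.0, 290.0), (118.0, 335.0), (140.0, 330.0)]
print(pareto_front(candidates))  # the dominated design (140, 330) drops out
```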
Procedia PDF Downloads 406616 Assessment of N₂ Fixation and Water-Use Efficiency in a Soybean-Sorghum Rotation System
Authors: Mmatladi D. Mnguni, Mustapha Mohammed, George Y. Mahama, Alhassan L. Abdulai, Felix D. Dakora
Abstract:
Industrially produced nitrogen (N) fertilizers are justifiably credited for the current state of food production across the globe, but their continued use is not sustainable and has adverse effects on the environment. The search for greener and more sustainable technologies has led to increased exploitation of biological systems such as legumes and organic amendments for plant growth promotion in cropping systems. Although the benefits of legume rotation with cereal crops have been documented, the full benefits of soybean-sorghum rotation systems have not been properly evaluated in Africa. This study explored the benefits of soybean-sorghum rotation by assessing N₂ fixation and water-use efficiency of soybean in rotation with sorghum, with and without organic and inorganic amendments. The field trials were conducted from 2017 to 2020. Sorghum was grown on plots previously cultivated to soybean, and vice versa. The succeeding sorghum crop received fertilizer amendments [organic fertilizer (5 tons/ha as poultry litter, OF); inorganic fertilizer (80N-60P-60K), IF; organic + inorganic fertilizer (OF+IF); half organic + inorganic fertilizer (HOF+IF); organic + half inorganic fertilizer (OF+HIF); half organic + half inorganic (HOF+HIF); and control] arranged in a randomized complete block design. The soybean crop succeeding the fertilized sorghum received a blanket application of triple superphosphate at 26 kg P ha⁻¹. Nitrogen fixation and water-use efficiency were assessed at the flowering stage using the ¹⁵N and ¹³C natural abundance techniques, respectively. The results showed that the shoot dry matter of soybean plants supplied with HOF+HIF was much higher (43.20 g plant⁻¹), followed by OF+HIF (36.45 g plant⁻¹) and HOF+IF (33.50 g plant⁻¹). Shoot N concentration ranged from 1.60 to 1.66%, and total N content from 339 to 691 mg N plant⁻¹. The δ¹⁵N values of soybean shoots ranged from -1.17‰ to -0.64‰, with plants growing on plots previously treated with HOF+HIF exhibiting much higher δ¹⁵N values and hence a lower percentage of N derived from N₂ fixation (%Ndfa). Shoot %Ndfa values varied from 70 to 82%. The high %Ndfa values obtained in this study suggest that the previous year's organic and inorganic fertilizer amendments to sorghum did not inhibit N₂ fixation in the following soybean crop. The amount of N fixed by soybean ranged from 106 to 197 kg N ha⁻¹. The treatments showed marked variation in carbon (C) content, with the HOF+HIF treatment recording the highest C content. Although water-use efficiency varied from -29.32‰ to -27.85‰, shoot water-use efficiency, C concentration, and the C:N ratio were not altered by the previous fertilizer application to sorghum. This study provides strong evidence that previous HOF+HIF sorghum residues can enhance N nutrition and water-use efficiency in nodulated soybean.Keywords: ¹³C and ¹⁵N natural abundance, N-fixed, organic and inorganic fertilizer amendments, shoot %Ndfa
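The ¹⁵N natural abundance estimate of %Ndfa reported above follows a standard formula comparing the legume's shoot δ¹⁵N with that of a non-fixing reference plant. The sketch below applies it; the delta values are hypothetical, and the B value (the δ¹⁵N of a legume fully dependent on N₂ fixation) is crop-specific and assumed here.

```python
def ndfa_percent(d15n_ref, d15n_legume, b_value):
    """%Ndfa = 100 * (d15N_reference - d15N_legume) / (d15N_reference - B)."""
    return 100.0 * (d15n_ref - d15n_legume) / (d15n_ref - b_value)

# Hypothetical example: non-fixing reference plant at +3.5 permil,
# soybean shoot at -0.9 permil, assumed B value of -1.89 permil.
print(round(ndfa_percent(3.5, -0.9, -1.89), 1))  # ~81.6, within the 70-82% range
```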
Procedia PDF Downloads 169615 Examining Kokugaku as a Pattern of Defining Identity in Global Comparison
Authors: Mária Ildikó Farkas
Abstract:
Kokugaku of the Edo period can be seen as a key factor in the definition of cultural (and national) identity in the 18th and early 19th centuries on the basis of Japanese cultural heritage. Kokugaku focused on the Japanese classics: on exploring, studying, and reviving (or even inventing) ancient Japanese language, literature, myths, history, and political ideology. 'Japanese culture' as such was distinguished from Chinese (and all other) cultures, and a 'Japanese identity' was thus defined. Meiji scholars used kokugaku conceptions of Japan to construct a modern national identity based on premodern and culturalist conceptions of community. The Japanese cultural movement of the 18th and 19th centuries (kokugaku), which defined cultural and national identity before modernization, can be compared not with the development of Western Europe (where national identity was strongly attached to modern nation states) or other parts of Asia (where such identities emerged after Western colonization), but rather with the 'national awakening' movements of the peoples of East Central Europe, a comparison which has not yet been dealt with in the secondary literature. The role of a common language, culture, history, and myths in the process of defining cultural identity, following mainly Miroslav Hroch's comparative and interdisciplinary theory of national development, can be examined in comparison with the identity-defining movements of the peoples of East Central Europe in the 18th and 19th centuries. In the shadow of a cultural and/or political 'monolith' (China for Japan, Germany for Central Europe), before modernity, ethnic groups or communities started to evolve their own identities through cultural movements focusing on their own language and culture, thus creating their cultural identity and, in the end, a new sense of community: the nation. Comparing actual texts ('narratives') of the kokugaku scholars and of Central European writers of the nation-building period (18th and early 19th centuries) can reveal the similarities of these discourses of deliberate searches for identity. Similar motifs of argument can be identified in these narratives: 'language' as the primary bearer of collective identity; the role of language in culture; 'culture' as the main common attribute of the community; and similar aspirations to explore, search out, and develop the native language, 'genuine' culture, and 'original' traditions. This comparative research, offering 'development patterns' for interpretation, can help us understand processes that may otherwise be ambiguously considered 'backward' or even 'deleterious' (e.g., cultural nationalism) or simply 'unique'. 'Cultural identity' played a very important role in the formation of national identity during modernization, especially in the case of non-Western communities, which had to face the danger of losing their identities in the course of the 'Westernization' accompanying modernization.Keywords: cultural identity, Japanese modernization, kokugaku, national awakening
Procedia PDF Downloads 271