Search results for: lean tools and techniques
1643 Designing Nickel Coated Activated Carbon (Ni/AC) Based Electrode Material for Supercapacitor Applications
Authors: Zahid Ali Ghazi
Abstract:
Supercapacitors (SCs) have emerged as promising energy storage devices because of their fast charge-discharge characteristics and high power densities. In the current study, a simple approach is used to coat activated carbon (AC) with a thin layer of nickel (Ni) by an electroless deposition process to enhance the electrochemical performance of the SC. The synergistic combination of the large surface area and high electrical conductivity of the AC, as well as the pseudocapacitive behavior of the metallic Ni, has shown great potential to overcome the limitations of traditional SC materials. First, the materials were characterized using X-ray diffraction (XRD) for crystallography, scanning electron microscopy (SEM) for surface morphology, and energy-dispersive X-ray spectroscopy (EDX) for elemental analysis. The electrochemical performance of the nickel-coated activated carbon (Ni/AC) is systematically evaluated through various techniques, including galvanostatic charge-discharge (GCD), cyclic voltammetry (CV), and electrochemical impedance spectroscopy (EIS). The GCD results revealed that Ni/AC has a higher specific capacitance (1559 F/g) than bare AC (222 F/g) at 1 A/g current density in a 2 M KOH electrolyte. Even at a higher current density of 20 A/g, the Ni/AC showed a high capacitance of 944 F/g as compared to 77 F/g by AC. The specific capacitance (1318 F/g) calculated from CV measurements for Ni/AC at 10 mV/s was in close agreement with the GCD data. Furthermore, the bare AC exhibited a low energy density of 15 Wh/kg at a power density of 356 W/kg, whereas an energy density of 111 Wh/kg at a power density of 360 W/kg was achieved by the Ni/AC-850 electrode, which also demonstrated a long cycle life with 94% capacitance retention over 50,000 charge/discharge cycles at 10 A/g. In addition, the EIS study disclosed that the Rs and Rct values of the Ni/AC electrodes were much lower than those of bare AC. The superior performance of Ni/AC is mainly attributed to the presence of abundant redox-active sites, the large electroactive surface area, and the corrosion resistance of Ni. We believe that this study will provide new insights into the controlled coating of ACs and other porous materials with metals for developing high-performance SCs and other energy storage devices.
Keywords: supercapacitor, cyclic voltammetry, coating, energy density, activated carbon
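For readers unfamiliar with how such figures are obtained, the quantities reported above are conventionally derived from the GCD and CV curves as follows; these are the standard textbook relations, assumed here since the abstract does not state the exact expressions used:
\[
C_s = \frac{I\,\Delta t}{m\,\Delta V}, \qquad
C_s^{\mathrm{CV}} = \frac{\int I\,\mathrm{d}V}{2\,m\,\nu\,\Delta V}, \qquad
E = \frac{C_s\,\Delta V^{2}}{2 \times 3.6}\ \left[\mathrm{Wh\,kg^{-1}}\right], \qquad
P = \frac{3600\,E}{\Delta t}\ \left[\mathrm{W\,kg^{-1}}\right]
\]
where I is the discharge current (A), Δt the discharge time (s), m the active mass (g), ΔV the potential window (V), and ν the CV scan rate (V/s).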
Procedia PDF Downloads 63
1642 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection
Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra
Abstract:
In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging
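A minimal sketch of the kind of hybrid feature-fusion model described above, assuming a TensorFlow/Keras workflow with nine output classes; the input size, fused head layers, and hyperparameters are illustrative assumptions, not the authors' exact configuration.
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50

NUM_CLASSES = 9            # nine skin conditions (per the abstract)
IMG_SHAPE = (224, 224, 3)  # assumed input size after resizing

def build_hybrid_model():
    inp = layers.Input(shape=IMG_SHAPE)
    # Both backbones are pre-trained on ImageNet and used as frozen feature extractors.
    vgg = VGG16(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
    res = ResNet50(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
    vgg.trainable = False
    res.trainable = False
    f1 = layers.GlobalAveragePooling2D()(vgg(inp))
    f2 = layers.GlobalAveragePooling2D()(res(inp))
    merged = layers.Concatenate()([f1, f2])   # fuse the two feature sets
    x = layers.Dense(256, activation="relu")(merged)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def compute_class_weights(labels):
    # Rare classes receive proportionally larger weights to counter class imbalance.
    counts = np.bincount(labels, minlength=NUM_CLASSES)
    total = counts.sum()
    return {c: total / (NUM_CLASSES * max(counts[c], 1)) for c in range(NUM_CLASSES)}

# Usage (x_train: image array, y_train: integer labels 0..8):
# model = build_hybrid_model()
# model.fit(x_train, y_train, epochs=10,
#           class_weight=compute_class_weights(y_train))
```
Passing the dictionary returned by compute_class_weights to model.fit(..., class_weight=...) is one common way to give under-represented classes more influence during training, as described in the abstract.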
Procedia PDF Downloads 87
1641 Effect of Electropolymerization Method in the Charge Transfer Properties and Photoactivity of Polyaniline Photoelectrodes
Authors: Alberto Enrique Molina Lozano, María Teresa Cortés Montañez
Abstract:
Polyaniline (PANI) photoelectrodes were electrochemically synthesized through electrodeposition employing three techniques: chronoamperometry (CA), cyclic voltammetry (CV), and potential pulse (PP) methods. The substrate used for electrodeposition was a fluorine-doped tin oxide (FTO) glass with dimensions of 2.5 cm x 1.3 cm. Subsequently, structural and optical characterization was conducted utilizing Fourier-transform infrared (FTIR) spectroscopy and UV-visible (UV-vis) spectroscopy, respectively. The FTIR analysis revealed variations in the molar ratio of benzenoid to quinonoid rings within the PANI polymer matrix, indicative of differing oxidation states arising from the distinct electropolymerization methodologies employed. In the optical characterization, differences in the energy band gap (Eg) values and in the positions of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) were observed, attributable to variations in doping levels and structural irregularities introduced during the electropolymerization procedures. To assess the charge transfer properties of the PANI photoelectrodes, electrochemical impedance spectroscopy (EIS) experiments were carried out in a 0.1 M sodium sulfate (Na₂SO₄) electrolyte. The results displayed a substantial decrease in charge transfer resistance with the PANI coatings compared to uncoated substrates, with PANI obtained through cyclic voltammetry (CV) presenting the lowest charge transfer resistance, in contrast to PANI obtained via chronoamperometry (CA) and potential pulses (PP). Subsequently, the photoactive response of the PANI photoelectrodes was measured through linear sweep voltammetry (LSV) and chronoamperometry. The photoelectrochemical measurements revealed a discernible photoactivity in all PANI-coated electrodes. However, PANI electropolymerized through CV displayed the highest photocurrent. Interestingly, PANI derived from chronoamperometry (CA) exhibited the most stable photocurrent over an extended period.
Keywords: PANI, photocurrent, photoresponse, charge separation, recombination
Procedia PDF Downloads 65
1640 Comparative Ante-Mortem Studies through Electrochemical Impedance Spectroscopy, Differential Voltage Analysis and Incremental Capacity Analysis on Lithium Ion Batteries
Authors: Ana Maria Igual-Munoz, Juan Gilabert, Marta Garcia, Alfredo Quijano-Lopez
Abstract:
Nowadays, several lithium-ion battery technologies are being commercialized. These chemistries present different properties that make them more suitable for different purposes. However, comparative studies showing the advantages and disadvantages of different chemistries are incomplete or scarce. Different non-destructive techniques are currently being employed to detect how ageing affects the active materials of lithium-ion batteries (LIBs). For instance, electrochemical impedance spectroscopy (EIS) is one of the most employed ones. This technique allows the user to identify the variations in the different resistances present in LIBs. On the other hand, differential voltage analysis (DVA) has shown to be a powerful technique to detect the processes affecting the different capacities present in LIBs. This technique shows variations in the state of health (SOH) and the capacities for one or both electrodes depending on their chemistry. Finally, incremental capacity analysis (ICA) is a widely known technique for being capable of detecting phase equilibria. It is reminiscent of the commonly used cyclic voltammetry, as it allows detecting some reactions taking place in the electrodes. In these studies, a set of ageing procedures has been applied to commercial batteries of different chemistries (NCA, NMC, and LFP). Afterwards, results of EIS, DVA, and ICA have been used to correlate them with the processes affecting each cell. Cyclability, overpotential, and temperature cycling studies reveal how the charge-discharge rates, cut-off voltage, and operation temperatures affect each chemistry. These studies will serve battery pack manufacturers, as well as common battery users, as they determine the different conditions affecting cells of each chemistry. Taking this into account, each cell could be adjusted to the final purpose of the battery application. Last but not least, all the degradation parameters observed are intended to be integrated into degradation models in the future. This will allow the implementation of the widely known digital twin concept for degradation in LIBs.
Keywords: lithium ion batteries, non-destructive analysis, different chemistries, ante-mortem studies, ICA, DVA, EIS
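As a concrete illustration of the DVA and ICA quantities discussed above, the sketch below computes dV/dQ and dQ/dV curves from a low-rate charge/discharge record with NumPy; the smoothing window and data layout are assumptions for illustration only, not the protocol used in the study.
```python
import numpy as np

def dva_ica(voltage, capacity, smooth_window=15):
    """Compute differential voltage (dV/dQ, DVA) and incremental capacity
    (dQ/dV, ICA) curves from a low-rate charge/discharge record.
    voltage  : array of cell voltage samples (V)
    capacity : array of cumulative capacity at the same samples (Ah)
    Raw numerical derivatives of measured data are noisy, so a simple
    moving-average filter is applied first."""
    k = np.ones(smooth_window) / smooth_window
    v = np.convolve(np.asarray(voltage, dtype=float), k, mode="same")
    q = np.convolve(np.asarray(capacity, dtype=float), k, mode="same")
    dv = np.gradient(v)
    dq = np.gradient(q)
    dvdq = np.divide(dv, dq, out=np.full_like(dv, np.nan),
                     where=np.abs(dq) > 1e-12)   # DVA curve
    dqdv = np.divide(dq, dv, out=np.full_like(dq, np.nan),
                     where=np.abs(dv) > 1e-12)   # ICA curve
    return dvdq, dqdv
```
Peaks in the dQ/dV curve correspond to the phase-equilibrium plateaus mentioned above, while shifts and shrinkage of dV/dQ features are commonly used to track capacity loss in each electrode.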
Procedia PDF Downloads 130
1639 Nanomaterials Based Biosensing Chip for Non-Invasive Detection of Oral Cancer
Authors: Suveen Kumar
Abstract:
Oral cancer (OC) is the sixth most deadly cancer in the world and includes tumours of the lips, floor of the mouth, tongue, palate, cheeks, sinuses, throat, etc. Conventionally, the techniques used for OC detection are toluidine blue staining, biopsy, liquid-based cytology, visual attachments, etc.; however, these are limited by their highly invasive nature, low sensitivity, time consumption, sophisticated instrument handling, sample processing, and high cost. Therefore, we developed biosensing chips for non-invasive detection of OC via the CYFRA-21-1 biomarker. CYFRA-21-1 (molecular weight: 40 kDa) is secreted in the saliva of OC patients, a non-invasive biological fluid, with a cut-off value of 3.8 ng mL-1, above which the subjects are considered to be suffering from oral cancer. Therefore, in the first work, 3-aminopropyl triethoxy silane (APTES) functionalized zirconia (ZrO2) nanoparticles (APTES/nZrO2) were used to successfully detect CYFRA-21-1 in a linear detection range (LDR) of 2-16 ng mL-1 with a sensitivity of 2.2 µA mL ng-1. Successively, APTES/nZrO2-RGO was employed to prevent agglomeration of ZrO2 by providing a high-surface-area reduced graphene oxide (RGO) support, and a much wider LDR (2-22 ng mL-1) was obtained with a remarkable limit of detection (LOD) of 0.12 ng mL-1. Further, an APTES/nY2O3/ITO platform was used for oral cancer biosensor development. The developed biosensor (BSA/anti-CYFRA-21-1/APTES/nY2O3/ITO) has a wider LDR (0.01-50 ng mL-1) with a remarkable limit of detection (LOD) of 0.01 ng mL-1. To improve the sensitivity of the biosensing platform, a biosensor based on a nanocomposite of yttria-stabilized nanostructured zirconia-reduced graphene oxide (nYZR) has been developed. The developed biosensing chip has the ability to detect CYFRA-21-1 biomolecules in the range of 0.01-50 ng mL-1, with an LOD of 7.2 pg mL-1 and a sensitivity of 200 µA mL ng-1. Further, the applicability of the fabricated biosensing chips was also checked through real sample (saliva) analysis of OC patients, and the obtained results showed good correlation with the standard enzyme-linked immunosorbent assay (ELISA) protein detection technique.
Keywords: non-invasive, oral cancer, nanomaterials, biosensor, biochip
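For context, the sensitivity and LOD figures quoted above are conventionally obtained from the calibration curve of the electrochemical response versus analyte concentration; the relations below are the standard definitions, assumed here since the abstract does not spell them out:
\[
\text{Sensitivity} = \frac{\Delta I}{\Delta C}\ \ (\text{slope of the calibration curve, often normalized by electrode area}), \qquad
\mathrm{LOD} = \frac{3\,\sigma_{b}}{m}
\]
where ΔI/ΔC (or m) is the calibration slope within the linear detection range and σ_b is the standard deviation of the blank (or lowest-concentration) response.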
Procedia PDF Downloads 127
1638 Microstructure of Virgin and Aged Asphalts by Small-Angle X-Ray Scattering
Authors: Dong Tang, Yongli Zhao
Abstract:
The study of the microstructure of asphalt is of great importance for the analysis of its macroscopic properties. However, the peculiarities of the chemical composition of asphalt itself and the limitations of existing direct imaging techniques have caused researchers to face many obstacles in studying the microstructure of asphalt. The advantage of small-angle X-ray scattering (SAXS) is that it allows quantitative determination of the internal structure of opaque materials and is suitable for analyzing the microstructure of materials. Therefore, the SAXS technique was used to study the evolution of microstructures on the nanoscale during asphalt aging. The reasons for the change in scattering contrast during asphalt aging were also explained with the help of Fourier transform infrared spectroscopy (FTIR). The SAXS experimental results show that the SAXS curves of asphalt are similar to the scattering curves of scattering objects with two-level structures. The Porod curve for asphalt shows that there is no obvious interface between the micelles and the surrounding mediums, and there is only a fluctuation of the electron density between the two. The Beaucage model fit of the SAXS patterns shows that the scattering coefficient P of the asphaltene clusters, as well as the size of the micelles, gradually increases with the aging of the asphalt. Furthermore, aggregation exists between the micelles of asphalt and becomes more pronounced with increasing aging. During asphalt aging, the electron density difference between the micelles and the surrounding mediums gradually increases, leading to an increase in the scattering contrast of the asphalt. Under long-term aging conditions, due to the gradual transition from maltenes to asphaltenes, the electron density difference between the micelles and the surrounding mediums decreases, resulting in a decrease in the scattering contrast of asphalt SAXS. Finally, this paper correlates the macroscopic properties of asphalt with microstructural parameters, and the results show that the high-temperature rutting resistance of asphalt is enhanced and the low-temperature cracking resistance decreases due to the aggregation of micelles and the generation of new micelles. These results are useful for understanding the relationship between changes in microstructure and changes in properties during asphalt aging and provide theoretical guidance for the regeneration of aged asphalt.
Keywords: asphalt, Beaucage model, microstructure, SAXS
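For reference, the single-level form of the Beaucage unified model mentioned above is usually written as follows (a two-level fit simply sums two such terms, consistent with the two-level structure reported); this is the standard literature form, not necessarily the exact parameterization used by the authors:
\[
I(q) = G\,\exp\!\left(-\frac{q^{2}R_g^{2}}{3}\right) + B\left[\frac{\bigl(\operatorname{erf}\!\bigl(qR_g/\sqrt{6}\bigr)\bigr)^{3}}{q}\right]^{P}
\]
where G is the Guinier prefactor, R_g the radius of gyration of the micelles, B the power-law prefactor, and P the scattering exponent discussed above.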
Procedia PDF Downloads 80
1637 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System
Authors: Imran Dayan, Ashiqul Khan
Abstract:
Internal corporate fraud, which is fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, the act is ultimately harmful to the entity in the long run. Internal fraud is often carried out by relying upon aberrations from usual business processes. Business processes are the lifeblood of a company in the modern managerial context. Such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations into their business processes, and Enterprise Resource Planning (ERP) systems being at the heart of such business processes is a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect any sort of fraudulent activities hidden within the day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud by developing a framework for measuring the risks of fraud through Process Mining techniques, and hence finds risky designs and loose ends within these business processes. This framework helps not only in identifying existing cases of fraud in the records of the event log but also signals the overall riskiness of certain business processes, and hence draws attention to carrying out a redesign of such processes to reduce the chance of future internal fraud while improving internal control within the organisation. The research adds value by applying the concepts of Process Mining to the analysis of data from modern-day applications of business process records, which is the ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control as well as provide external auditors with a tool of use in case of suspicion. The research proves its usefulness through a few case studies conducted with respect to big corporations with complex business processes and an ERP in place.
Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining
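To make the idea concrete, the sketch below shows one very simple process-mining-style screen over an ERP event log: each case is checked against a reference activity order and a basic segregation-of-duties rule. The reference process, event field layout, and rules are illustrative assumptions only, not the framework proposed in the paper.
```python
from collections import defaultdict

# Hypothetical "happy path" for a purchase-to-pay process.
REFERENCE_PATH = ["create_po", "approve_po", "receive_goods", "post_invoice", "pay_invoice"]

def screen_event_log(events):
    """events: iterable of (case_id, activity, user) tuples, in timestamp order.
    Returns a dict mapping risky case ids to the list of flags raised."""
    traces = defaultdict(list)   # case_id -> ordered list of activities
    users = defaultdict(set)     # case_id -> users involved in the case
    for case_id, activity, user in events:
        traces[case_id].append(activity)
        users[case_id].add(user)
    risky = {}
    for case_id, trace in traces.items():
        flags = []
        # Conformance check: the reference activities must appear, in order.
        if [a for a in trace if a in REFERENCE_PATH] != REFERENCE_PATH:
            flags.append("deviates from reference process")
        # Segregation of duties: one user executing the whole case is a red flag.
        if len(users[case_id]) == 1 and len(trace) > 2:
            flags.append("single user executed all steps")
        if flags:
            risky[case_id] = flags
    return risky

# Example:
# log = [("C1", "create_po", "u1"), ("C1", "approve_po", "u1"),
#        ("C1", "pay_invoice", "u1")]
# print(screen_event_log(log))   # C1 flagged on both rules
```
A full framework would aggregate such flags into a risk score per process design rather than per case, but the two rules above illustrate the kind of aberration from usual business processes that the event log makes visible.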
Procedia PDF Downloads 335
1636 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal
Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan
Abstract:
This study analysed the classification accuracy for gearbox faults using Machine Learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study proposes various machine learning algorithms, with the aid of these vibration signals, for obtaining the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a Data Acquisition System (DAQ). Statistical features were extracted from the acquired vibration signal under various operating conditions. Then the extracted features were given as input to the algorithms for fault classification. Supervised Machine Learning algorithms such as Support Vector Machines (SVM) and unsupervised algorithms such as Deep Feed Forward Neural Network (DFFNN) and Deep Belief Networks (DBN) are used for fault classification. The fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal
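As an illustration of the statistical feature extraction step described above, the sketch below computes a few features commonly used for vibration-based fault classification; the exact feature set used in the study is not given in the abstract, so this list is an assumption.
```python
import numpy as np
from scipy.stats import kurtosis, skew

def vibration_features(segment):
    """Statistical features from one vibration signal segment, to be fed to a
    classifier (SVM, DFFNN, DBN, etc.)."""
    seg = np.asarray(segment, dtype=float)
    rms = np.sqrt(np.mean(seg ** 2))
    peak = np.max(np.abs(seg))
    return {
        "mean": seg.mean(),
        "std": seg.std(),
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms if rms > 0 else 0.0,  # impulsiveness indicator
        "kurtosis": kurtosis(seg),                       # sensitive to impacts
        "skewness": skew(seg),
    }
```
Each operating condition and fault class then contributes many such feature vectors, which form the training set for the individual and fused classifiers compared in the study.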
Procedia PDF Downloads 114
1635 Morphology Analysis of Apple-Carrot Juice Treated by Manothermosonication (MTS) and High Temperature Short Time (HTST) Processes
Authors: Ozan Kahraman, Hao Feng
Abstract:
Manothermosonication (MTS), which consists of the simultaneous application of heat and ultrasound under moderate pressure (100-700 kPa), is one of the technologies that destroys microorganisms and inactivates enzymes. Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through an ultra-thin specimen, interacting with the specimen as it passes through it. The environmental scanning electron microscope, or ESEM, is a scanning electron microscope (SEM) that allows for the option of collecting electron micrographs of specimens that are "wet" and uncoated. These microscopy techniques allow us to observe the processing effects on the samples. This study was conducted to investigate the effects of MTS and HTST treatments on the morphology of apple-carrot juices by using TEM and ESEM microscopy. Apple-carrot juices treated with HTST (72 °C, 15 s), MTS 50 °C (60 s, 200 kPa), and MTS 60 °C (30 s, 200 kPa) were observed with both ESEM and TEM microscopy. For TEM analysis, a drop of the solution dispersed in fixative solution was put onto a Parafilm® sheet. The copper-coated side of the TEM sample holder grid was gently laid on top of the droplet and incubated for 15 min. A drop of a 7% uranyl acetate solution was added and held for 2 min. The grid was then removed from the droplet, allowed to dry at room temperature, and introduced into the TEM. For ESEM analysis, critical point drying of the filters was performed using a critical point dryer (CPD) (Samdri PVT-3D, Tousimis Research Corp., Rockville, MD, USA). After the CPD, each filter was mounted onto a stub and coated with gold/palladium with a sputter coater (Desk II TSC Denton Vacuum, Moorestown, NJ, USA). E. coli O157:H7 cells on the filters were observed with an ESEM (Philips XL30 ESEM-FEG, FEI Co., Eindhoven, The Netherlands). ESEM (Environmental Scanning Electron Microscopy) and TEM (Transmission Electron Microscopy) images showed extensive damage for the samples treated with MTS at 50 and 60 °C, such as ruptured cells and breakage of cell membranes. The damage increased with increasing exposure time.
Keywords: MTS, HTST, ESEM, TEM, E. coli O157:H7
Procedia PDF Downloads 285
1634 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing
Authors: Fazl Ullah, Rahmat Ullah
Abstract:
This paper presents a comprehensive study focused on the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and formulate strategies for production optimization. A sophisticated model integrating gas flow dynamics and effective stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity average region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, porous elastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The developed model is then validated using field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the governing partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, diversion capacity, and width on gas production. For reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The research findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.
Keywords: fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation
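One commonly used way to express the fluid-solid coupling named in the keywords is through a Biot-type effective stress and an exponential stress-sensitivity law for permeability; the forms below are standard literature assumptions given for orientation only, since the abstract does not state the constitutive relations actually used in the model:
\[
\sigma_{\mathrm{eff}} = \sigma - \alpha\,p, \qquad
k(\sigma_{\mathrm{eff}}) = k_{0}\,\exp\!\bigl[-c\,(\sigma_{\mathrm{eff}} - \sigma_{\mathrm{eff},0})\bigr]
\]
where σ is the total stress, p the pore pressure, α the Biot coefficient, k_0 the permeability at the reference effective stress σ_eff,0, and c a stress-sensitivity coefficient.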
Procedia PDF Downloads 71
1633 A Kierkegaardian Reading of Iqbal's Poetry as a Communicative Act
Authors: Sevcan Ozturk
Abstract:
The overall aim of this paper is to present a Kierkegaardian approach to Iqbal's use of literature as a form of communication. Despite belonging to different historical, cultural, and religious backgrounds, the philosophical approaches of Soren Kierkegaard, 'the father of existentialism,' and Muhammad Iqbal, 'the spiritual father of Pakistan,' present certain parallels. Both Kierkegaard and Iqbal take human existence as the starting point for their reflections, emphasise the subject of becoming genuine religious personalities, and develop a notion of the self. While doing these, they both adopt parallel methods, employ literary techniques and poetical forms, and use their literary works as a form of communication. The problem is that Iqbal does not provide a clear account of his method as Kierkegaard does in his works. As a result, Iqbal's literary approach appears to be a collection of contradictions. This is mainly because, although he writes most of his works in poetical form, he condemns all kinds of art, including poetry. Moreover, while attacking Islamic mysticism, he, at the same time, uses classical literary forms and a number of traditional mystical, poetic symbols. This paper will argue that the contradictions found in Iqbal's approach are actually a significant part of Iqbal's way of communicating with his reader. It is the contention of this paper that, with the help of the parallels between the literary and philosophical theories of Kierkegaard and Iqbal, the application of Kierkegaard's method to Iqbal's use of poetry as a communicative act will make it possible to dispel the seeming ambiguities in Iqbal's literary approach. The application of Kierkegaard's theory to Iqbal's literary method will include, first, an analysis of the main principles of Kierkegaard's own literary technique of 'indirect communication,' which is a crucial term of his existentialist philosophy. Second, the clash between what Iqbal says about art and poetry and what he does will be highlighted in the light of the Kierkegaardian theory of indirect communication. It will be argued that Iqbal's literary technique can be considered a form of 'indirect communication,' and that reading his technique in this way helps in dispelling the contradictions in his approach. It is hoped that this paper will cultivate a dialogue between those who work in the fields of comparative philosophy, Kierkegaard studies, existentialism, contemporary Islamic thought, Iqbal studies, and literary criticism.
Keywords: comparative philosophy, existentialism, indirect communication, intercultural philosophy, literary communication, Muhammad Iqbal, Soren Kierkegaard
Procedia PDF Downloads 335
1632 Process Safety Management Digitalization via SHEQTool based on Occupational Safety and Health Administration and Center for Chemical Process Safety, a Case Study in Petrochemical Companies
Authors: Saeed Nazari, Masoom Nazari, Ali Hejazi, Siamak Sanoobari Ghazi Jahani, Mohammad Dehghani, Javad Vakili
Abstract:
More than ever, digitization is an imperative for businesses to keep their competitive advantages, foster innovation, and reduce paperwork. To design and successfully implement digital transformation initiatives within a process safety management system, employees need to be equipped with the right tools, frameworks, and best practices. We developed a unique full-stack application, called SHEQTool, which is entirely dynamic and built on our extensive expertise, experience, and client feedback, to help business processes, particularly operations safety management. We use our best knowledge and the scientific methodologies published by CCPS and in OSHA guidelines to streamline operations and integrate them into task management within petrochemical companies. We digitalize their main process safety management system elements and their sub-elements, such as hazard identification and risk management, training and communication, inspection and audit, critical changes management, contractor management, permit to work, pre-start-up safety review, incident reporting and investigation, emergency response plan, personal protective equipment, occupational health, and action management, in a fully customizable manner with no programming needs for users. We review the feedback from the main actors within the petrochemical plant, which highlights improvements in their business performance and productivity as well as better tracking of their functions' key performance indicators (KPIs), because the tool: 1) saves time, resources, and costs of all paperwork in our businesses (by Digitalization); 2) reduces errors and improves performance within the management system by covering most of the daily software needs of the organization and reducing the complexity and associated costs of numerous tools and their required training (One Tool Approach); 3) focuses on management systems, integrates functions, and puts them into traceable task management (RASCI and Flowcharting); 4) helps the entire enterprise be resilient to any change of processes, technologies, or assets with minimum costs (through Organizational Resilience); 5) significantly reduces incidents and errors via world-class safety management programs and elements (by Simplification); 6) gives the companies a systematic, traceable, risk-based, process-based, and science-based integrated management system (via proper Methodologies); 7) helps business processes comply with ISO 9001, ISO 14001, ISO 45001, ISO 31000, best practices, as well as legal regulations through a PDCA approach (Compliance).
Keywords: process, safety, digitalization, management, risk, incident, SHEQTool, OSHA, CCPS
Procedia PDF Downloads 66
1631 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach
Authors: Nwachukwu Ifeanyi
Abstract:
Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
Keywords: computation, robotics, mathematics, simulation
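For reference, the two aerodynamic performance metrics named above are conventionally defined as follows (standard definitions, not specific to this study):
\[
C_P = \frac{P}{\tfrac{1}{2}\,\rho\,A\,V^{3}} \le \frac{16}{27} \approx 0.593, \qquad
\frac{L}{D} = \frac{C_l}{C_d}
\]
where P is the extracted power, ρ the air density, A the rotor swept area, V the free-stream wind speed, and C_l, C_d the sectional lift and drag coefficients; the 16/27 bound is the Betz limit on the power coefficient.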
Procedia PDF Downloads 59
1630 Examining the Design of a Scaled Audio Tactile Model for Enhancing Interpretation of Visually Impaired Visitors in Heritage Sites
Authors: A. Kavita Murugkar, B. Anurag Kashyap
Abstract:
With the Rights of Persons with Disabilities Act (RPWD Act) 2016, the Indian government has made it mandatory for all establishments, including heritage sites, to be accessible to people with disabilities. However, recent access audit surveys done under the Accessible India Campaign by the Ministry of Culture indicate that there are very few accessibility measures provided in heritage sites for people with disabilities. Though there are some measures for the mobility impaired, surveys revealed that there are almost no provisions for people with vision impairment (PwVI) in heritage sites, thus depriving them of reasonable physical and intellectual access that facilitates an enjoyable experience and enriching interpretation of the heritage site. There is a growing need to develop multisensory interpretative tools that can help the PwVI in perceiving heritage sites in the absence of vision. The purpose of this research was to examine the usability of an audio-tactile model as a haptic and sound-based strategy for augmenting the perception and experience of PwVI in a heritage site. The first phase of the project was a multi-stage phenomenological experimental study with visually impaired users to investigate the design parameters for developing an audio-tactile model for PwVI. The findings from this phase included user preferences related to the physical design of the model, such as the size, scale, materials, details, etc., and the information that it will carry, such as braille, audio output, tactile text, etc. This was followed by the second phase, in which a working prototype of an audio-tactile model was designed and developed for a heritage site based on the findings from the first phase of the study. A nationally listed heritage site from the author's city was selected for making the model. Lastly, the model was tested by visually impaired users for final refinements and validation. The prototype developed empowers people with vision impairment to navigate independently in heritage sites. Such a model, if installed in every heritage site, can serve as a technological guide for the person with vision impairment, giving information on the architecture, details, planning, and scale of the buildings, the entrances, and the location of important features, lifts, staircases, and available accessible facilities. The model was constructed using 3D modeling and digital printing technology. Though designed for the Indian context, this assistive technology for the blind can be explored for wider applications across the globe. Such an accessible solution can change the otherwise "incomplete" perception of the disabled visitor, in this case a visually impaired visitor, and augment the quality of their experience in heritage sites.
Keywords: accessibility, architectural perception, audio tactile model, inclusive heritage, multi-sensory perception, visual impairment, visitor experience
Procedia PDF Downloads 106
1629 Real Time Detection of Application Layer DDos Attack Using Log Based Collaborative Intrusion Detection System
Authors: Farheen Tabassum, Shoab Ahmed Khan
Abstract:
The brutality of attacks on networks and critical infrastructures has been on the rise over recent years and appears set to continue. The Distributed Denial of Service attack is the most prevalent and easiest attack on the availability of a service, due to the easy availability of large botnets at a cheap price and the general lack of protection against these attacks. An application layer DDoS attack is a DDoS attack that is targeted at a web server, application server, or database server. These types of attacks are much more sophisticated and challenging, as they get around most conventional network security devices because attack traffic often impersonates normal traffic and cannot be recognized by network layer anomalies. Conventional techniques of single-hosted security systems are becoming gradually less effective in the face of such complicated and synchronized multi-front attacks. In order to protect from such attacks and intrusion, cooperation among all network devices is essential. To overcome this issue, a collaborative intrusion detection system (CIDS) is proposed, in which multiple network devices share valuable information to identify attacks, as a single device might not be capable of sensing any malevolent action on its own. This makes it possible to take decisions after analyzing the information collected from different sources. This novel attack detection technique helps to detect seemingly benign packets that target the availability of the critical infrastructure, and the proposed solution methodology shall enable incident response teams to detect and react to DDoS attacks at the earliest stage to ensure that the uptime of the service remains unaffected. Experimental evaluation shows that the proposed collaborative detection approach is much more effective and efficient than the previous approaches.
Keywords: Distributed Denial-of-Service (DDoS), Collaborative Intrusion Detection System (CIDS), Slowloris, OSSIM (Open Source Security Information Management tool), OSSEC HIDS
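As a toy illustration of the log-based collaboration described above (and not of the OSSIM/OSSEC-based system itself), the sketch below merges per-source request counts from several sensors so that a distributed attack invisible to any single node can still cross a global threshold; the field layout, window, and thresholds are assumptions for illustration only.
```python
from collections import Counter

WINDOW_SECONDS = 60        # each sensor counts requests over this window
LOCAL_THRESHOLD = 300      # requests per window from one source at one sensor
GLOBAL_THRESHOLD = 1000    # requests per window from one source across all sensors

def local_counts(log_lines):
    """log_lines: iterable of (timestamp, source_ip) tuples within one window,
    e.g. parsed from a web-server access log."""
    return Counter(ip for _, ip in log_lines)

def correlate(counts_from_all_sensors):
    """Merge the per-sensor counters shared by the collaborating nodes and
    flag sources that exceed either the local or the global threshold."""
    merged = Counter()
    for c in counts_from_all_sensors:
        merged.update(c)
    return {ip: n for ip, n in merged.items()
            if n >= GLOBAL_THRESHOLD
            or any(c.get(ip, 0) >= LOCAL_THRESHOLD for c in counts_from_all_sensors)}
```
Real application-layer detection needs richer features than raw rates (request patterns, session behaviour, slow-request timing as in Slowloris), but the aggregation step is what the collaborative architecture adds over a single-hosted system.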
Procedia PDF Downloads 354
1628 Empowering Learners: From Augmented Reality to Shared Leadership
Authors: Vilma Zydziunaite, Monika Kelpsiene
Abstract:
In early childhood and preschool education, play has an important role in learning and cognitive processes. In the context of a changing world, personal autonomy and the use of technology are becoming increasingly important for the development of a wide range of learner competencies. By integrating technology into learning environments, the educational reality is changed, promoting unusual learning experiences for children through play-based activities. Alongside this, teachers are challenged to develop encouragement and motivation strategies that empower children to act independently. The aim of the study was to reveal the changes in the roles and experiences of teachers in the application of AR technology for the enrichment of the learning process. A quantitative research approach was used to conduct the study. The data were collected through an electronic questionnaire. Participants: 319 teachers of 5-6-year-old children using AR technology tools in their educational process. Methods of data analysis: Cronbach's alpha, descriptive statistical analysis, normal distribution analysis, correlation analysis, regression analysis (SPSS software). Results. The results of the study show a significant relationship between children's learning and the educational process modeled by the teacher. The strongest predictor of child learning was found to be related to the role of the educator. Other predictors, such as pedagogical strategies, the concept of AR technology, and areas of children's education, have no significant relationship with child learning. The role of the educator was found to be a strong determinant of the child's learning process. Conclusions. The greatest potential for integrating AR technology into the teaching-learning process is revealed in collaborative learning. Teachers identified that when integrating AR technology into the educational process, they encourage children to learn from each other, develop problem-solving skills, and create inclusive learning contexts. A significant relationship has emerged - how the changing role of the teacher relates to the child's learning style and the aspiration for personal leadership and responsibility for their learning. Teachers identified the following key roles: observer of the learning process, proactive moderator, and creator of the educational context. All these roles enable the learner to become an autonomous and active participant in the learning process. This provides a better understanding of why it is crucial to empower the learner to experiment, explore, discover, and actively create, to foster collaborative learning in the design and implementation of the educational content, and for teachers to integrate AR technologies and apply the principles of shared leadership. No statistically significant relationship was found between the understanding of the definition of AR technology and the teacher's choice of role in the learning process. However, teachers reported that their understanding of the definition of AR technology influences their choice of role, which has an impact on children's learning.
Keywords: teacher, learner, augmented reality, collaboration, shared leadership, preschool education
Procedia PDF Downloads 40
1627 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Because this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and the photogrammetry method have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for the construction of DTMs. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. A 3D point cloud is created with LiDAR technology by obtaining numerous point data. More recently, with the development of image mapping methods, the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of the DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation methods. In this study, the random data reduction method is compared for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25, and 5% of the original image-based point cloud data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud data sets to a 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging
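A minimal sketch of the random reduction step described above, assuming the point cloud is held as an (N, 3) NumPy array of X, Y, Z coordinates; the file name and random seed are illustrative only.
```python
import numpy as np

def random_reduce(points, fraction, seed=42):
    """Randomly subsample an (N, 3) array of X, Y, Z points to the given
    fraction of its original density, as in the 75/50/25/5% subsets above."""
    rng = np.random.default_rng(seed)
    n_keep = int(len(points) * fraction)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# Usage: build the density levels compared in the study.
# cloud = np.loadtxt("point_cloud.xyz")   # hypothetical input file
# subsets = {f: random_reduce(cloud, f) for f in (0.75, 0.50, 0.25, 0.05)}
```
Each subset is then interpolated to a grid (here with Kriging) and differenced against the DTM built from the full cloud to quantify the loss of quality at each density level.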
Procedia PDF Downloads 155
1626 Monitoring Peri-Urban Growth and Land Use Dynamics with GIS and Remote Sensing Techniques: A Case Study of Burdwan City, India
Authors: Mohammad Arif, Soumen Chatterjee, Krishnendu Gupta
Abstract:
The peri-urban interface is an area of transition where urban and rural areas meet and interact. Peri-urban areas are characterized by strong urban influence, easy access to markets, services, and other inputs, and ready supplies of labour, while remaining somewhat removed from the land paucity and pollution related to urban growth. Hence, the present study is primarily aimed at quantifying the spatio-temporal pattern of land use/land cover change during the last three decades (i.e., 1987 to 2016) in the peri-urban area of Burdwan city. In the recent past, the morphology of the study region has changed rapidly due to high population growth and the establishment of industries. The change has predominantly taken place along the State Highway and National Highway 2 (NH-2) and around the Burdwan Municipality to meet both residential and commercial purposes. To ascertain the degree of change in land use and land cover over the specified time, satellite imagery and topographical sheets are employed. The data are processed through appropriate software packages to arrive at the deduction that most of the land use changes have occurred by obliterating agricultural land and water bodies and substituting them with built-up areas and industrial spaces. Geospatial analysis of the study area showed that it has experienced a steep increase (30%) in built-up areas and an excessive decrease (15%) in croplands between 1987 and 2016. The increase in built-up areas is attributed to increased out-migration from the core city during this period. This study also examined the social, economic, and institutional factors that lead to this rapid land use change in the peri-urban areas of Burdwan city by carrying out a field survey of 250 households in peri-urban areas. The research concludes with an urgent call for regulating land subdivisions in peri-urban areas to prevent haphazard land use development. It is expected that the findings of the study would go a long way in facilitating better policy making.
Keywords: growth, land use land cover, morphology, peri-urban
Procedia PDF Downloads 175
1625 Living Together Apart: Gender Differences in Transnational Couple Living Perceptions in the Ghanaian Context
Authors: Rodlyn Remina Hines
Abstract:
Males and females respond differently to life situations, including transnational living. Being in a transnational marriage relationship may put a strain on the relationship, requiring partners to adjust their behaviors and expectations of the other partner to accommodate the disruptions in the relationship. More so, when one partner is an immigrant in a new geographic location with the other in the native country, these disruptions may be intense. This qualitative study examined gender differences in how married Ghanaian couples respond to making a life together as a couple while living across international borders. The study asked two questions: (1) What are the perceptions of males and females on transnational living? and (2) How do married males and females respond to transnational living situations? To answer these questions, semi-structured interviews were conducted with 24 married couples, with one partner living in the United States (U.S.) and the other spouse in Ghana, recruited via purposive and snowball sampling techniques. Participants were aged 26 to 59 years with an average age of 40; the average length of the relationship was 10.41 years, and the average number of years living apart was 6.7. Induction and deduction hybrid analysis strategies were used to derive emerging themes. The results highlight significant gender differences in response to transnational living status and practices. The data indicate that transnational couples with the male spouse residing in the U.S. experience more relationship strains than is the case when the female partner is the immigrant. Three couples who were in divorce proceedings at the time of the interview had the male partner residing in the U.S. and the female spouse in Ghana. These gender differences were also reflected in spousal visitation frequency, duration of spousal reunification, the amount and frequency of spousal remittances, and immigration processing procedures. Finally, the data show female immigrant partners to be better managers of transnational living stresses and strains than their male counterparts. Findings from this study have implications for marriage and family practitioners and immigration policy makers.
Keywords: gender differences, Ghanaian couples, Ghanaian immigrants, transnational living
Procedia PDF Downloads 84
1624 Students’ Speech Anxiety in Blended Learning
Authors: Mary Jane B. Suarez
Abstract:
Public speaking anxiety (PSA), also known as speech anxiety, is persistently common in traditional communication classes, especially for students who learn English as a second language. This speech anxiety intensified when communication skills assessments moved to an online or remote mode of learning due to the perils of the COVID-19 virus. Both teachers and students have experienced vast ambiguity about how to find a still-effective way to teach and learn speaking skills amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting have taken on a new meaning through Google Meet, Zoom, and other online platforms. Though using such technologies has paved the way for more creative means for students to acquire and develop communication skills, the effectiveness of using such assessment tools stands in question. This mixed-method study aimed to determine the factors that affected the public speaking skills of students in a communication class, to probe the assessment gaps in assessing the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning, and to recommend ways to address the public speaking anxieties of students performing a speaking task online and to bridge the assessment gaps based on the outcome of the study, in order to achieve a smooth segue from online to on-ground instruction maneuvering towards a much better post-pandemic academic milieu. Using a convergent parallel design, both quantitative and qualitative data were reconciled by probing the public speaking anxiety of students and the potential assessment gaps encountered in an online English communication class under remote and blended learning. There were four phases in applying the convergent parallel design. The first phase was data collection, where both quantitative and qualitative data were collected using document reviews and focus group discussions. The second phase was data analysis, where quantitative data were treated using statistical testing, particularly frequency, percentage, and mean, by using the Microsoft Excel application and IBM Statistical Package for Social Sciences (SPSS) version 19, and qualitative data were examined using thematic analysis. The third phase was the merging of data analysis results to amalgamate varying comparisons between the desired learning competencies and the actual learning competencies of students. Finally, the fourth phase was the interpretation of the merged data, which led to the finding that there was a significantly high percentage of public speaking anxiety among students whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing evidence that the public speaking anxiety of students was not properly identified and addressed.
Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety
Procedia PDF Downloads 102
1623 Summer STEM Institute in Environmental Science and Data Science for Middle and High School Students at Pace University
Authors: Lauren B. Birney
Abstract:
Summer STEM Institute for Middle and High School Students at Pace University: The STEM Collaboratory NYC® Summer Fellows Institute takes place on Pace University's New York City campus during July and provides the following key features for all participants: (i) individual meetings with Pace faculty to discuss and refine future educational goals; (ii) mentorship, guidance, and new friendships with program leaders; and (iii) guest lectures from professionals in STEM disciplines and businesses. The Summer STEM Institute allows middle school and high school students to work in teams to conceptualize, develop, and build native mobile applications that teach and reinforce skills in the sciences and mathematics. These workshops enhance students' STEM problem-solving techniques and teach advanced methods of computer science and engineering. Topics include: big data and analytics at the Big Data lab at Seidenberg; data science focused on social and environmental advancement and betterment; natural disasters and their societal influences; algal blooms and environmental impacts; Green Cities NYC; STEM jobs and growth opportunities for the future; renewable energy and sustainable infrastructure; and climate and the economy. In order to better align the existing Summer STEM Institute with the CCERS model and expand the overall network, Pace is actively recruiting new content area specialists from STEM industries and private sector enterprises to participate in an enhanced summer institute in order to 1) nurture student progress and connect summer learning to school year curriculum, 2) increase peer-to-peer collaboration amongst STEM professionals and private sector technologists, and 3) develop long-term funding and sponsorship opportunities for corporate sector partners to support CCERS schools and programs directly.
Keywords: environmental restoration science, citizen science, data science, STEM
Procedia PDF Downloads 85
1622 Identification of Suitable Rainwater Harvesting Sites Using Geospatial Techniques with AHP in Chacha Watershed, Jemma Sub-Basin Upper Blue Nile, Ethiopia
Authors: Abrha Ybeyn Gebremedhn, Yitea Seneshaw Getahun, Alebachew Shumye Moges, Fikrey Tesfay
Abstract:
Rainfed agriculture in Ethiopia has failed to produce enough food to meet the increasing demand for food. Pinpointing appropriate sites for rainwater harvesting (RWH) can contribute substantially to increasing the available water and enhancing agricultural productivity. The current study, related to the identification of potential RWH sites, was conducted at the Chacha watershed in the central highlands of Ethiopia, which is endowed with rugged topography. A Geographic Information System with the Analytical Hierarchy Process was used to generate the different maps for identifying appropriate sites for RWH. In this study, 11 factors that determine the RWH locations were considered, including slope, soil texture, runoff depth, land cover type, annual average rainfall, drainage density, lineament intensity, hydrologic soil group, antecedent moisture content, and distance to roads. The overall analysis shows that 10.50%, 71.10%, 17.90%, and 0.50% of the area were found to be highly suitable, moderately suitable, marginally suitable, and unsuitable for RWH, respectively. RWH site selection was found to be highly dependent on slope, soil texture, and runoff depth; moderately dependent on drainage density, annual average rainfall, and land use/land cover; and less dependent on the other factors. The highly suitable areas for rainwater harvesting expansion are lands with flat topography and a soil textural class of high water-holding capacity that can produce high runoff depth. The findings of this study could serve as a baseline for planners and decision-makers and support the adoption of any strategy for appropriate RWH site selection.
Keywords: runoff depth, antecedent moisture condition, AHP, weighted overlay, water resource
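A minimal sketch of the AHP weighting step that typically precedes the weighted overlay named in the keywords, assuming a Saaty-style pairwise comparison matrix; the matrix values, factor order, and acceptance threshold are illustrative assumptions, not the values used in the study.
```python
import numpy as np

# Saaty's random consistency index for matrix orders 1..11.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
      7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49, 11: 1.51}

def ahp_weights(pairwise):
    """Derive factor weights from a pairwise-comparison matrix via its principal
    eigenvector and report the consistency ratio (CR < 0.10 is conventionally
    considered acceptable)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                        # normalized weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0      # consistency index
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0      # consistency ratio
    return w, cr

# The suitability map is then a weighted overlay of the reclassified factor
# rasters, S = sum_i w_i * x_i, computed cell by cell in the GIS.
```
The resulting weights are applied to reclassified rasters of slope, soil texture, runoff depth, and the other factors, which is how the suitability classes reported above are obtained.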
Procedia PDF Downloads 53
1621 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in mathematical sciences and their application in harnessing the power of data analytics. This abstract highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract emphasizes the role of mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, social sciences, and beyond. It showcases how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, the abstract emphasizes the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals. It recognizes the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. The abstract highlights the significance of ongoing research in mathematical sciences and its impact on data analytics. It emphasizes the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation. In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.
Keywords: mathematical sciences, data analytics, advances, unveiling
Procedia PDF Downloads 931620 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording
Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen
Abstract:
It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access. Hence, it gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers used is optimized for lower NPSH₃% by its blade design, whereas the other one is manufactured using a standard casting method. The cavitation is detected by pump performance measurements, vibration measurements and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data is recorded in an axial direction of the impeller using accelerometers recording at a sample rate of 131 kHz. The vibration frequency domain data (up to 20 kHz) and the time domain data are analyzed as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not affected. It is also observed that below a certain NPSHA value, the cavitation started in the inlet bend of the pump. Above this value, cavitation occurs exclusively on the impeller blades. The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but the head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations. The simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view on the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. Data indicates that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data is used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration
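A minimal sketch of the vibration processing described here — the RMS level, a Welch spectrum limited to 20 kHz, and a simple time-domain peak count on a 131 kHz accelerometer signal — is given below. The synthetic signal, the burst shape, and the 5×RMS threshold are illustrative assumptions, not the detection criterion used in the study.

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 131_000  # accelerometer sample rate in Hz, as reported above
t = np.arange(0, 1.0, 1 / fs)

# Synthetic stand-in for the axial vibration signal: broadband noise plus a
# few short decaying bursts imitating abrupt peaks from cloud collapses.
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.05, t.size)
for t0 in (0.21, 0.47, 0.82):
    i0 = int(t0 * fs)
    signal[i0:i0 + 200] += 0.8 * np.exp(-np.arange(200) / 40.0)

# Root-mean-square level of the time signal.
rms = np.sqrt(np.mean(signal ** 2))

# Frequency-domain content up to 20 kHz via Welch's method.
f, psd = welch(signal, fs=fs, nperseg=8192)
in_band = f <= 20_000
dominant = f[in_band][np.argmax(psd[in_band])]

# Illustrative time-domain criterion: count peaks well above the RMS level
# as candidate cavitation (cloud-collapse) events.
peaks, _ = find_peaks(signal, height=5 * rms, distance=int(0.01 * fs))
print(f"RMS = {rms:.4f}, dominant frequency = {dominant:.0f} Hz, "
      f"candidate collapse events = {peaks.size}")
```

Synchronizing such peak timestamps with the trigger signal of the high-speed camera is what allows the collapse events to be matched to individual video frames.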
Procedia PDF Downloads 1801619 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction
Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé
Abstract:
One of the main applications of machine learning is the prediction of time series, but more accurate prediction requires a better-optimized machine learning model. Several optimization techniques have been developed, yet none of them consider the disposition (ordering) of the system's input variables. This work therefore presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validation is done on the prediction of wind time series, using data collected in Cameroon. With four input variables, the number of possible dispositions is twenty-four. Each disposition is used to perform the prediction, with training and prediction performance as the main criteria. Results obtained from a static and a dynamic neural network architecture show that these performances depend on the disposition of the input variables, and that this dependence differs between the two architectures. This analysis reveals that the disposition of the input variables must be taken into account to develop a more optimal neural network model. A new neural network training algorithm is therefore proposed, which introduces the search for the optimal input variable disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the results obtained previously. Moreover, the proposed approach is validated within a collaborative optimization method combining it with a single-objective optimization technique, namely genetic-algorithm back-propagation neural networks. These comparisons show that each proposed model outperforms its traditional counterpart in terms of training and prediction performance, demonstrating that the proposed optimization approach can improve the accuracy of machine-learning-based time series prediction.Keywords: input variable disposition, machine learning, optimization, performance, time series prediction
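The enumeration of the 24 dispositions of four input variables can be sketched as follows. The toy series, the lag construction, and the small MLP used for scoring are assumptions for illustration only and are not the static or dynamic architectures used in the paper; the sketch shows the search mechanics, not the paper's modified back-propagation.

```python
from itertools import permutations

import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

# Toy wind-speed-like series; the study itself uses wind data from Cameroon.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 600)) + 0.2 * rng.normal(size=600)

# Supervised set with four lagged input variables x(t-4)..x(t-1).
lags = 4
X = np.column_stack([series[i:-(lags - i)] for i in range(lags)])
y = series[lags:]
X_train, X_test = X[:450], X[450:]
y_train, y_test = y[:450], y[450:]

# Evaluate every disposition (ordering) of the four input columns: 4! = 24.
scores = {}
for order in permutations(range(lags)):
    cols = list(order)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X_train[:, cols], y_train)
    scores[order] = mean_squared_error(y_test, model.predict(X_test[:, cols]))

best = min(scores, key=scores.get)
print("best disposition:", best, "test MSE:", round(scores[best], 5))
```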
Procedia PDF Downloads 1091618 Mental Wellbeing Using Music Intervention: A Case Study of Therapeutic Role of Music, From Both Psychological and Neurocognitive Perspectives
Authors: Medha Basu, Kumardeb Banerjee, Dipak Ghosh
Abstract:
In the aftermath of the COVID-19 pandemic, numerous health hazards have been reported worldwide. Serious cases of Major Depressive Disorder (MDD) are reported in about 15% of the global population, making depression one of the leading mental health conditions, as reported by the World Health Organization. Various psychological and pharmacological treatments are regularly reported. Music, a globally accepted mode of entertainment, is often used as a therapeutic measure for various health conditions. We have tried to understand how Indian Classical Music can affect the overall well-being of the human brain. A case study is reported here in which a flute rendition was chosen from a detailed audience-response survey, and the effects of that clip on the human brain were studied from both psychological and neural perspectives. Drawing on internationally accepted depression-rating scales, two questionnaires were designed to capture both the prolonged and the immediate effects of music on various emotional states. Thereafter, from EEG experiments on five participants listening to the same clip, the parameter 'ALAY', frontal alpha asymmetry (the difference in alpha power between the right and left frontal hemispheres), was calculated. The work of Richard Davidson shows that an increase in the 'ALAY' value indicates a decrease in depressive symptoms. Using the non-linear MFDFA technique on the EEG data, we also calculated frontal asymmetry from the complexity values of the alpha waves in both hemispheres. The results show a positive correlation between the psychological survey and the EEG findings, revealing the prominent role of music on the human brain in reducing mental unrest and increasing overall well-being. With this study, we aim to propose a scientific foundation for music therapy, especially from a neurocognitive perspective, with appropriate neural biomarkers to understand the positive and remedial effects of music on the human brain.Keywords: music therapy, EEG, psychological survey, frontal alpha asymmetry, wellbeing
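A minimal sketch of the frontal alpha asymmetry ('ALAY') computation — the difference in alpha-band power between right and left frontal channels — is shown below. The electrode labels (F3/F4), the sampling rate, the 8-13 Hz band, and the synthetic signals are assumptions for illustration, not the recording setup of the study.

```python
import numpy as np
from scipy.signal import welch

fs = 256  # assumed EEG sampling rate in Hz

def alpha_power(channel, fs, band=(8.0, 13.0)):
    """Approximate alpha-band power of one channel from its Welch PSD."""
    f, psd = welch(channel, fs=fs, nperseg=2 * fs)
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].sum() * (f[1] - f[0])

# Synthetic stand-ins for the left (F3) and right (F4) frontal channels:
# a 10 Hz alpha rhythm of different amplitude buried in noise.
rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / fs)
f3 = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)
f4 = 1.2 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)

# Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power).
# Following Davidson, an increase in this value is read as a decrease in
# depressive symptoms.
alay = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"frontal alpha asymmetry (ALAY) = {alay:.3f}")
```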
Procedia PDF Downloads 411617 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields the amount of collected data is growing quickly, and with this growth, computation speed becomes the critical factor. Data reduction is one solution to this problem, and in rough set theory redundancy can be removed by computing a reduct. Many algorithms for generating reducts have been developed, but most of them are software-only implementations and therefore have many limitations: a microprocessor uses a fixed word length and spends considerable time fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table, there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of the full set of condition attributes, and every reduct contains all attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input, and the output of the algorithm is a superreduct, i.e., a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem, and the core calculation can be performed by a combinational logic block, adding relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of each attribute is calculated in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and therefore needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing.Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
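A software reference of the two-stage procedure — the core from singleton entries of the discernibility matrix, then greedy enrichment with frequent attributes until a superreduct is obtained — might look like the sketch below. The toy decision table is invented, and the greedy frequency criterion shown (counting occurrences among still-uncovered matrix entries) is one common variant that may differ in detail from the paper's hardware.

```python
from itertools import combinations

# Toy decision table: each row is (condition attribute values, decision value).
# Condition attributes are a0..a3; the last element is the decision attribute.
table = [
    ((1, 0, 1, 0), 0),
    ((1, 1, 1, 0), 1),
    ((0, 0, 1, 1), 0),
    ((0, 1, 0, 1), 1),
    ((1, 0, 0, 1), 1),
]
n_attr = 4

def discernibility_matrix(table):
    """For every pair of objects with different decisions, record the set of
    condition attributes that distinguish them."""
    entries = []
    for (x, dx), (y, dy) in combinations(table, 2):
        if dx != dy:
            entries.append({a for a in range(n_attr) if x[a] != y[a]})
    return entries

def core(entries):
    """Stage 1: attributes appearing as singleton matrix entries are
    indispensable and together form the core."""
    singles = [e for e in entries if len(e) == 1]
    return set().union(*singles) if singles else set()

def superreduct(entries):
    """Stage 2: start from the core and greedily add the most frequent
    attribute until every discernibility entry is covered."""
    chosen = core(entries)
    uncovered = [e for e in entries if not (e & chosen)]
    while uncovered:
        freq = {a: sum(a in e for e in uncovered) for a in range(n_attr)}
        best = max(freq, key=freq.get)
        chosen.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return chosen

entries = discernibility_matrix(table)
print("core:", core(entries), "superreduct:", superreduct(entries))
```

In the FPGA design, the singleton test of stage 1 corresponds to the 'singleton detector' block and the frequency count of stage 2 to the adder cascade; the loop itself is what requires the sequential control circuit.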
Procedia PDF Downloads 2191616 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model
Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis
Abstract:
Kepler has discovered over 4000 exoplanets and candidates. However, current transit-planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity for small planets with a low signal-to-noise ratio (SNR) and for long periods with only 3-4 repeated signals over the 4-year mission lifetime. This paper presents a novel precise-period transit-signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested on all known transit signals with 100% confirmation. In addition, it has been successfully applied to the Kepler Objects of Interest (KOI) data and has identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable-planet candidates. These results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry
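The folding operation at the heart of a Fast Folding search — phase-folding the light curve at each trial period, binning in phase, and scoring the depth of the stacked transit — can be sketched on the CPU with NumPy as below. This is not the authors' GPU implementation or detection statistic; the light curve, trial grid, and scoring are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic light curve: ~4 years of 30-minute cadence with a shallow transit.
cadence = 0.5 / 24.0                      # days
time = np.arange(0.0, 4 * 365.25, cadence)
true_period, duration, depth = 41.7, 0.25, 3e-4
flux = 1.0 + rng.normal(0.0, 2e-4, time.size)
flux[(time % true_period) < duration] -= depth

def folded_score(time, flux, period, n_bins=200):
    """Fold at a trial period, bin in phase, and score the deepest bin
    against the scatter of the binned curve (a simple stand-in for the
    paper's detection statistic)."""
    phase = (time % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(bins, weights=flux, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    binned = sums / np.maximum(counts, 1)
    return (np.median(binned) - binned.min()) / np.std(binned)

trial_periods = np.linspace(35.0, 50.0, 3000)
scores = np.array([folded_score(time, flux, p) for p in trial_periods])
best = trial_periods[np.argmax(scores)]
print(f"recovered period ≈ {best:.3f} d (true period {true_period} d)")
```

In the paper's pipeline the folded, binned profiles would then be passed to the CNN classifier rather than scored with a hand-written statistic; the GPU gains come from evaluating many trial periods of this folding step in parallel.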
Procedia PDF Downloads 2251615 Unveiling the Nexus: A Holistic Investigation on the Role of Cultural Beliefs and Family Dynamics in Shaping Maternal Health in Primigravida Women
Authors: Anum Obaid, Bushra Noor, Zoshia Zainab
Abstract:
In South Asia, Pakistan faces significant public health challenges regarding maternal and neonatal health (MNH). Despite global efforts to improve maternal, newborn, and child health (MNCH) outcomes through initiatives like the Millennium Development Goals (MDGs) and Sustainable Development Goals (SDGs), high maternal and neonatal mortality rates persist. In patriarchal societies, cultural norms, family dynamics, and gender roles heavily influence healthcare accessibility and decision-making processes, often leading to delayed and inadequate maternal care. Addressing these socio-cultural barriers and enhancing healthcare resources is crucial to improving maternal health outcomes in areas like Faisalabad. A qualitative study was conducted involving two groups of informants: gynecologists practicing in private clinics and first-time pregnant women receiving care in government hospitals. Data collection included obtaining institutional permission, conducting semi-structured in-depth interviews, and using non-probability sampling techniques. A proactive strategy to overcome maternal health challenges involves using aversion therapy and disseminating knowledge among family members. This approach aims to foster a deep understanding within the family unit of the importance of maternal well-being, thereby creating a supportive environment and facilitating informed decision-making on healthcare access and lifestyle choices. The findings indicate that maternal health is compromised both physiologically and psychologically, with significant implications for the baby's health, and that mental well-being is profoundly affected, largely due to familial behavior and entrenched cultural taboos.Keywords: maternal health, neonatal health, socio-cultural norms, primigravida women, gynecologist, familial conduct, cultural taboos
Procedia PDF Downloads 401614 Electroremediation of Saturated and Unsaturated Nickel-Contaminated Soils
Authors: Waddah Abdullah, Saleh Al-Sarem
Abstract:
Electrokinetic remediation has proven to be one of the most efficient techniques for cleaning up soils contaminated with polar charged contaminants (such as heavy metals) and non-polar organic contaminants. It can be used efficiently to clean up low-permeability mud, wastewater, electroplating wastes, sludge, and marine dredgings. This study presents and discusses the results of electrokinetic remediation processes used to clean up soils contaminated with nickel. Two types of electrokinetic cells were used: an open cell and an advanced cylindrical cell. Two types of soil were used for this investigation: the Azraq green clay, which has very low permeability and was taken from the eastern part of Jordan (the city of Azraq), and a sandy soil with relatively high permeability. The clayey soil was spiked with 500 ppm of nickel, and the sandy soil was spiked with 1500 ppm of nickel. Both fully saturated and partially saturated clayey soils were used in the clean-up process. The clayey soils were tested under direct currents of 80 mA and 50 mA to study the effect of the electrical current on the remediation process. A chelating agent, Na-EDTA (disodium ethylenediaminetetraacetic acid), was used in both types of soil to enhance the electroremediation process. The effect of the presence of carbonates in the contaminated soils was also investigated using sodium carbonate and calcium carbonate. pH changes in the anode and cathode compartments were controlled by using buffer solutions. The results of the investigation showed that the fully saturated clayey soil spiked with nickel had an average removal efficiency of 64%, while the average removal efficiency was 46% for the unsaturated clayey soil. For the sandy soil, the average removal efficiency of nickel was 90%. Test results showed that the presence of carbonates in the remediated soils retarded the clean-up of nickel-contaminated soils (the removal efficiency was reduced from 90% to 60%). EDTA-enhanced decontamination of nickel-contaminated clayey and sandy soils containing carbonates was also studied: the average removal efficiency increased from 60% (before using EDTA) to more than 90% after using EDTA.Keywords: buffer solution, EDTA, electroremediation, nickel removal efficiency
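For reference, the removal efficiencies reported above follow from the initial and residual nickel concentrations; a minimal sketch of that arithmetic is given below, with residual concentrations assumed purely for illustration (the study reports only the efficiencies).

```python
def removal_efficiency(c_initial_ppm, c_final_ppm):
    """Percentage of nickel removed relative to the spiked concentration."""
    return 100.0 * (c_initial_ppm - c_final_ppm) / c_initial_ppm

# Assumed residual concentrations chosen only to reproduce the reported
# efficiencies (64% for saturated clay spiked at 500 ppm, 90% for sand at 1500 ppm).
print(removal_efficiency(500.0, 180.0))    # ~64% for the saturated clayey soil
print(removal_efficiency(1500.0, 150.0))   # 90% for the sandy soil
```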
Procedia PDF Downloads 184