Search results for: computational methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16989

13389 Investigating the Influence of Roof Fairing on Aerodynamic Drag of a Bluff Body

Authors: Kushal Kumar Chode

Abstract:

Driven by rising demand for fuel savings and for faster vehicles with decent fuel economy, researchers around the world have investigated various passive flow control devices to improve vehicle fuel efficiency. In this paper, a roof fairing was investigated as a means of reducing the aerodynamic drag of a bluff body. The bluff body considered for this work is the Ahmed model with a rake angle of 25°, subjected to a flow with a velocity of 40 m/s at a Reynolds number of 2.68 million, and analysed using the commercial Computational Fluid Dynamics (CFD) code Star-CCM+. The initial study made it evident that pressure drag is the main source of drag on an Ahmed body. Adding a roof fairing delayed flow separation and hence wake formation, improving the pressure in the near wake and shrinking the wake region. A roof fairing of height 1/7H and length 1/3L showed a drag reduction of 9%, while an optimised fairing, obtained by increasing the height, length and width by 5%, recorded a drag reduction close to 12%.
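
As a quick arithmetic check, the quoted flow conditions and drag figures can be reproduced in a few lines. The sketch below is not from the paper: the body length, frontal area, air properties and baseline drag coefficient are assumed reference values for the standard Ahmed geometry, used only to recompute the Reynolds number and translate the reported 9% and 12% coefficient reductions into forces.

```python
# Sanity check (not from the paper): reproduce the quoted Reynolds number and
# translate the reported drag reductions into force terms for the Ahmed body.
rho = 1.225        # kg/m^3, air density (assumed)
nu = 1.55e-5       # m^2/s, kinematic viscosity of air (assumed)
L = 1.044          # m, standard Ahmed body length (assumed reference length)
A = 0.112          # m^2, Ahmed body frontal area (assumed)
V = 40.0           # m/s, freestream velocity from the abstract
Cd_base = 0.30     # nominal baseline drag coefficient (assumed, not reported)

Re = V * L / nu
q = 0.5 * rho * V**2                     # dynamic pressure
F_base = Cd_base * q * A                 # baseline drag force
for label, reduction in [("fairing (1/7H x 1/3L)", 0.09),
                         ("optimised fairing", 0.12)]:
    F = Cd_base * (1 - reduction) * q * A
    print(f"{label}: Cd = {Cd_base * (1 - reduction):.3f}, "
          f"drag = {F:.1f} N (saving {F_base - F:.1f} N)")
print(f"Re = {Re:.3g}")   # ~2.7 million, consistent with the abstract
```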

Keywords: Ahmed model, aerodynamic drag, passive flow control, roof fairing, wake formation

Procedia PDF Downloads 446
13388 Sustainability of Heritage Management in Aksum: Focus on Heritage Conservation and Interpretation

Authors: Gebrekiros Welegebriel Asfaw

Abstract:

The management of fragile, unique and irreplaceable cultural heritage is becoming a major challenge as important elements of culture vanish throughout the globe. The major purpose of this study is to assess how the cultural heritages of Aksum are managed for their future sustainability, from the perspectives of heritage conservation and interpretation. A descriptive research design incorporating both quantitative and qualitative methods was employed. Primary quantitative data were collected from 189 respondents (19 professionals, 88 tourism service providers and 82 tourists), and interviews were conducted with 33 targeted informants from heritage and related professions, security employees, the local community, service providers and church representatives, applying probability and non-probability sampling methods. Findings of the study reveal that the overall sustainable management status of the cultural heritage of Aksum is below average. The sustainability of cultural heritage management in Aksum faces many unfavorable factors, such as lack of long-term planning, an incompatible system of heritage administration, limited capacity and number of professionals, scant attention to community-based heritage and tourism development, dirtiness and drainage problems, problems with stakeholder involvement and cooperation, and lack of organized interpretation and presentation systems. Re-organization of the management system, creating a platform for coordination among stakeholders and developing an appropriate interpretation system can therefore be good remedies. Introducing a community-based heritage and tourism development concept is also recommended for long-term win-win success in Aksum.

Keywords: Aksum, conservation, interpretation, sustainable cultural heritage management

Procedia PDF Downloads 327
13387 Investigating the Effects of Two Functional and Extra-Functional Stretching Methods of the Leg Muscles on a Selection of Kinematic and Kinetic Indicators in Women with Ankle Instability

Authors: Parvin Malhami

Abstract:

The purpose of the present study was to investigate the effects of two stretching methods of the leg muscles, functional and extra-functional, on a selection of kinematic and kinetic indicators among women with ankle instability. Twenty-four persons were recruited and randomly divided into functional exercise (8 persons), extra-functional exercise (8 persons) and control (8 persons) groups on the basis of inclusion and exclusion criteria. The experimental groups received stretching for eight weeks, 3 sessions each week, while the control group merely performed its daily activities. To obtain the pre-test and post-test variables, dorsiflexion, plantar flexion and ground reaction force were measured. Data were analyzed using paired and independent t-tests at a significance level of 0.05; all statistical analyses were conducted using SPSS 25. The results showed a significant effect of eight weeks of functional and extra-functional exercise on dorsiflexion, plantar flexion and ground reaction force (p ≤ 0.001). These results indicate that the functional and extra-functional exercise protocols affected the ankle dorsiflexion and plantar flexion of women with ankle instability. It was also found that the flexibility gained from stretching the gastrocnemius muscles facilitates walking in persons with ankle instability by affecting the range of ankle flexion, so the functional and extra-functional exercise protocols are recommended for this population.
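
For readers without SPSS, the analysis described above reduces to standard paired and independent t-tests. A minimal sketch follows, with invented placeholder data (the study's measurements are not reproduced here), keeping the study's group size of 8 and significance level of 0.05.

```python
# Minimal sketch of the pre/post analysis described in the abstract, using
# SciPy instead of SPSS. The numbers below are made-up placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(15.0, 2.0, size=8)            # e.g., dorsiflexion angle, deg
post = pre + rng.normal(2.0, 1.0, size=8)      # after 8 weeks of stretching
control_change = rng.normal(0.0, 1.0, size=8)  # control group change scores

# Paired t-test: within-group pre vs post
t_paired, p_paired = stats.ttest_rel(pre, post)
# Independent t-test: exercise-group change vs control-group change
t_ind, p_ind = stats.ttest_ind(post - pre, control_change)

alpha = 0.05
print(f"paired: t = {t_paired:.2f}, p = {p_paired:.4f}, "
      f"significant: {p_paired < alpha}")
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.4f}, "
      f"significant: {p_ind < alpha}")
```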

Keywords: functional stretching, extra-functional stretching, dorsiflexion, plantar flexion

Procedia PDF Downloads 75
13386 Characterization of Enterotoxigenic Escherichia coli CS6 Promoter

Authors: Mondal Indranil, Bhakat Debjyoti, Mukhopadayay Asish K., Chatterjee Nabendu S.

Abstract:

CS6 is the prevalent colonization factor (CF) of enterotoxigenic Escherichia coli (ETEC) in our region, and deciphering its molecular regulators would play a pivotal role in reducing the burden of ETEC pathogenesis. In prokaryotes, most genes are under the control of an operon, and the promoter present upstream of a gene regulates its transcription. Here the promoter of CS6 was characterized by computational methods and further analyzed by β-galactosidase assay and sequencing. Promoter constructs and deletions were prepared as required to analyze promoter activity, and the effect of different additives on the CS6 promoter was assessed by the β-galactosidase assay. Bioinformatics analysis with Softberry/BPROM predicted fur, lrp and crp boxes and the -10 and -35 regions upstream of the CS6 gene. A promoter construct in the promoterless plasmid pTL61T showed that the region -573 to +1 is indeed the promoter region, as predicted. Sequential deletion of the region upstream of CS6 revealed that promoter activity remains the same when -573 bp to -350 bp is deleted, but after deletion of the region -350 bp to -255 bp, promoter expression decreases drastically to 26%. Further deletion decreases promoter activity only slightly. Thus the region -350 bp to -255 bp holds the key promoter sequence for the CS6 gene. Additives such as iron and NaCl modulate promoter activity in a dose-dependent manner. From the promoter analysis, it can be said that the minimal promoter lies between -254 and +1, while the important region between -350 bp and -255 bp upstream might contain elements needed to control CS6 gene expression.

Keywords: microbiology, promoter, colonization factor, ETEC

Procedia PDF Downloads 167
13385 Supervisor Controller-Based Colored Petri Nets for Deadlock Control and Machine Failures in Automated Manufacturing Systems

Authors: Husam Kaid, Abdulrahman Al-Ahmari, Zhiwu Li

Abstract:

This paper develops a robust deadlock control technique for shared and unreliable resources in automated manufacturing systems (AMSs) based on structural analysis and colored Petri nets; the technique consists of three steps. The first step uses strict minimal siphon control to create a live (deadlock-free) system, without considering resource failure. The second step uses a colored Petri net-based approach in which all monitors designed in the first step are merged into a single monitor. The third step addresses the deadlock control problems caused by resource failures: for all resource failures in the Petri net model, a common recovery subnet based on colored Petri nets is proposed and added to the system obtained in the second step to make it reliable. The proposed approach is evaluated using an AMS from the literature. The results show that the proposed approach can be applied to an unreliable complex Petri net model, has a simpler structure and less computational complexity, and can obtain one common recovery subnet to model all resource failures.
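
The structural analysis above rests on the basic Petri net firing semantics. A minimal sketch of that machinery, on a toy two-transition net rather than the paper's AMS model, shows how a dead marking (one where no transition is enabled) is detected by exhaustive exploration of the reachability graph.

```python
# Toy place-transition net: marking, firing rule, and a naive deadlock check.
from collections import deque

# places indexed 0..2; each transition consumes (pre) and produces (post) tokens
transitions = {
    "t1": {"pre": (1, 0, 0), "post": (0, 1, 0)},
    "t2": {"pre": (0, 1, 1), "post": (1, 0, 0)},
}

def enabled(marking, t):
    return all(m >= p for m, p in zip(marking, transitions[t]["pre"]))

def fire(marking, t):
    return tuple(m - p + q for m, p, q in
                 zip(marking, transitions[t]["pre"], transitions[t]["post"]))

def dead_markings(m0):
    """Exhaustively explore the reachability graph; return dead markings."""
    seen, dead, queue = {m0}, [], deque([m0])
    while queue:
        m = queue.popleft()
        succ = [fire(m, t) for t in transitions if enabled(m, t)]
        if not succ:
            dead.append(m)           # deadlock: nothing can fire
        for s in succ:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return dead

print(dead_markings((1, 0, 1)))      # prints [(0, 1, 0)]: a reachable deadlock
```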

Keywords: automated manufacturing system, colored Petri net, deadlocks, siphon

Procedia PDF Downloads 133
13384 Computational Determination of the Magneto-Electronic Properties of Ce₁₋ₓCuₓO₂ (x=12.5%): Emerging Material for Spintronic Devices

Authors: Aicha Bouhlala, Sabah Chettibi

Abstract:

Doping CeO₂ with transition metals is an effective way of tuning its properties. In the present work, we have performed self-consistent ab-initio calculations using the full-potential linearized augmented plane-wave (FP-LAPW) method, based on density functional theory (DFT) as implemented in the Wien2k simulation code, to study the structural, electronic, and magnetic properties of the fluorite-type oxide Ce₁₋ₓCuₓO₂ (x=12.5%) and to explore the effects of the Cu dopant in ceria. The exchange-correlation potential has been treated using the Perdew-Burke-Ernzerhof functional revised for solids (PBEsol). For the structural properties, the equilibrium lattice constant of the compound is found to be 5.382 Å. For the electronic properties, the spin-polarized band structure elucidates the semiconducting nature of the material in both spin channels, with a narrow band gap in the spin-down configuration (0.162 eV) and a band gap of 2.067 eV in the spin-up configuration. The dopant atom Cu plays a vital role in increasing the magnetic moment of the supercell, and the total magnetic moment is found to be 2.99438 μB. The compound Cu-doped CeO₂ therefore shows strong ferromagnetic behavior. The predicted results suggest the compound could be a good candidate for spintronics applications.
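
The reported equilibrium lattice constant of 5.382 Å is the kind of value typically extracted in FP-LAPW studies by fitting total energies computed at several cell volumes to an equation of state. The sketch below illustrates that step with a third-order Birch-Murnaghan fit; the E(V) points are synthetic placeholders, not Wien2k output from this work.

```python
# Equation-of-state fit: recover the equilibrium lattice constant from E(V).
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan equation of state."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9 * V0 * B0 / 16 * ((eta - 1) ** 3 * Bp
                                    + (eta - 1) ** 2 * (6 - 4 * eta))

a = np.linspace(5.2, 5.6, 9)             # trial lattice constants, Å
V = a ** 3                               # fluorite cell volume, Å^3
E = 0.002 * (V - 5.382 ** 3) ** 2 - 10   # synthetic E(V) with min at 5.382 Å

popt, _ = curve_fit(birch_murnaghan, V, E, p0=(E.min(), 5.38 ** 3, 0.5, 4.0))
E0, V0, B0, Bp = popt
print(f"equilibrium lattice constant a0 = {V0 ** (1 / 3):.3f} Å")
```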

Keywords: Cu-doped CeO₂, DFT, Wien2k, properties

Procedia PDF Downloads 260
13383 Multi-Dimensional (Quantitative and Qualitative) Longitudinal Research Methods for Biomedical Research of Post-COVID-19 (“Long Covid”) Symptoms

Authors: Steven G. Sclan

Abstract:

Background: Since December 2019, the world has been afflicted by the spread of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which is responsible for the condition referred to as COVID-19. The illness has had a cataclysmic impact on the political, social, economic, and overall well-being of the population of the entire globe. While COVID-19 has had a substantial worldwide fatality impact, it may have an even greater effect on the socioeconomic and medical well-being of, and healthcare planning for, the surviving population. Significance: Many more persons survive the infection than die from it, and many of those patients have noted ongoing, persistent symptoms after successfully enduring the acute phase of the illness. Recognition and understanding of these symptoms are crucial for developing and arranging efficacious models of care for all patients surviving acute COVID illness and plagued by post-acute symptoms, whether or not they were hospitalized. Furthermore, regarding COVID infection in children (< 18 y/o), although COVID-positive children may not be major vectors of infective transmission, it now appears that many more children than initially thought carry the virus without obvious symptomatic expression. It seems reasonable to wonder whether viral effects occur in children who are COVID-positive and now asymptomatic, and whether, over time, they might also experience similar symptoms. An even more significant question is whether COVID-positive asymptomatic children might manifest increased multiple health problems as they grow – i.e., developmental complications (e.g., physical/medical, metabolic, neurobehavioral, etc.) – in comparison to children who remained consistently COVID-negative during the pandemic. Topics Addressed and Theoretical Importance: This review is important because of its description of both quantitative and qualitative methods for clinical and biomedical research. Topics reviewed will consider the importance of well-designed, comprehensive (i.e., quantitative and qualitative) longitudinal studies of post-COVID-19 symptoms in both adults and children. Also reviewed will be the general characteristics of longitudinal studies, together with a proposed study model. Also discussed will be the benefit of longitudinal studies for the development of efficacious interventions and for the establishment of cogent, practical, and efficacious community healthcare service planning for post-acute COVID patients. Conclusion: Results of multi-dimensional, longitudinal studies will have important theoretical implications. These studies will help to improve our understanding of the pathophysiology of long COVID and will aid in the identification of potential targets for treatment. Such studies can also provide valuable insights into the long-term impact of COVID-19 on public health and socioeconomics.

Keywords: COVID-19, post-COVID-19, long COVID, longitudinal research, quantitative research, qualitative research

Procedia PDF Downloads 63
13382 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of a participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of randomly sampled photon bundles were recorded to train an artificial neural network (ANN) back-propagation model, with the flux calculated by the standard quasi-PMC taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help in further reduction of computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the concerned environment can be fully addressed by the ANN model. Better results can be achieved in this unexplored domain.
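
The variance advantage of low-discrepancy sequences over pseudo-random sampling can be demonstrated on a stand-in integrand with scipy.stats.qmc. The sketch below compares the spread of plain Monte Carlo and scrambled Sobol estimators; it mimics the QMC-versus-PMC comparison in spirit only and does not reproduce the radiative transfer problem.

```python
# Estimate the integral of exp(-x)*exp(-y) over [0,1]^2 with pseudo-random
# and Sobol points, and compare estimator spread across repetitions.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # stand-in for a photon-bundle tally; exact integral is (1 - 1/e)^2
    return np.exp(-x).prod(axis=1)

exact = (1 - np.exp(-1)) ** 2
n = 2 ** 10                          # power of 2 keeps the Sobol set balanced
est_mc, est_qmc = [], []
for seed in range(50):
    rng = np.random.default_rng(seed)
    est_mc.append(integrand(rng.random((n, 2))).mean())
    sob = qmc.Sobol(d=2, scramble=True, seed=seed)
    est_qmc.append(integrand(sob.random(n)).mean())

print(f"exact    = {exact:.6f}")
print(f"MC  std  = {np.std(est_mc):.2e}")
print(f"QMC std  = {np.std(est_qmc):.2e}   # typically orders of magnitude lower")
```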

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 228
13381 Improving Healthcare Readiness to Respond to Human Trafficking: A Case Study

Authors: Traci A. Hefner

Abstract:

Limited research exists on the readiness of emergency departments to respond to human trafficking (HT). The purpose of this qualitative case study was to improve the readiness of a Department of Emergency Medicine (ED), located in the southeast region of the United States, in identifying, assessing, and responding to trafficked individuals. The research objectives were to 1) provide an organizing framework to understand the ED’s readiness to respond to HT, using the Transtheoretical Model’s stages of change construct, 2) explain the readiness of the ED through a three-pronged contextual approach that included policies and procedures, patient data collection processes, and clinical practice methods, and 3) develop recommendations to respond to HT. Content analysis was used for document reviews and on-site observations, while thematic analysis identified themes of staff perceptions of the ED’s readiness in interviews of over 30 clinical and non-clinical healthcare professionals. Results demonstrated low levels of readiness to identify HT through the ED’s policies and procedures, data collection processes, and clinical practice methods. Clinical practice-related factors consisted of limited awareness of HT warning signs and low levels of knowledge about community resources for possible HT referrals. Policy and practice recommendations to increase the ED’s readiness to respond to HT included: developing staff training across the ED system to enhance awareness of HT warning signs, incorporating HT into current policies and procedures for vulnerable patient populations, and creating an HT protocol that addresses policies and procedures, screening tools, and community referrals.

Keywords: emergency medicine, human trafficking, organizational assessment, stages of change

Procedia PDF Downloads 151
13380 Bidirectional Long Short-Term Memory-Based Signal Detection for Orthogonal Frequency Division Multiplexing With All Index Modulation

Authors: Mahmut Yildirim

Abstract:

This paper proposes bidirectional long short-term memory (Bi-LSTM) network-aided deep learning (DL)-based signal detection for orthogonal frequency division multiplexing with all index modulation (OFDM-AIM), namely Bi-DeepAIM. OFDM-AIM was developed to increase the spectral efficiency of OFDM with index modulation (OFDM-IM), a promising multi-carrier technique for communication systems beyond 5G. In this paper, owing to its strong classification ability, the Bi-LSTM is considered an alternative to the maximum likelihood (ML) algorithm used for signal detection in the classical OFDM-AIM scheme. The performance of Bi-DeepAIM is compared with LSTM network-aided DL-based OFDM-AIM (DeepAIM) and the classic OFDM-AIM with ML-based signal detection, in terms of bit error rate (BER) and computational time. Simulation results show that Bi-DeepAIM obtains better BER performance than DeepAIM and lower signal detection time than ML-based OFDM-AIM.
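
A minimal sketch of a Bi-LSTM detector head of the kind described above is given below in PyTorch. The dimensions (2 input features for the I/Q parts, 4 subcarriers per subblock, 16 candidate classes) are illustrative assumptions, not the paper's configuration.

```python
# Bi-LSTM classifier: received OFDM subblock samples in, subblock class out.
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    def __init__(self, n_features=2, hidden=64, n_classes=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)   # 2x for both directions

    def forward(self, x):                 # x: (batch, subcarriers, features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # classify from the last time step

model = BiLSTMDetector()
rx = torch.randn(8, 4, 2)                # 8 noisy subblocks of 4 subcarriers
logits = model(rx)                       # (8, 16) scores over candidate blocks
pred = logits.argmax(dim=1)              # detected class, cf. the ML search
print(pred.shape)
```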

Keywords: bidirectional long short-term memory, deep learning, maximum likelihood, OFDM with all index modulation, signal detection

Procedia PDF Downloads 77
13379 Numerical Iteration Method to Find New Formulas for Nonlinear Equations

Authors: Kholod Mohammad Abualnaja

Abstract:

A new algorithm is presented to derive new iterative methods for solving nonlinear equations F(x)=0 by using the variational iteration method. The efficiency of the considered method is illustrated by an example. The results show that the proposed iteration technique, without linearization or small perturbation, is very effective and convenient.
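
The flavour of such derived formulas can be illustrated in the simplest case: writing the correction functional as x_{n+1} = x_n + λF(x_n) and choosing the Lagrange multiplier λ = -1/F'(x_n) recovers a Newton-type iteration. The sketch below applies it to the standard test equation F(x) = x³ + 4x² - 10 = 0 (the abstract does not specify its own example).

```python
# Newton-type iteration derived from a variational correction functional.
def newton_like(F, dF, x0, tol=1e-12, max_iter=50):
    x = x0
    for k in range(max_iter):
        lam = -1.0 / dF(x)          # stationary choice of Lagrange multiplier
        x_new = x + lam * F(x)      # corrected iterate
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

F = lambda x: x**3 + 4 * x**2 - 10
dF = lambda x: 3 * x**2 + 8 * x
root, iters = newton_like(F, dF, x0=1.5)
print(f"root = {root:.12f} after {iters} iterations")  # ~1.365230013414
```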

Keywords: variational iteration method, nonlinear equations, Lagrange multiplier, algorithms

Procedia PDF Downloads 549
13378 Geophysical Approach in the Geological Characterization of a Dam Site: Case of the Chebabta-Dam, Meskiana, Oum El-Bouaghi

Authors: Benhammadi Hocine, Djamel Boubaya, Chaffai Hicham

Abstract:

The Meskiana area is characterized by a semi-arid climate where the water supply for irrigation and industry is not sufficient, as priority goes to domestic use. To cope with increasing population growth and development, the authorities have considered building a new water-retaining structure on one of the major temporary water streams. For this purpose, the Chebabta site on Oued Meskiana was chosen as the future dam site, as it is large enough to store the desired volume of water. This study investigates the conditions of the site and the adequacy of the ground as a foundation for the projected dam. The conditions of the site include the geological structure, mainly the presence of discontinuities in the formation on which the dam will be built, the nature of the lithologies under the foundation and the future lake, and the presence of any hazard. Such site characterization is usually carried out using different methods in order to highlight any problematic buried structure, and in this context the various geophysical techniques remain the most used ones. Three geophysical methods were used at the Chebabta dam site, namely electrical resistivity surveying, seismic refraction, and tomography. The choice of techniques and the location of the survey lines were made on the basis of the available geological data; in this sense, profiles were established on both banks of Oued Meskiana. The results obtained allowed a better characterization of the geological structure, defining the limit between the surface cover and the bedrock, in other words, the limit between the weathered zone and the bedrock. Their respective thicknesses were also determined by seismic refraction and electrical resistivity sounding. Moreover, the tomographic imaging succeeded in locating a fault structure passing through the right bank of the wadi.
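
Under a simple two-layer assumption, the refraction data mentioned above yield the thickness of the weathered cover from the crossover distance x_c and the layer velocities v₁ and v₂, via z = (x_c/2)·sqrt((v₂ - v₁)/(v₂ + v₁)). The numbers in the sketch below are illustrative, not values from the Chebabta survey.

```python
# Two-layer seismic refraction: depth to bedrock from the crossover distance.
import math

v1 = 800.0    # m/s, weathered-zone P-wave velocity (assumed)
v2 = 3200.0   # m/s, bedrock P-wave velocity (assumed)
xc = 60.0     # m, crossover distance read off the travel-time plot (assumed)

z = (xc / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))
print(f"estimated cover thickness: {z:.1f} m")   # ~23 m for these inputs
```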

Keywords: dam site, fault, geophysics, investigation, Meskiana

Procedia PDF Downloads 93
13377 Defining Priority Areas for Biodiversity Conservation to Support Zoning of Protected Areas: A Case Study from Vietnam

Authors: Xuan Dinh Vu, Elmar Csaplovics

Abstract:

There has been an increasing need for methods to define priority areas for biodiversity conservation, since the effectiveness of biodiversity conservation in protected areas largely depends on the availability of material resources. The identification of priority areas requires the integration of biodiversity data together with social data on human pressures and responses. However, the deficit of comprehensive data and reliable methods becomes a key challenge in zoning where the demand for conservation is most urgent and where the outcomes of conservation strategies can be maximized. In order to fill this gap, this study applied the environmental Condition-Pressure-Response model to suggest a set of criteria for identifying priority areas for biodiversity conservation. Our empirical data were compiled from 185 respondents, categorized into three main groups: governmental administration, research institutions, and protected areas in Vietnam, using a well-designed questionnaire. The Analytic Hierarchy Process (AHP) was then used to identify the weight of each criterion. Our results show that the priority level for biodiversity conservation can be identified by three main indicators: condition, pressure, and response, with weights of 26%, 41%, and 33%, respectively. Based on the three indicators, 7 criteria and 15 sub-criteria were developed to support the definition of priority areas for biodiversity conservation and the zoning of protected areas. In addition, our study revealed that the governmental administration and protected area groups put their focus on the 'Pressure' indicator, while the research institution group emphasized the importance of the 'Response' indicator in the evaluation process. Our results provide recommendations for applying the developed criteria to identify priority areas for biodiversity conservation in Vietnam.
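
The AHP step behind the reported 26/41/33% split works by reducing a pairwise comparison matrix to its principal eigenvector. The sketch below shows that computation; the comparison matrix is an illustrative one, chosen so that it roughly reproduces the reported weights, not the aggregated judgments of the 185 respondents.

```python
# AHP: priority weights from the principal eigenvector, plus consistency check.
import numpy as np

# A[i, j] = how much more important criterion i is than j (Saaty's 1-9 scale);
# order: (condition, pressure, response)
A = np.array([[1.0, 1 / 2, 1.0],
              [2.0, 1.0,   1.0],
              [1.0, 1.0,   1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # normalized priority weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)          # consistency index
CR = CI / 0.58                                # random index RI = 0.58 for n = 3
print(f"weights (condition, pressure, response) = {np.round(w, 3)}")
print(f"consistency ratio = {CR:.3f} (acceptable if < 0.10)")
```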

Keywords: biodiversity conservation, condition–pressure–response model, criteria, priority areas, protected areas

Procedia PDF Downloads 177
13376 A Review of Research on Pre-training Technology for Natural Language Processing

Authors: Moquan Gong

Abstract:

In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field of natural language processing long used word vector methods such as Word2Vec to encode text; these word vector methods can be regarded as static pre-training techniques. However, such context-free text representations bring very limited improvement to downstream natural language processing tasks and cannot solve the problem of word polysemy. ELMo proposed a context-sensitive text representation method that can effectively handle polysemy. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, the BERT model significantly improved performance on many typical downstream tasks, greatly promoting technological development in the field and ushering natural language processing into the era of dynamic pre-training technology. A large number of pre-trained language models based on BERT and XLNet have continued to emerge since, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history, and introduces in detail the classic pre-training technologies in the field of natural language processing, including early static pre-training techniques and classic dynamic pre-training techniques; it then briefly reviews a series of follow-up pre-training technologies, including improved models based on BERT and XLNet; on this basis, it analyzes the problems faced by current pre-training technology research; finally, it looks forward to future development trends of pre-training technology.
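
The polysemy limitation of static word vectors can be made concrete: a contextual model assigns the same surface word different vectors in different sentences. A minimal sketch with the Hugging Face transformers package follows (it assumes torch and transformers are installed and downloads bert-base-uncased).

```python
# Contextual embeddings: the word "bank" gets different vectors per context.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def vector_for(sentence, word):
    """Return the contextual embedding of `word` within `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]      # (tokens, 768)
    idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[idx]

v1 = vector_for("she sat on the river bank", "bank")
v2 = vector_for("he deposited cash at the bank", "bank")
cos = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
print(f"cosine similarity across contexts: {cos.item():.3f}")  # < 1
```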

Keywords: natural language processing, pre-training, language model, word vectors

Procedia PDF Downloads 67
13375 Structure Clustering for Milestoning Applications of Complex Conformational Transitions

Authors: Amani Tahat, Serdal Kirmizialtin

Abstract:

Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS), and transition path sampling are the prime choices for extending the timescale of all-atom molecular dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational states into natural subgroups based on their similarity, an essential statistical methodology for analyzing the numerous sets of empirical data produced by molecular dynamics (MD) simulations. The local transition kernel among these clusters is later used to connect the metastable states, using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures can easily confuse the identification of clusters when using the traditional hierarchical clustering methodology. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work, we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over commonly used proteins for studying the cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model of conformational transitions; ii) the conformational space of nucleic acids is high-dimensional, and a diverse set of internal coordinates is necessary to describe their metastable states, posing a challenge in studying conformational transitions. Hence, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way over a wide range of confusing data conditions. The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data; the self-organizing map (SOM) neural network, also known as a Kohonen network, is the second. The performance of the neural network as well as of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confusing empirical data when studying conformational transitions in biomolecules.
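
Both clustering routes compared in the work can be prototyped in a few lines. The sketch below runs the first of them, single-linkage hierarchical clustering, on a synthetic two-basin toy projection standing in for the cis and trans states; real input would be an internal-coordinate matrix from the MD trajectory, and the SOM route would replace the linkage step with a trained Kohonen map.

```python
# Single-linkage hierarchical clustering on a toy 2D conformational projection.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
# two synthetic metastable basins (e.g., cis-like and trans-like frames)
frames = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
                    rng.normal(4.0, 0.3, (100, 2))])

Z = linkage(frames, method="single")               # single-linkage dendrogram
labels = fcluster(Z, t=1.5, criterion="distance")  # cut at distance 1.5
print("cluster sizes:", np.bincount(labels)[1:])   # expect ~[100, 100]
```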

Keywords: milestoning, self organizing map, single linkage, structure clustering

Procedia PDF Downloads 225
13374 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review

Authors: Faisal Muhibuddin, Ani Dijah Rahajoe

Abstract:

This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, and K-nearest neighbour classification. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research: a detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
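
The classifier families the review covers are all available off the shelf; a minimal scikit-learn sketch on a synthetic stand-in dataset follows (encoded crime records would take the place of the generated features).

```python
# Naive Bayes and k-NN as supervised classifiers; k-means as unsupervised grouping.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

# k-means is unsupervised: it groups incidents without labels, e.g. to surface
# incident types or hotspots before classification.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("k-means cluster sizes:",
      [int((km.labels_ == c).sum()) for c in range(3)])
```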

Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review

Procedia PDF Downloads 75
13373 The Transformation of Architecture through the Technological Developments in History: Future Architecture Scenario

Authors: Adel Gurel, Ozge Ceylin Yildirim

Abstract:

Nowadays, design and architecture are being affected and transformed by rapid advancements in technology, economics, politics, society, and culture. Architecture has been transforming with the latest developments since the inclusion of computers in design. The integration of design into the computational environment has revolutionized architecture, and new perspectives in architecture have been gained. The history of architecture shows the various technological developments and changes through which architecture has transformed over time. Therefore, analyzing the integration between technology and the history of the architectural process makes it possible to build a consensus on the idea of how architecture is to proceed. In this study, each period that arises from the integration of technology into architecture is addressed within its historical process. At the same time, changes in architecture via technology are identified as important milestones, and predictions regarding the future of architecture are made. Developments and changes in technology, and the use of technology in architecture over the years, are analyzed comparatively in charts and graphs. The historical process of architecture and its transformation via technology are supported with a detailed literature review and consolidated with an examination of the focal points of 20th-century architecture under the titles parametric design, genetic architecture, simulation, and biomimicry. It is concluded from this historical survey that developments in architecture cannot keep up with advancements in technology, and that recent developments in technology overshadow architecture; indeed, technology now decides the direction of architecture. As a result, a scenario is presented regarding the reach of technology in the future of architecture and the role of the architect.

Keywords: computer technologies, future architecture, scientific developments, transformation

Procedia PDF Downloads 195
13372 Characterization and Degradation of 3D Printed Polycaprolactone-Freeze Dried Bone Matrix Constructs for Use in Critical Sized Bone Defects

Authors: Samantha Meyr, Eman Mirdamadi, Martha Wang, Tao Lowe, Ryan Smith, Quinn Burke

Abstract:

Treatment options for critical-sized bone defects (CSDs) remain a major clinical orthopedic challenge. CSDs are uniquely contoured diseased or damaged bones and can be defined as defects that will not heal spontaneously and require surgical intervention. Autografts are the current gold standard CSD treatment; they are histocompatible and provoke a minimal immunogenic response, but they can cause donor-site morbidity and often cannot supply the size required for replacement. As an alternative to traditional surgical methods, bone tissue engineering can be implemented via 3D printing. Freeze-dried bone matrix (FDBM) is a type of graft material available, but it functions as desired only in the presence of bone growth factors. Polycaprolactone (PCL) is a known biodegradable material with good biocompatibility that has proven manageable in 3D printing as a medical device. A 3D extrusion printing strategy is introduced to print these materials into scaffolds for bone grafting, which could be more accessible and rapid than the current standard. Mechanical, thermal, cytotoxic, and physical properties were investigated throughout a degradation period of 6 months using fibroblasts and dental pulp stem cells. PCL-FDBM scaffolds were successfully printed with high print fidelity in their respective pore sizes and allograft content. Additionally, we have created a method for evaluating PCL using differential scanning calorimetry (DSC) and have evaluated PCL degradation over roughly 6 months.

Keywords: 3D printing, bone tissue engineering, cytotoxicity, degradation, scaffolds

Procedia PDF Downloads 111
13371 Electrochemical Synthesis of Copper(II) Coordination Polymers Using Rigid and Flexible Ligands

Authors: P. Mirahmadpour, M. H. Banitaba, D. Nematollahi

Abstract:

The chemistry of coordination polymers has grown exponentially in recent years, not only because of their interesting architectures but also due to their various technical applications in many fields, including ion exchange, chemical catalysis, small-molecule separations, and drug release. The use of bridging ligands for the controlled self-assembly of one-, two- or three-dimensional metallo-supramolecular species has been the subject of serious study in the last decade. Numerous synthetic methods have been proposed for the preparation of coordination polymers, such as (a) diffusion from the gas phase, (b) slow diffusion of the reactants into a polymeric matrix, (c) evaporation of the solvent at ambient or reduced temperature, (d) temperature-controlled cooling, (e) precipitation or recrystallisation from a mixture of solvents, and (f) hydrothermal synthesis. The electrosynthetic process offers several advantages over conventional approaches; in general, electrochemical synthesis allows synthesis under milder conditions than typical solvothermal or microwave synthesis. In this work we introduce a simple electrochemical method for growing copper-based coordination polymers with the flexible ligand 2,2'-thiodiacetic acid (TDA) and the rigid ligand 1,2,4,5-benzenetetracarboxylate (BTC). The structures of the coordination polymers were characterized by scanning electron microscopy (SEM), X-ray powder diffraction (XRD), elemental analysis, thermogravimetric analysis (TG) and differential thermal analysis (DTA). Single-crystal X-ray diffraction analysis revealed different conformations of the ligands, different coordination modes of the carboxylate group, and different coordination geometries of the copper atoms. Electrochemical synthesis of coordination polymers has advantages such as faster synthesis at lower temperature compared with conventional chemical methods, and crystallization of the desired materials in a single synthetic step.

Keywords: 1,2,4,5-benzenetetracarboxylate, coordination polymer, copper, 2,2'-thiodiacetic acid

Procedia PDF Downloads 212
13370 Free Radical Scavenging, Antioxidant Activity, Phenolic and Alkaloid Contents and Inhibitory Properties against α-Amylase and Invertase Enzymes of Stem Bark Extracts of Coula edulis B.

Authors: Eric Beyegue, Boris Azantza, Judith Laure Ngondi, Julius E. Oben

Abstract:

Background: It is clear that the phytochemical constituents of plants exhibit free radical scavenging, antioxidant and glycosylation-related properties. This study investigated the in vitro antioxidant and free radical scavenging activities, and the inhibitory activities against the α-amylase and invertase enzymes, of stem bark extracts of C. edulis (Olacaceae). Methods: Four extracts (hexane, dichloromethane, ethanol, and aqueous) from the bark of C. edulis were used in this study. Colorimetric in vitro methods were used to evaluate free radical scavenging activity (DPPH, ABTS, NO, OH), antioxidant capacity, glycosylation activity, inhibition of α-amylase and invertase activities, and phenolic, flavonoid and alkaloid contents. Results: C. edulis extracts (CEE) had a high scavenging potential for the 2,2-diphenyl-1-picrylhydrazyl (DPPH), hydroxyl (OH), nitric oxide (NO) and 2,2'-azinobis(3-ethylbenzthiazoline)-6-sulfonic acid (ABTS) radicals, as well as glucose scavenging, with IC50 values varying between 41.95 and 36694.43 µg/ml depending on the solvent of extraction. The ethanol extract of C. edulis stem bark (CE EtOH) showed the highest polyphenolic (289.10 ± 30.32), flavonoid (1.12 ± 0.09) and alkaloid (18.47 ± 0.16) contents. All the tested extracts demonstrated a relatively high inhibition potential against the α-amylase and invertase digestive enzymes. Conclusion: This study suggests that CEE exhibits a high antioxidant potential and a significant inhibition potential against digestive enzymes.
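
IC50 values such as those reported are conventionally read off a dose-response curve fitted to the percent-scavenging measurements. The sketch below shows that step with a four-parameter logistic fit; the data points are invented placeholders, not measurements from this study.

```python
# IC50 from a four-parameter logistic fit to percent-scavenging data.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

conc = np.array([10., 30., 100., 300., 1000.])   # extract concentration, µg/ml
scav = np.array([8., 22., 49., 75., 92.])        # % DPPH scavenging (invented)

popt, _ = curve_fit(logistic4, conc, scav, p0=(0., 100., 100., 1.))
print(f"IC50 ≈ {popt[2]:.1f} µg/ml")             # concentration at half-maximum
```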

Keywords: Coula edulis, antioxidant, scavenging activity, amylase, invertase

Procedia PDF Downloads 354
13369 Study of Methods to Reduce Carbon Emissions in Structural Engineering

Authors: Richard Krijnen, Alan Wang

Abstract:

As the world is aiming to reach net zero around 2050, structural engineers must begin finding solutions to contribute to this global initiative. Approximately 40% of global energy-related emissions are due to buildings and construction, and a building’s structure accounts for 50% of its embodied carbon, which indicates that structural engineers are key contributors to finding solutions to reach carbon neutrality. However, this task presents a multifaceted challenge as structural engineers must navigate technical, safety and economic considerations while striving to reduce emissions. This study reviews several options and considerations to reduce carbon emissions that structural engineers can use in their future designs without compromising the structural integrity of their proposed designs. Low-carbon structures should adhere to several guiding principles. Firstly, prioritize the selection of materials with low carbon footprints, such as recyclable or alternative materials. Optimization of design and engineering methods is crucial to minimize material usage. Encouraging the use of recyclable and renewable materials reduces dependency on natural resources. Energy efficiency is another key consideration involving the design of structures to minimize energy consumption across various systems. Choosing local materials and minimizing transportation distances help in reducing carbon emissions during transport. Innovation, such as pre-fabrication and modular design or low-carbon concrete, can further cut down carbon emissions during manufacturing and construction. Collaboration among stakeholders and sharing experiences and resources are essential for advancing the development and application of low-carbon structures. This paper identifies currently available tools and solutions to reduce embodied carbon in structures, which can be used as part of daily structural engineering practice.

Keywords: efficient structural design, embodied carbon, low-carbon material, sustainable structural design

Procedia PDF Downloads 47
13368 The Development of Traffic Devices Using Natural Rubber in Thailand

Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern

Abstract:

Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with rubber based on Dry Rubber Content (DRC), the quality of Ribbed Smoked Sheet (RSS) is better; however, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and rubber fender barriers (RFB) are considered. For the flexible guideposts, both RSS and DRC60% are used, but for RFB only RSS is used, owing to the controlled performance tests. The objective of both flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. The function of both devices is to protect road users and vehicles by absorbing impact forces from vehicles so as to reduce the severity of road accidents; this provides mitigation that moderates the injury of motorists from severe to moderate. The approach is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, material performance, such as tensile strength and durability, is assessed to determine the modulus of elasticity and related properties. In the laboratory, crash simulation, finite element analysis of materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory; tensile, compressive, and weathering (durability) tests follow ASTM standards, and a cycle-repetition test of the flexible guideposts is also taken into consideration. The final step is to fabricate all materials and run a real test section in the field. The RFB testing comprises 13 crash tests: 7 pickup truck tests and 6 motorcycle tests. Vehicular crash testing of this kind is happening for the first time in Thailand, applying trial-and-error methods; for example, the road crash test under the NCHRP-TL3 standard (100 kph) has been changed to MASH 2016, owing to the fact that MASH 2016 is better than NCHRP in terms of the speed, types, and weight of vehicles and the crash angle. In the MASH process, Test Level 6 (TL-6), composed of a 2,270 kg pickup truck, 100 kph, and a 25-degree crash angle, is selected. The final crash test will be done, and the whole system evaluated again, in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer rank among the top ten countries in the world for road accidents.

Keywords: LRFD (load and resistance factor design), ASTM (American Society for Testing and Materials), NCHRP (National Cooperative Highway Research Program), MASH (Manual for Assessing Safety Hardware)

Procedia PDF Downloads 132
13367 Determination of the Informativeness of Instrumental Research Methods in Assessing Risk Factors for the Development of Renal Dysfunction in Elderly Patients with Chronic Ischemic Heart Disease

Authors: Aksana N. Popel, Volha A. Sujayeva, Olga V. Kоshlataja, Irеna S. Karpava

Abstract:

Introduction: It is a known fact that cardiovascular pathology and its complications cause a more severe course and worse prognosis in patients with comorbid kidney pathology. Chronic kidney disease (CKD) is associated with inflammation, endothelial dysfunction, and increased activity of the sympathoadrenal system. This increases the risk of cardiovascular disease and the progression of kidney pathology. The above determines the need to identify cardiorenal changes at an early stage to reduce the risks of cardiovascular complications and CKD progression. Objective: To identify risk factors (RF) for the development of CKD in elderly patients with chronic ischemic heart disease (CIHD). Methods: The study included 64 patients (40 women and 24 men) with a mean age of 74.4±4.5 years with coronary heart disease, without a history of structural kidney pathology or CKD. All patients underwent transthoracic echocardiography (TTE) and kidney ultrasound (KU) using GE Vivid 9 equipment (GE HealthCare, USA), and cardiac computed tomography (CCT) using a Siemens Somatom Force (Siemens Healthineers AG, Germany), at 3 months and at 1 year. The data obtained were analyzed using multiple regression analysis and the nonparametric Mann-Whitney test; statistical analysis was performed using STATISTICA 12.0 (StatSoft Inc.). Results: At baseline, CKD was not diagnosed in any patient. At 3 months, CKD was diagnosed as follows: stage C1 in 11 people (18%), stage C2 in 4 people (6%), stage C3A in 11 people (18%), and stage C3B in 2 people (3%). At 1 year, CKD was diagnosed: stage C1 in 22 people (35%), stage C2 in 5 people (8%), stage C3A in 17 people (27%), and stage C3B in 10 people (15%). At 3 months, the statistically significant (p<0.05) risk factors were, according to TTE: mitral peak E-wave velocity (U=678, p=0.039), mitral E-velocity deceleration time (U=514, p=0.0168), and mitral peak A-wave velocity (U=682, p=0.013). At 1 year, the statistically significant (p<0.05) risk factors were, according to TTE: left ventricular (LV) end-systolic volume in B-mode (U=134, p=0.006), LV end-diastolic volume in B-mode (U=177, p=0.04), LV ejection fraction in B-mode (U=135, p=0.006), left atrial (LA) volume (U=178, p=0.021), LV hypertrophy (U=294, p=0.04), and mitral valve (MV) fibrosis (U=328, p=0.01); according to CCT: epicardial fat thickness (EFT) on the right ventricle (U=8, p=0.015); and according to KU: interlobar renal artery resistance index (RI) (U=224, p=0.02) and segmental renal artery RI (U=409, p=0.016). Conclusions: Both TTE and KU are very informative methods for determining additional risk factors for CKD development and progression. The most informative risk factors were LV global systolic and diastolic function, LV and LA volumes, LV hypertrophy, MV fibrosis, the interlobar and segmental renal artery RIs, and EFT.

Keywords: chronic kidney disease, ischemic heart disease, prognosis, risk factors

Procedia PDF Downloads 30
13366 Digital Leadership and HR Practices

Authors: Joanna Konstantinou

Abstract:

Due to the pandemic, we have recently witnessed an explosion of HR Tech offering a variety of solutions for digital transformation, as well as a large number of HR practices implemented by professionals both in data science and occupational psychology. The aim of this study is to explore the impact of these practices and their effectiveness and to develop an understanding of digital leadership. The study will be based on semi-structured interviews using qualitative research methods and tools.

Keywords: HR practices, digital transformation, pandemic, digital leadership

Procedia PDF Downloads 211
13365 Deciding Graph Non-Hamiltonicity via a Closure Algorithm

Authors: E. R. Swart, S. J. Gismondi, N. R. Swart, C. E. Bell

Abstract:

We present a heuristic algorithm that decides graph non-Hamiltonicity. All graphs are directed, with each undirected edge regarded as a pair of counter-directed arcs. Each of the n! Hamilton cycles in a complete graph on n+1 vertices is mapped to an n-permutation matrix P where p(u,i)=1 if and only if the i-th arc in a cycle enters vertex u, starting and ending at vertex n+1. We first create the exclusion set E by noting all arcs (u, v) not in G, sufficient to code precisely all cycles excluded from G, i.e., cycles not in G use at least one arc not in G. Members are pairs of components of P, {p(u,i), p(v,i+1)}, i=1, ..., n-1. A doubly stochastic-like relaxed LP formulation of the Hamilton cycle decision problem is constructed. Each {p(u,i), p(v,i+1)} in E is coded as a variable q(u,i,v,i+1)=0, i.e., it shrinks the feasible region. We then implement the Weak Closure Algorithm (WCA), which tests necessary conditions of a matching, together with Boolean closure, to decide 0/1 variable assignments. Each {p(u,i), p(v,j)} not in E is tested for membership in E, and if possible, added to E (q(u,i,v,j)=0) to iteratively maximize |E|. If the WCA constructs E to be maximal, i.e., the set of all {p(u,i), p(v,j)}, then G is decided non-Hamiltonian; only non-Hamiltonian G share this maximal property. Ten non-Hamiltonian graphs (10 through 104 vertices) and 2000 randomized 31-vertex non-Hamiltonian graphs were tested and correctly decided non-Hamiltonian. For Hamiltonian G, the complement of E covers a matching, perhaps useful in searching for cycles. We also present an example where the WCA fails.
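
The first step of the algorithm, building the exclusion set E from the arcs missing in G, can be sketched directly from the definitions above. The toy example below is not from the paper; it simply enumerates the forbidden pairs {p(u,i), p(v,i+1)} for a small digraph, leaving the LP relaxation and the closure machinery aside.

```python
# Build the exclusion set E: an arc (u, v) missing from G forbids every pair
# {p(u, i), p(v, i+1)}, since no Hamilton cycle of G can use that arc.
from itertools import product

def exclusion_set(n, arcs):
    """Pairs {p(u,i), p(v,i+1)} forbidden because arc (u, v) is not in G."""
    vertices = range(1, n + 1)       # vertex n+1 is the fixed start/end
    missing = {(u, v) for u, v in product(vertices, repeat=2)
               if u != v and (u, v) not in arcs}
    return {((u, i), (v, i + 1)) for (u, v) in missing for i in range(1, n)}

# toy digraph on vertices {1, 2, 3} (plus fixed vertex 4): arcs both ways
# between 1-2 and 2-3, but no arc between 1 and 3
arcs = {(1, 2), (2, 1), (2, 3), (3, 2)}
E = exclusion_set(3, arcs)
print(f"|E| = {len(E)}")             # 4 forbidden pairs for the 2 missing arcs
for pair in sorted(E):
    print(pair)
```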

Keywords: Hamilton cycle decision problem, computational complexity theory, graph theory, theoretical computer science

Procedia PDF Downloads 376
13364 International Collaboration: Developing the Practice of Social Work Curriculum through Study Abroad and Participatory Research

Authors: Megan Lindsey

Abstract:

Background: Globalization presents international social work with both opportunities and challenges. Thus, the design of this international experience aligns with the three charges of the Commission on Global Social Work Education. An international collaborative effort between an American and a Scottish university social work program was based on an established university agreement. The presentation provides an overview of an international study abroad among American and Scottish social work students; further, the presenters will discuss the opportunities of international collaboration and the challenges of the project. First, we will discuss the process of a successful international collaboration. This discussion will include the planning, collaboration, and execution of the experience, along with its application to the international field of social work. Second, we will discuss the development and implementation of the participatory action research in which the students engaged to enhance their learning experience. A collaborative qualitative research project was undertaken with three goals. First, students gained experience in Scottish social services, including agency visits and presentations. Second, collaboration between American and Scottish MSW students allowed the exchange of ideas and knowledge about services and social work education. Third, students collaborated on a qualitative research method to reflect on their social work education and the formation of their professional identity. Methods/Methodology: American and Scottish students engaged in participatory action research using Photovoice methods while studying together in Scotland. Collaboration between the faculty researchers framed a series of research questions. Both universities obtained IRB approval and trained students in Photovoice methods. The student teams used the research questions and the Photovoice method to discover images that represented their professional identity formation. Two Photovoice goals grounded the study's research question. First, the methods enabled individual students to record and reflect on their professional strengths and concerns. Second, student teams promoted critical dialogue and knowledge about personal and professional issues through large- and small-group discussions of photographs. Results: The international participatory approach enabled students to contextualize their common social work education and practice experiences. Team discussions between representatives of each country resulted in understanding of professional identity formation and of the processes of social work education that contribute to that identity. Students presented photographic narrations of their knowledge and understanding of international social work education and practice. The researchers then collaborated on finding common themes: the themes showed commonalities in the quality and depth of social work education but differences regarding how professional identity is formed, and students found great differences between their own and the American accreditation and certification systems. Conclusions: The faculty researchers' collaborative themes sought to categorize the students' experiences of their professional identity. While the social work education systems are similar, there are also vast differences: the Scottish themes noted structures within American social work not found in the United Kingdom, while the American researchers noted that Scotland, as does the United Kingdom, relies on programs, agencies, and the individual social worker to provide structure to identity formation. Other themes will be presented.

Keywords: higher education curriculum, international collaboration, social sciences, action research

Procedia PDF Downloads 132
13363 Constructing a Physics Guided Machine Learning Neural Network to Predict Tonal Noise Emitted by a Propeller

Authors: Arthur D. Wiedemann, Christopher Fuller, Kyle A. Pascioni

Abstract:

With the introduction of electric motors, small unmanned aerial vehicle designers have to consider trade-offs between the acoustic noise and the thrust generated. Currently, there are few computationally cheap tools available for predicting the acoustic noise emitted by a propeller into the far field. Artificial neural networks offer a highly non-linear and adaptive model for predicting isolated and interactive tonal noise, but neural networks require large data sets, exceeding what is practical when modeling experimental results. A methodology known as physics-guided machine learning has been applied in this study to reduce the data set required to train the network. After building and evaluating several neural networks, the best model is investigated to determine how the network successfully predicts the acoustic waveform. Lastly, a post-network transfer function is developed to remove discontinuities from the predicted waveform. Overall, methodologies from physics-guided machine learning show a notable improvement in prediction performance, but additional loss functions are necessary for constructing predictive networks on small datasets.
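
The core idea, augmenting a data-fitting loss with a penalty on the residual of a known physical relation so that less training data is needed, can be sketched generically. In the toy example below the "physics" is an assumed simple harmonic constraint on a 1D regression, standing in for the propeller acoustics model; the paper's actual loss terms are not reproduced.

```python
# Physics-guided training: data loss + residual of d^2y/dx^2 = -w^2 y.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
w = 4.0                                        # assumed angular frequency

x_data = torch.rand(16, 1)                     # sparse "measurements"
y_data = torch.sin(w * x_data)

for step in range(2000):
    opt.zero_grad()
    data_loss = ((net(x_data) - y_data) ** 2).mean()

    # physics residual evaluated on unlabeled collocation points
    x_c = torch.rand(64, 1, requires_grad=True)
    y_c = net(x_c)
    dy = torch.autograd.grad(y_c.sum(), x_c, create_graph=True)[0]
    d2y = torch.autograd.grad(dy.sum(), x_c, create_graph=True)[0]
    phys_loss = ((d2y + w ** 2 * y_c) ** 2).mean()

    (data_loss + 0.1 * phys_loss).backward()
    opt.step()

print(f"final data loss: {data_loss.item():.2e}")
```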

Keywords: aeroacoustics, machine learning, propeller, rotor, neural network, physics guided machine learning

Procedia PDF Downloads 234
13362 Removal of Polycyclic Aromatic Hydrocarbons (PAHs) and the Response of Indigenous Bacteria in Highly Contaminated Aged Soil after Persulfate Oxidation

Authors: Yaling Gou, Sucai Yang, Pengwei Qiao

Abstract:

Integrated chemical-biological treatment is an attractive alternative for removing polycyclic aromatic hydrocarbons (PAHs) from contaminated soil, wherein indigenous bacteria are the key factor for the biodegradation of the PAH concentrations remaining after chemical oxidation. However, systematic study of the impact of persulfate (PS) oxidation on indigenous bacteria as well as on PAH removal is still scarce. In this study, the influences of different PS dosages (1%, 3%, 6%, and 10% [w/w]) as well as various activation methods (native iron, H₂O₂, alkaline, ferrous iron, and heat) on PAH removal and indigenous bacteria in highly contaminated aged soil were investigated. Apparent degradation of PAHs was observed in the soil treated with PS oxidation, with removal efficiencies of total PAHs ranging from 38.28% to 79.97%; the removal efficiency of total PAHs increased with increasing consumption of PS. However, bacterial abundance in the soil was negatively affected by oxidation in all treatments with PS, decreasing by 0.89-2.88 orders of magnitude compared to the untreated soil; moreover, the number of total bacteria in the soil decreased as PS consumption increased. Different PS activation methods and dosages exhibited different influences on the bacterial community composition. The bacteria capable of degrading PAHs under anoxic conditions were composed predominantly of Proteobacteria and Firmicutes, whose total amount also decreased with increasing consumption of PS. The results of this study provide important insight into the design of remediation projects for PAH-contaminated soil.

Keywords: activation method, chemical oxidation, indigenous bacteria, polycyclic aromatic hydrocarbon

Procedia PDF Downloads 121
13361 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for accurate simulation of joint performance in many applications, such as pre-operative planning, human gait analysis, and the study of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, caused by relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have described the movement of soft tissue artefact using invasive techniques such as intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as the basis for a correction procedure for hip joint kinematics.
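
The functional method referred to above commonly treats a thigh-mounted marker as moving on a sphere centred at the hip joint centre, so the centre can be recovered by a least-squares sphere fit to the marker trajectory, with soft tissue artefact appearing in the fit residual. The sketch below runs that fit on synthetic marker data; it illustrates the general functional approach, not the study's protocol.

```python
# Linear least-squares sphere fit: |p - c|^2 = r^2 rearranges to
# 2 p.c + (r^2 - |c|^2) = |p|^2, which is linear in (c, k).
import numpy as np

rng = np.random.default_rng(2)
centre_true = np.array([0.1, -0.05, 0.9])       # metres, synthetic
r_true = 0.25
# synthetic marker positions on a partial sphere about the joint centre
theta = rng.uniform(0.2, 1.2, 200)
phi = rng.uniform(-0.8, 0.8, 200)
pts = centre_true + r_true * np.c_[np.sin(theta) * np.cos(phi),
                                   np.sin(theta) * np.sin(phi),
                                   np.cos(theta)]
pts += rng.normal(0, 0.005, pts.shape)          # ~5 mm soft tissue artefact

A = np.c_[2 * pts, np.ones(len(pts))]
b = (pts ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
centre = sol[:3]
r = np.sqrt(sol[3] + centre @ centre)
print("fitted centre:", np.round(centre, 3), " radius:", round(float(r), 3))
```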

Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement

Procedia PDF Downloads 286
13360 Amplifying Sine Unit-Convolutional Neural Network: An Efficient Deep Architecture for Image Classification and Feature Visualizations

Authors: Jamshaid Ul Rahman, Faiza Makhdoom, Dianchen Lu

Abstract:

Activation functions play a decisive role in determining the capacity of deep neural networks (DNNs), as they enable neural networks to capture the nonlinearities inherent in the data fed to them. Prior research on activation functions primarily focused on the utility of monotonic or non-oscillatory functions, until the Growing Cosine Unit (GCU) broke that taboo for a number of applications. In this paper, a Convolutional Neural Network (CNN) model named ASU-CNN is proposed, which utilizes the recently designed Amplifying Sine Unit (ASU) activation function across its layers. The effect of this non-monotonic, oscillatory function is inspected through feature map visualizations from different convolutional layers. The proposed network is optimized by Adam with a fine-tuned learning rate. The network achieved promising results on both training and testing data for the classification of CIFAR-10. The experimental results affirm the computational feasibility and efficacy of the proposed model for tasks in the field of computer vision.
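
A sketch of wiring an oscillatory activation into a small CNN is given below in PyTorch. The functional form used, f(x) = x·sin(x), is an assumption suggested by the unit's name (by analogy with GCU's x·cos(x)); consult the paper for the authors' exact definition.

```python
# Custom oscillatory activation plugged into a small CNN.
import torch
import torch.nn as nn

class ASU(nn.Module):
    def forward(self, x):
        return x * torch.sin(x)      # oscillatory, non-monotonic (assumed form)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), ASU(),
    nn.Conv2d(16, 32, 3, padding=1), ASU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),               # 10 classes, as in CIFAR-10
)

x = torch.randn(4, 3, 32, 32)        # a CIFAR-10-sized batch
print(model(x).shape)                # torch.Size([4, 10])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the paper
```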

Keywords: amplifying sine unit, activation function, convolutional neural networks, oscillatory activation, image classification, CIFAR-10

Procedia PDF Downloads 115