Search results for: cognitive radio network
4088 Groundwater Potential Delineation Using Geodetector Based Convolutional Neural Network in the Gunabay Watershed of Ethiopia
Authors: Asnakew Mulualem Tegegne, Tarun Kumar Lohani, Abunu Atlabachew Eshete
Abstract:
Groundwater potential delineation is essential for efficient water resource utilization and long-term development. The scarcity of potable and irrigation water has become a critical issue due to natural and anthropogenic activities in meeting the demands of human survival and productivity. Under these constraints, groundwater resources are now being used extensively in Ethiopia. Therefore, an innovative convolutional neural network (CNN) is successfully applied in the Gunabay watershed to delineate groundwater potential based on the selected major influencing factors. Groundwater recharge, lithology, drainage density, lineament density, transmissivity, and geomorphology were selected as the major influencing factors for the groundwater potential assessment of the study area. Of the total 128 samples, 70% were selected for training and 30% were used for testing. The spatial distribution of groundwater potential has been classified into five groups: very low (10.72%), low (25.67%), moderate (31.62%), high (19.93%), and very high (12.06%). The area receives high rainfall but has a very low amount of recharge due to a lack of proper soil and water conservation structures. The major outcome of the study showed that moderate and low potential are dominant. Geodetector results revealed that the magnitude of influence on groundwater potential is ranked as transmissivity (0.48), recharge (0.26), lineament density (0.26), lithology (0.13), drainage density (0.12), and geomorphology (0.06). The model results showed that, using a convolutional neural network (CNN), groundwater potential can be delineated with high predictive capability and accuracy. AUC-based validation of the CNN yielded accuracies of 81.58% and 86.84% for the training and testing sets, respectively. Based on the findings, the local government can receive technical assistance for groundwater exploration and sustainable water resource development in the Gunabay watershed. Finally, the use of a geodetector-based deep learning algorithm can provide a new platform for industrial sectors, groundwater experts, scholars, and decision-makers.
Keywords: CNN, geodetector, groundwater influencing factors, groundwater potential, Gunabay watershed
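For readers who want to see the shape of such a workflow, the sketch below shows a 70/30 train/test split and an AUC check for a small CNN over stacked factor patches; the network architecture, patch size, and data are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' model): a small CNN over patches cut from
# the six influencing-factor rasters, with the 70/30 split described above.
# Patch size, layer sizes and the synthetic data are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X = np.random.rand(128, 6, 16, 16).astype("float32")  # 128 samples x 6 factors x 16x16 patch
y = np.random.randint(0, 2, 128)                       # 1 = groundwater point, 0 = non-groundwater

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(50):  # short training loop on the 70% training split
    opt.zero_grad()
    logits = model(torch.from_numpy(X_tr)).squeeze(1)
    loss = loss_fn(logits, torch.from_numpy(y_tr).float())
    loss.backward()
    opt.step()

with torch.no_grad():  # AUC validation on the 30% held-out split
    probs = torch.sigmoid(model(torch.from_numpy(X_te)).squeeze(1)).numpy()
print("test AUC:", roc_auc_score(y_te, probs))
```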
Procedia PDF Downloads 21
4087 The Use of Network Tool for Brain Signal Data Analysis: A Case Study with Blind and Sighted Individuals
Authors: Cleiton Pons Ferreira, Diana Francisca Adamatti
Abstract:
Advancements in computer technology have made it possible to obtain information for research in biology and neuroscience. In order to transform the data from these surveys, networks have long been used to represent important biological processes, and the use of these tools has shifted from purely illustrative and didactic to more analytic, even including interaction analysis and hypothesis formulation. Many studies have involved this application, but not directly for the interpretation of data obtained from brain functions, calling for new perspectives of development in neuroinformatics using existing models of tools already disseminated in bioinformatics. This study includes an analysis of neurological data through electroencephalogram (EEG) signals, using Cytoscape, an open-source software tool for visualizing complex networks in biological databases. The data were obtained from a comparative case study developed in research at the University of Rio Grande (FURG), using the EEG signals from a Brain-Computer Interface (BCI) with 32 electrodes, recorded from a blind and a sighted individual during the execution of an activity that stimulated spatial ability. This study intends to present results that lead to better ways to use and adapt techniques that support the treatment of brain signal data, in order to improve understanding and learning in neuroscience.
Keywords: neuroinformatics, bioinformatics, network tools, brain mapping
Procedia PDF Downloads 182
4086 Machine Learning Based Smart Beehive Monitoring System Without Internet
Authors: Esra Ece Var
Abstract:
Beekeeping plays an essential role in terms of both agricultural yields and the agricultural economy; bees produce honey, wax, royal jelly, apitoxin, pollen, and propolis. Nowadays, these natural products have become increasingly suitable and preferable for nutrition, food supplements, medicine, and industry. However, to produce organic honey, the majority of apiaries are located in remote or distant rural areas where utilities such as electricity and Internet access are not available. Additionally, due to colony failures, world honey production decreases year by year despite the increase in the number of beehives. The objective of this paper is to develop a smart beehive monitoring system for apiaries, including those that do not have access to the Internet. In this context, the temperature and humidity inside the beehive and the ambient temperature were measured with RFID sensors. The control center, where all sensor data are sent and stored, has a GSM module used to warn the beekeeper via SMS when an anomaly is detected. Simultaneously, using the collected data, an unsupervised machine learning algorithm is used for detecting anomalies and calibrating the warning system. The results show that the smart beehive monitoring system can detect fatal anomalies up to 4 weeks prior to colony loss.
Keywords: beekeeping, smart systems, machine learning, anomaly detection, apiculture
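A minimal sketch of the kind of unsupervised anomaly detection described above is given below, using an IsolationForest over hive temperature and humidity readings; the specific algorithm, thresholds and data are assumptions, since the paper does not publish its code.

```python
# Illustrative sketch only: unsupervised anomaly detection on hive readings
# (inner temperature, inner humidity, ambient temperature), as a stand-in for
# the unpublished algorithm that calibrates the SMS warning system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal([35.0, 60.0, 22.0], [1.0, 5.0, 6.0], size=(500, 3))   # healthy colony readings
faulty = rng.normal([25.0, 85.0, 22.0], [2.0, 5.0, 6.0], size=(10, 3))    # hypothetical brood-cooling anomaly
readings = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
flags = detector.predict(readings)          # -1 marks an anomaly, +1 normal

for i in np.where(flags == -1)[0]:
    print(f"reading {i}: {readings[i].round(1)} -> send SMS alert via GSM module")
```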
Procedia PDF Downloads 239
4085 Transportation Mode Classification Using GPS Coordinates and Recurrent Neural Networks
Authors: Taylor Kolody, Farkhund Iqbal, Rabia Batool, Benjamin Fung, Mohammed Hussaeni, Saiqa Aleem
Abstract:
The rising threat of climate change has led to an increase in public awareness of and care about our collective and individual environmental impact. A key component of this impact is our use of cars and other polluting forms of transportation, but it is often difficult for an individual to know how severe this impact is. While there are applications that offer this feedback, they require manual entry of the transportation mode used for a given trip, which can be burdensome. In order to alleviate this shortcoming, data from the 2016 TRIPlab datasets have been used to train a variety of machine learning models to automatically recognize the mode of transportation. An accuracy of 89.6% is achieved using a single deep neural network model with a Gated Recurrent Unit (GRU) architecture applied directly to trip data points over 4 primary classes, namely walking, public transit, car, and bike. These results are comparable in accuracy to results achieved by others using ensemble methods and require far less computation when classifying new trips. The lack of trip context data, e.g., bus routes, bike paths, etc., and the need for only a single set of weights make this an appropriate methodology for applications hoping to reach a broad demographic and provide responsive feedback.
Keywords: classification, gated recurrent unit, recurrent neural network, transportation
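A hedged sketch of the general approach, not the authors' code, is shown below: a single GRU over sequences of GPS-derived features classifying the four modes; the feature choice and layer sizes are assumptions.

```python
# Sketch of a single-GRU trip classifier over 4 modes. Feature choice
# (e.g., speed, acceleration, heading change) and sizes are assumptions.
import torch
import torch.nn as nn

MODES = ["walking", "public transit", "car", "bike"]

class TripClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=64, n_classes=4):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, trip_length, n_features)
        _, h_n = self.gru(x)           # h_n: (1, batch, hidden), last hidden state
        return self.head(h_n[-1])      # logits over the 4 modes

model = TripClassifier()
trips = torch.randn(8, 120, 3)         # 8 trips, 120 GPS points each, 3 features
pred = model(trips).argmax(dim=1)
print([MODES[i] for i in pred.tolist()])
```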
Procedia PDF Downloads 137
4084 Analysis of Cardiovascular Diseases Using Artificial Neural Network
Authors: Jyotismita Talukdar
Abstract:
In this paper, a study has been made on the possibility and accuracy of early prediction of several heart diseases using an Artificial Neural Network (ANN). The study has been made in both a noise-free environment and a noisy environment. The data collected for this analysis are from five hospitals. Data from around 1,500 heart patients have been collected and studied. The data were analysed and the results compared with the doctors' diagnoses. It is found that, in the noise-free environment, the accuracy varies from 74% to 92%, and in the noisy environment (2 dB), the accuracy varies from 62% to 82%. In the present study, the four basic attributes considered are Blood Pressure (BP), Fasting Blood Sugar (FBS), Thalach (THAL) and Cholesterol (CHOL). It has been found that the highest accuracy (93%) has been achieved in the case of PPI (Post-Permanent-Pacemaker Implantation), around 79% in the case of CAD (Coronary Artery Disease), 87% in DCM (Dilated Cardiomyopathy), 89% in the case of RHD & MS (Rheumatic Heart Disease with Mitral Stenosis), 75% in the case of RBBB + LAFB (Right Bundle Branch Block + Left Anterior Fascicular Block), 72% for CHB (Complete Heart Block), etc. The lowest accuracy has been obtained in the case of ICMP (Ischemic Cardiomyopathy), about 38%, and AF (Atrial Fibrillation), about 60 to 62%.
Keywords: coronary heart disease, chronic stable angina, sick sinus syndrome, cardiovascular disease, cholesterol, Thalach
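A minimal sketch of such a feed-forward ANN over the four attributes named above is given below; the data are synthetic and the noise level is only an illustration, since the paper's 1,500-patient dataset is not public.

```python
# Sketch, not the study's network: a small feed-forward ANN over BP, FBS,
# Thalach and Cholesterol, with an illustrative noisy-input check.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.rand(1500, 4) * [100, 200, 150, 300] + [80, 70, 70, 100]  # BP, FBS, THAL, CHOL (synthetic)
y = np.random.randint(0, 2, 1500)                                       # 1 = disease present

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0))
ann.fit(X_tr, y_tr)
print("held-out accuracy:", ann.score(X_te, y_te))

# Noise robustness check (illustrative noise level; the paper compares noise-free vs. 2 dB noise)
X_noisy = X_te + np.random.normal(0, X_te.std(axis=0) * 0.2, X_te.shape)
print("accuracy on noisy inputs:", ann.score(X_noisy, y_te))
```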
Procedia PDF Downloads 174
4083 Students’ Level of Knowledge Construction and Pattern of Social Interaction in an Online Forum
Authors: K. Durairaj, I. N. Umar
Abstract:
The asynchronous discussion forum is one of the most widely used activities in learning management system environments. Online forums allow participants to interact and construct knowledge, and can be used to complement face-to-face sessions in blended learning courses. However, the extent to which students perceive the benefits or advantages of the forum remains to be seen. Through content and social network analyses, instructors are able to gauge the students’ engagement and level of knowledge construction. Thus, this study aims to analyze the students’ level of knowledge construction and their participation level in online discussion. It also attempts to investigate the relationship between the level of knowledge construction and their social interaction patterns. The sample involves 23 students undertaking a master’s course in one public university in Malaysia. The asynchronous discussion forum was conducted for three weeks as part of the course requirements. The findings indicate that the level of knowledge construction is quite low. Also, the density value of 0.11 indicates that the overall communication among the participants in the forum is low. This study reveals strong and significant correlations between SNA measures (in-degree centrality, out-degree centrality) and the level of knowledge construction. Thus, allocating these active students to different groups helps interactive discussion take place. Finally, based upon the findings, some recommendations to increase students’ level of knowledge construction, as well as directions for further research, are proposed.
Keywords: asynchronous discussion forums, content analysis, knowledge construction, social network analysis
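The SNA measures cited above (network density, in-degree and out-degree centrality) can be computed as in the short sketch below; the reply graph shown is a toy stand-in for the 23-student forum network, which is not published.

```python
# Sketch of the SNA measures mentioned above using networkx, on a toy graph.
import networkx as nx

# edge u -> v means "student u replied to a post by student v"
G = nx.DiGraph([("s1", "s2"), ("s1", "s3"), ("s2", "s3"), ("s4", "s1"), ("s3", "s1")])

print("density:", round(nx.density(G), 2))               # the study reports a density of 0.11
print("in-degree centrality:", nx.in_degree_centrality(G))
print("out-degree centrality:", nx.out_degree_centrality(G))
```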
Procedia PDF Downloads 373
4082 Dynamic Control Theory: A Behavioral Modeling Approach to Demand Forecasting amongst Office Workers Engaged in a Competition on Energy Shifting
Authors: Akaash Tawade, Manan Khattar, Lucas Spangher, Costas J. Spanos
Abstract:
Many grids are increasing the share of renewable energy in their generation mix, which is causing the energy generation to become less controllable. Buildings, which consume nearly 33% of all energy, are a key target for demand response: i.e., mechanisms for demand to meet supply. Understanding the behavior of office workers is a start towards developing demand response for one sector of building technology. The literature notes that dynamic computational modeling can be predictive of individual action, especially given that occupant behavior is traditionally abstracted from demand forecasting. Recent work founded on Social Cognitive Theory (SCT) has provided a promising conceptual basis for modeling behavior, personal states, and environment using control theoretic principles. Here, an adapted linear dynamical system of latent states and exogenous inputs is proposed to simulate energy demand amongst office workers engaged in a social energy shifting game. The energy shifting competition is implemented in an office in Singapore that is connected to a minigrid of buildings with a consistent 'price signal.' This signal is translated into a 'points signal' by a reinforcement learning (RL) algorithm to influence participant energy use. The dynamic model functions at the intersection of the points signals, baseline energy consumption trends, and SCT behavioral inputs to simulate future outcomes. This study endeavors to analyze how the dynamic model trains an RL agent and, subsequently, the degree of accuracy to which load deferability can be simulated. The results offer a generalizable behavioral model for energy competitions that provides the framework for further research on transfer learning for RL and, more broadly, transactive control.
Keywords: energy demand forecasting, social cognitive behavioral modeling, social game, transfer learning
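A toy sketch of the adapted linear dynamical system idea (latent states updated by exogenous inputs such as the points signal) is given below; the matrices and signals are placeholders, not the study's fitted model.

```python
# Illustrative sketch of the modeling idea only: a discrete linear dynamical
# system with latent behavioral states x_t and exogenous inputs u_t (points
# signal, baseline consumption). All matrices and signals are placeholders.
import numpy as np

A = np.array([[0.9, 0.05], [0.0, 0.8]])   # latent-state dynamics (e.g., habit, engagement)
B = np.array([[0.02, 0.0], [0.1, 0.01]])  # response to exogenous inputs
C = np.array([[1.0, -0.5]])               # map latent states to predicted energy demand

x = np.array([1.0, 0.0])                  # initial latent state
for t in range(24):                       # one simulated day, hourly steps
    points_signal = np.sin(2 * np.pi * t / 24)      # stand-in for the RL-generated points signal
    u = np.array([points_signal, 1.0])              # exogenous inputs: points signal + constant baseline
    x = A @ x + B @ u                               # state update
    demand = float(C @ x)                           # simulated demand that could be fed back to the RL agent
    print(f"hour {t:02d}: demand = {demand:.3f}")
```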
Procedia PDF Downloads 107
4081 Modeling of Processes Running in Radical Clusters Formed by Ionizing Radiation with the Help of Continuous Petri Nets and Oxygen Effect
Authors: J. Barilla, M. Lokajíček, H. Pisaková, P. Simr
Abstract:
The final biological effect of ionizing particles may be strongly influenced by some chemical substances present in cells, mainly in the case of low-LET radiation. The influence of oxygen may be particularly important because oxygen is always present in living cells. The corresponding processes run mainly in the chemical stage of the radiobiological mechanism. The radical clusters formed by the densely ionizing ends of primary or secondary charged particles are mainly responsible for the final biological effect. The damage effect then depends on the radical concentration at the time when the cluster meets a DNA molecule. It may be strongly influenced by oxygen present in a cell, as oxygen may act in different directions: at small concentrations the interaction with hydrogen radicals prevails, while at higher concentrations additional efficient oxygen radicals may be formed. The basic radical concentration in individual clusters diminishes, which is influenced by two parallel processes: chemical reactions and diffusion of the corresponding clusters. This simultaneous evolution may be modeled and analyzed well with the help of Continuous Petri nets. The influence of other substances present in cells during irradiation may be studied, too. Some results concerning the impact of oxygen content will be presented.
Keywords: radiobiological mechanism, chemical phase, DSB formation, Petri nets
Procedia PDF Downloads 312
4080 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimization of these parameters in order to obtain the closest results in comparison with actual measured data in a real DWDS would result in both cost reduction and reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were considered as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 ºC, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 ºC, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
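The RSM step can be illustrated as below: fit a quadratic response surface over (pH, temperature, initial mono-chloramine) and minimize the predicted RMSE within the design bounds. The design points, RMSE values and library choices are illustrative, not the Design Expert workflow used in the study.

```python
# Conceptual sketch of the RSM step: fit a quadratic surface RMSE = f(pH, T, C0)
# to a few evaluated design points, then minimize it inside the design bounds.
# All numbers are placeholders.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

design = np.array([          # pH, temperature (degC), initial mono-chloramine (mg/L)
    [7.2, 15.0, 3.0], [7.2, 15.0, 5.0], [7.2, 35.0, 3.0], [7.2, 35.0, 5.0],
    [8.2, 15.0, 3.0], [8.2, 15.0, 5.0], [8.2, 35.0, 3.0], [8.2, 35.0, 5.0],
    [7.7, 25.0, 4.0], [7.45, 20.0, 3.5], [7.95, 30.0, 4.5],
])
rmse = np.array([0.30, 0.27, 0.24, 0.22, 0.29, 0.26, 0.23, 0.21, 0.20, 0.23, 0.21])  # WQNM error per point

surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(design, rmse)

bounds = [(7.2, 8.2), (15.0, 35.0), (3.0, 5.0)]          # stay inside the design space (no extrapolation)
res = minimize(lambda x: surface.predict(x.reshape(1, -1))[0], x0=[7.7, 25.0, 4.0], bounds=bounds)
print("optimum (pH, T, C0):", res.x.round(2), "predicted RMSE:", round(float(res.fun), 3))
```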
Procedia PDF Downloads 225
4079 Effectiveness of Using Multiple Non-pharmacological Interventions to Prevent Delirium in the Hospitalized Elderly
Authors: Yi Shan Cheng, Ya Hui Yeh, Hsiao Wen Hsu
Abstract:
Delirium is an acute state of confusion, which is mainly the result of the interaction of many factors, including age > 65 years, comorbidity, cognitive function and visual/auditory impairment, dehydration, pain, sleep disorder, pipeline retention, general anesthesia and major surgery, etc. Research shows that the prevalence of delirium in hospitalized elderly patients is over 50%. If it does not improve in time, it may cause cognitive decline or impairment, which not only prolongs the length of hospital stay but also increases mortality. Some studies have shown that multiple non-pharmacological interventions, including reorientation, early mobility, promoting sleep, and nutritional support (including water intake), are the most effective and common strategies and can improve or prevent delirium in the hospitalized elderly. In Taiwan, only one study has compared the delirium incidence of older patients who received orthopedic surgery between multiple non-pharmacological interventions and general routine care. Therefore, the purpose of this study is to address the prevention or improvement of delirium incidence density in elderly medical inpatients, provide clinical nurses with a reference for clinical implementation, and develop follow-up related research. This study is a quasi-experimental design using purposive sampling. Samples are from two wards, the geriatric ward and the general medicine ward, at a medical center in central Taiwan. The sample size is estimated at at least 100, and the data will be collected through a self-administered structured questionnaire, including demographic and professional evaluation items. Case recruitment began on 5/13/2023. The research results will be analyzed with SPSS for Windows 22.0 software, including descriptive statistics and inferential statistics: logistic regression, Generalized Estimating Equations (GEE), and multivariate analysis of variance (MANOVA).
Keywords: multiple nonpharmacological interventions, hospitalized elderly, delirium incidence, delirium
Procedia PDF Downloads 78
4078 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6
Authors: M. Moslehpour, S. Khorsandi
Abstract:
Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Despite the functionality of NDP, it is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed for automatic protection of the auto-configuration process; it secures the neighbor discovery and address resolution process. To defend against threats to NDP’s integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Despite the advantages of SEND, its disadvantages, such as the computational cost of the CGA algorithm and the sequential nature of the CGA generation algorithm, are considerable. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time between the self-computing and distributed-computing processes. We focus on the impact of malicious nodes on the CGA generation time in the network. According to the results, although malicious nodes participate in the generation process, the CGA generation time is less than when it is computed by a single node. With a Trust Management System, detecting and isolating malicious nodes is easier.
Keywords: NDP, IPsec, SEND, CGA, modifier, malicious node, self-computing, distributed-computing
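A simplified sketch of distributing the brute-force part of CGA generation (the Hash2 modifier search of RFC 3972) across workers is given below; it is not a compliant SEND/CGA implementation, and the hash construction is deliberately reduced to "find a modifier whose SHA-1 digest starts with 16*Sec zero bits."

```python
# Illustrative only: parallel search for a valid CGA modifier, with each
# worker walking a disjoint arithmetic progression of modifier values.
import hashlib
from multiprocessing import Pool

PUBLIC_KEY = b"example-rsa-public-key-bytes"   # placeholder
SEC = 1                                        # Sec parameter: require 16*Sec leading zero bits

def search(args):
    start, step = args
    modifier = start
    while True:
        digest = hashlib.sha1(modifier.to_bytes(16, "big") + PUBLIC_KEY).digest()
        if int.from_bytes(digest, "big") >> (160 - 16 * SEC) == 0:
            return modifier
        modifier += step

if __name__ == "__main__":
    workers = 4
    with Pool(workers) as pool:
        # the first worker to find a valid modifier wins; the pool is then torn down
        modifier = next(pool.imap_unordered(search, [(i, workers) for i in range(workers)]))
        print("valid modifier found:", hex(modifier))
```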
Procedia PDF Downloads 278
4077 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors
Authors: Katawut Kaewbanjong
Abstract:
We analyzed a large volume of data and identified the factors that significantly affect user satisfaction in software projects. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors from a group of 71 pre-defined factors across 191 software projects in ISBSG Release 12. The eight prediction models used for testing the predictive potential of these factors were the neural network, k-NN, Naïve Bayes, random forest, decision tree, gradient boosted tree, linear regression and logistic regression prediction models. Fifteen pre-defined factors were truly significant in predicting user satisfaction, and they provided 82.71% prediction accuracy when used with a neural network prediction model. These factors were client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how methodology was acquired, development techniques, decision-making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.
Keywords: prediction model, statistical analysis, software project, user satisfaction factor
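The two-step workflow described above (significance screening with logistic regression, then a neural network on the retained factors) might look like the sketch below; the data are synthetic, because ISBSG Release 12 is a licensed dataset.

```python
# Hedged sketch of the screen-then-predict workflow, on synthetic data.
import numpy as np
import statsmodels.api as sm
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(191, 20))                      # 191 projects, 20 candidate factors
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(size=191) > 0).astype(int)   # satisfied / not satisfied

pvalues = sm.Logit(y, sm.add_constant(X)).fit(disp=0).pvalues[1:]      # skip the intercept
significant = np.where(pvalues < 0.05)[0]
print("significant factor indices:", significant)

nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
acc = cross_val_score(nn, X[:, significant], y, cv=5).mean()
print("cross-validated accuracy on significant factors:", round(acc, 3))
```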
Procedia PDF Downloads 124
4076 A Game-Based Methodology to Discriminate Executive Function – a Pilot Study With Institutionalized Elderly People
Authors: Marlene Rosa, Susana Lopes
Abstract:
There are few studies that explore the potential of board games as a performance measure, despite the fact that they can be an interesting strategy in the context of frail populations. In fact, board games are immersive strategies that can reduce the pressure of being evaluated. This study aimed to test the ability of game-based strategies to assess executive function in the elderly population. Sixteen older participants were included: 10 with affected executive functions (G1 – 85.30±6.00 yrs old; 10 male) and 6 with executive functions showing no clinically important changes (G2 – 76.30±5.19 yrs old; 6 male). Executive function was assessed using the Frontal Assessment Battery (FAB), which is a quickly applicable cognitive screening test (score < 12 means impairment). The board game used in this study was the TATI Hand Game, designed specifically for training rhythmic coordination of the upper limbs with multiple cognitive stimuli. This game features 1 table grid, 1 set of Single Game cards (to play with one hand), Double Game cards (to play simultaneously with two hands), 1 die to plan the Single Game mode, cards to plan the Double Game mode, 1 bell, and 2 cups. Each participant played 3 Single Game cards, and the following data were collected: (i) variability in time during board game challenges (SD); (ii) number of errors; (iii) execution time (sec). G1 demonstrated high variability in execution time during board game challenges (G1 – 13.0s vs G2 – 0.5s), a higher number of errors (1.40 vs 0.67), and a longer execution time (607.80s vs 281.83s). These results demonstrate the potential of implementing board games as a functional assessment strategy in geriatric care. Future studies might include larger samples and statistical methodologies to find cut-off values for impairment in executive functions during performance in the TATI game.
Keywords: board game, aging, executive function, evaluation
Procedia PDF Downloads 142
4075 Scrum Challenges and Mitigation Practices in Global Software Development of an Integrated Learning Environment: Case Study of Science, Technology, Innovation, Mathematics, Engineering for the Young
Authors: Evgeniia Surkova, Manal Assaad, Hleb Makeyeu, Juho Makio
Abstract:
The main objective of the STIMEY (Science, Technology, Innovation, Mathematics, Engineering for the Young) project is the delivery of a hybrid learning environment that combines multi-level components such as social media concepts, robotic artefacts, and radio, among others. It is based on a well-researched pedagogical framework to attract European youths to STEM (science, technology, engineering, and mathematics) education and careers. To develop and integrate these various components, STIMEY is executed in iterative research cycles leading to progressive improvements. Scrum was the development methodology of choice in the project, as studies indicated its benefits as an agile methodology in global software development, especially of e-learning and integrated learning projects. This paper describes the project partners’ experience with the Scrum framework, discussing the challenges faced in its implementation and the mitigation practices employed. The authors conclude by exploring user experience tools and principles for future research, as a novel direction in supporting the Scrum development team.
Keywords: e-learning, global software development, scrum, STEM education
Procedia PDF Downloads 179
4074 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the Artificial Intelligence explosion. AI technologies have matured to the point that most well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At the present time, deep learning frameworks have been widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes and algorithms, but the performance of different frameworks might differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify the overheads that could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
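A simple throughput probe in the spirit of this evaluation is sketched below; it times ResNet-50 training steps on synthetic data with single-node data parallelism and is not the paper's benchmark (the `weights=None` argument assumes a recent torchvision release, and batch/step counts should be reduced on CPU).

```python
# Illustrative throughput probe: time ResNet-50 training steps on synthetic data,
# using all visible GPUs via DataParallel when available.
import time
import torch
import torch.nn as nn
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet50(weights=None).to(device)      # assumes torchvision >= 0.13
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)             # single-node multi-GPU data parallelism

opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
batch = torch.randn(16, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (16,), device=device)

steps, start = 10, time.time()
for _ in range(steps):
    opt.zero_grad()
    loss_fn(model(batch), labels).backward()
    opt.step()
if device == "cuda":
    torch.cuda.synchronize()
print(f"throughput ~ {steps * batch.size(0) / (time.time() - start):.1f} images/sec")
```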
Procedia PDF Downloads 211
4073 The Effect of Research Unit Clique-Diversity and Power Structure on Performance and Originality
Authors: Yue Yang, Qiang Wu, Xingyu Gao
Abstract:
"Organized research units" have always been an important part of academia. According to the type of organization, there are public research units, university research units, and corporate research units. Existing research has explored the research unit in some depth from several perspectives. However, there is a research gap on the closer interaction between the three from a network perspective and the impact of this interaction on their performance as well as originality. Cliques are a special kind of structure under the concept of cohesive subgroups in the field of social networks, representing particularly tightly knit teams in a network. This study develops the concepts of the diversity of clique types and the diversity of clique geography based on cliques, starting from the diversity of collaborative activities characterized by them. Taking research units as subjects and assigning values to their power in cliques based on occupational age, we explore the impact of clique diversity and clique power on their performance as well as originality and the moderating role of clique relationship strength and structural holes in them. By collecting 9094 articles published in the field of quantum communication at WoSCC over the 15 years 2007-2021, we processed them to construct annual collaborative networks between a total of 533 research units and measured the network characteristic variables using Ucinet. It was found that the type and geographic diversity of cliques promoted the performance and originality of the research units, and the strength of clique relationships positively moderated the positive effect of the diversity of clique types on performance and negatively affected the promotional relationship between the geographic diversity of cliques and performance. It also negatively affected the positive effects of clique-type diversity and clique-geography diversity on originality. Structural holes positively moderated the facilitating effect of both types of factional diversity on performance and originality. Clique power promoted the performance of the research unit, but unfavorably affected its performance on novelty. Faction relationship strength facilitated the relationship between faction rights and performance and showed negative insignificance for clique power and originality. Structural holes positively moderated the effect of clique power on performance and originality.Keywords: research unit, social networks, clique structure, clique power, diversity
Procedia PDF Downloads 59
4072 Systematic Study of Mutually Inclusive Influence of Temperature and Substitution on the Coordination Geometry of Co(II) in a Series of Coordination Polymer and Their Properties
Authors: Manasi Roy, Raju Mondal
Abstract:
During the last two decades, the synthesis and design of MOFs, or novel coordination polymers (CPs), has flourished as an emerging area of research due to their role as functional materials. Accordingly, ten new cobalt-based MOFs have been synthesized using a simple bispyrazole ligand, 4,4′-methylene-bispyrazole (H2MBP), and isophthalic acid (H2IPA) and its four 5-substituted derivatives R-H2IPA (R = COOH, OH, tBu, NH2). The major aim of this study was to validate the mutual influence of temperature and substitution on the final structural self-assembly. Five different isophthalic acid derivatives were used to study the influence of substituents, while each reaction was carried out at two different temperatures to assess the temperature effect. A clear correlation was observed between the reaction temperature and the coordination number of the cobalt atoms, which consequently changes the self-assembly pattern. Another finding is that the periodic change in coordination number brought about systematic changes in the structural network via secondary building unit selectivity. With the presence of a tunable cavity inside the network and unsaturated metal centers, the MOFs show highly encouraging photocatalytic degradation of a toxic dye, with a potential application in wastewater purification. Another fascinating aspect of this work is the construction of magnetic coordination polymers exhibiting the not-so-common MCE behavior of cobalt-based MOFs.
Keywords: MOFs, temperature effect, MCE, dye degradation
Procedia PDF Downloads 136
4071 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms
Authors: Hai L. Tran
Abstract:
When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the use of numbers and narratives in the newsroom is mostly a matter of trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system requires the processing of logical evidence through effortful analytical cognition, which is affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts impact audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects, and the conditions under which the use of anecdotal evidence and/or statistical evidence enhances or undermines information processing. In addition to theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.
Keywords: data, narrative, number, anecdote, storytelling, news
Procedia PDF Downloads 79
4070 Production, Quality Control, and Biodistribution Studies of 141ce-Edtmp as a Potential Bone Pain Palliation Agent
Authors: Fatemeh Soltani, Simindokht Shirvani Arani, Ali Bahrami Samani, Mahdi Sadeghi, Kamal Yavari
Abstract:
Cerium-141 [T1/2 = 32.501 days, Eβ(max) = 0.580 (29.8%) and 0.435 (70.2%) MeV, Eγ = 145.44 (48.2%) keV] possesses radionuclidic properties suitable for use in the palliative therapy of bone metastases. 141Ce also has a gamma energy of 145.44 keV, which resembles that of 99mTc; therefore, the energy window can be adjusted to the Tc-99m energy for imaging studies. 141Ce can be produced through a relatively easy route that involves thermal neutron bombardment of natural CeO2 in medium-flux research reactors (4–5×10¹³ neutrons/cm²·s). The requirement for an enriched target does not arise. Ethylenediamine tetramethylene phosphonic acid (EDTMP) was synthesized and radiolabeled with 141Ce. Complexation parameters were optimized to achieve maximum yields (>99%). The radiochemical purity of 141Ce-EDTMP was evaluated by radio-thin layer chromatography. The stability of the prepared formulation was monitored for one week at room temperature, and the results showed that the preparation was stable during this period (>99%). Biodistribution studies of the complexes carried out in wild-type rats exhibited significant bone uptake with rapid clearance from blood. The properties of the produced 141Ce-EDTMP suggest it as a new, efficient bone pain palliation agent for overcoming metastatic bone pain.
Keywords: bone pain palliative, cerium-141, EDTMP, radiopharmaceutical
Procedia PDF Downloads 489
4069 Neuropsychological Deficits in Drug-Resistant Epilepsy
Authors: Timea Harmath-Tánczos
Abstract:
Drug-resistant epilepsy (DRE) is defined as the persistence of seizures despite at least two syndrome-adapted antiseizure drugs (ASD) used at efficacious daily doses. About a third of patients with epilepsy suffer from drug resistance. Cognitive assessment has a crucial role in the diagnosis and clinical management of epilepsy. Previous studies have addressed the clinical targets and indications for measuring neuropsychological functions; to the best of our knowledge, however, no studies have examined them in a Hungarian therapy-resistant population. To fill this gap, we applied the Hungarian diagnostic protocol to patients between 18 and 65 years of age. This study aimed to describe and analyze neuropsychological functions in patients with drug-resistant epilepsy and to identify factors associated with neuropsychological deficits. We performed a prospective case-control study comparing neuropsychological performance in 50 adult patients and 50 healthy individuals between March 2023 and July 2023. Neuropsychological functions were examined in both patients and controls using a full set of specific tests (general performance level, motor functions, attention, executive functions, verbal and visual memory, language, and visual-spatial functions). Potential risk factors for neuropsychological deficit were assessed in the patient group using a multivariate analysis. The two groups did not differ in age, sex, dominant hand, or level of education. Compared with the control group, patients with drug-resistant epilepsy showed worse performance on motor functions, visuospatial memory, sustained attention, inhibition, and verbal memory. Neuropsychological deficits can therefore be systematically detected in patients with drug-resistant epilepsy in order to provide neuropsychological therapy and improve quality of life. The analysis of the classical and complex indices of the specific neuropsychological tasks presented here can help in the investigation of normal and disrupted memory and executive functions in DRE.
Keywords: drug-resistant epilepsy, Hungarian diagnostic protocol, memory, executive functions, cognitive neuropsychology
Procedia PDF Downloads 76
4068 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within the contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of 3 Python scripts that can all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We next compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
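The three pipeline steps (contour detection by binary thresholding, mean fluorescence per contour, transient detection) can be condensed into the hedged sketch below; the data, thresholds and the transient criterion are assumptions rather than the published pipeline.

```python
# Condensed sketch of the three steps described above, using OpenCV calls
# named in the abstract. The recording is a synthetic stand-in.
import cv2
import numpy as np

stack = np.random.randint(0, 255, (300, 512, 512), dtype=np.uint8)  # (frames, H, W)

# 1) detect cell-body contours on a grayscale mean image via binary thresholding
mean_img = stack.mean(axis=0).astype(np.uint8)
_, binary = cv2.threshold(mean_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cells = [c for c in contours if cv2.contourArea(c) > 50]             # drop non-cell specks

# 2) mean fluorescence inside each contour, per frame
traces = []
for c in cells:
    mask = np.zeros(mean_img.shape, dtype=np.uint8)
    cv2.drawContours(mask, [c], -1, 255, thickness=-1)
    traces.append([cv2.mean(frame, mask=mask)[0] for frame in stack])

# 3) flag transient activity as frames rising above baseline + 2 SD (assumed criterion)
for i, f in enumerate(np.array(traces)):
    events = np.where(f > f.mean() + 2 * f.std())[0]
    print(f"cell {i}: {len(events)} candidate transients")
```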
Procedia PDF Downloads 82
4067 On-Chip Sensor Ellipse Distribution Method and Equivalent Mapping Technique for Real-Time Hardware Trojan Detection and Location
Authors: Longfei Wang, Selçuk Köse
Abstract:
Hardware Trojans have become a great concern as integrated circuit (IC) technology advances and not all manufacturing steps of an IC are accomplished within one company. Real-time hardware Trojan detection has been proven to be a feasible way to detect randomly activated Trojans that cannot be detected at the testing stage. On-chip sensors serve as a great candidate for implementing real-time hardware Trojan detection; however, the optimization of on-chip sensors has not been thoroughly investigated, and the location of the Trojan has not been carefully explored. An on-chip sensor ellipse distribution method and an equivalent mapping technique are proposed in this paper, based on the characteristics of the on-chip power delivery network, to address the optimization and distribution of on-chip sensors for real-time hardware Trojan detection as well as to estimate the location and current consumption of a hardware Trojan. Simulation results verify that hardware Trojan activation can be effectively detected and that the location of a hardware Trojan can be efficiently estimated with less than 5% error for a realistic power grid using our proposed methods. The proposed techniques therefore lay a solid foundation for the isolation and even deactivation of hardware Trojans through accurate location of the Trojans.
Keywords: hardware trojan, on-chip sensor, power distribution network, power/ground noise
Procedia PDF Downloads 391
4066 Climate Variability on Hydro-Energy Potential: An MCDM and Neural Network Approach
Authors: Apu Kumar Saha, Mrinmoy Majumder
Abstract:
The increase in the concentration of greenhouse gases all over the world has induced global warming, whereby the rising average temperature of the world has affected climate patterns in different regions. The increased frequency of extreme events, the earlier onset of seasons, and changes in average rainfall all support the conclusion that the normal pattern of climate is changing. Sophisticated and complex models are prepared to estimate the future state of the climate in different zones of the Earth. As hydro-energy is directly related to climatic parameters like rainfall and evaporation, such energy resources will have to withstand the onset of these climatic abnormalities. The present investigation has tried to assess the impact of climatic abnormalities upon the hydropower potential of different regions of the world. In this regard, multi-criteria decision making and a neural network are used to predict the impact of the change through an index. The results from the study show that the hydro-energy potential of the Asian region is the most vulnerable compared with other regions of the world. The model results also encourage further application of the index to analyze the impact of climate change on hydro-energy potential.
Keywords: hydro-energy potential, neural networks, multi criteria decision analysis, environmental and ecological engineering
Procedia PDF Downloads 549
4065 The Need for Embodiment Perspectives and Somatic Methods in Social Work Curriculum: Lessons Learned from a Decade of Developing a Program to Support College Students Who Exited the State Foster Care System
Authors: Yvonne A. Unrau
Abstract:
Social work education is a competency-based curriculum that relies mostly on cognitive frameworks and problem-solving models. Absent from the curriculum are knowledge and skills that draw from an embodiment perspective, especially somatic practice methods. Embodiment broadly encompasses the understanding that biological, political, historical, and social factors impact human development via changes to the nervous system. In the past 20 years, research has well established that unresolved traumatic events, especially during childhood, negatively impact long-term health and well-being. Furthermore, traumatic stress compromises cognitive processing and activates reflexive actions such as 'fight' or 'flight,' which are the focus of somatic methods. The main objective of this paper is to show how embodiment perspectives and somatic methods can enhance social work practice overall. Using an exploratory approach, the author shares a decade-long journey that involved creating an education-support program for college students who exited the state foster care system. Personal experience, program outcomes and case study narratives revealed that 'classical' social work methods were insufficient to fully address the complex needs of college students who were living with complex traumatic stressors. The paper chronicles select case study scenarios and key program development milestones over a 10-year period to show the benefit of incorporating embodiment perspectives in social work practice. The lessons reveal that there is an immediate need for social work curricula to include embodiment perspectives so that social workers may be equipped to respond competently to their many clients who live with unresolved trauma.
Keywords: social work practice, social work curriculum, embodiment, traumatic stress
Procedia PDF Downloads 123
4064 A Comparative Study on ANN, ANFIS and SVM Methods for Computing Resonant Frequency of A-Shaped Compact Microstrip Antennas
Authors: Ahmet Kayabasi, Ali Akdagli
Abstract:
In this study, three robust predictive methods, namely the artificial neural network (ANN), the adaptive neuro-fuzzy inference system (ANFIS) and the support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. Firstly, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with the help of IE3D™, which is based on the method of moments (MoM). The ANN, ANFIS and SVM models for computing the resonant frequency were then built from the simulation data. 124 simulated ACMAs were utilized for training and the remaining 20 ACMAs were used for testing the ANN, ANFIS and SVM models. The performance of the ANN, ANFIS and SVM models is compared for the training and test processes. The average percentage errors (APE) of the computed resonant frequencies in training the ANN, ANFIS and SVM were obtained as 0.457%, 0.399% and 0.600%, respectively. The constructed models were then tested, and APE values of 0.601% for the ANN, 0.744% for the ANFIS and 0.623% for the SVM were achieved. The results obtained here show that the ANN, ANFIS and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
Keywords: a-shaped compact microstrip antenna, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM)
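The APE metric used for the comparison can be written as a small helper, sketched below with placeholder frequency values rather than the paper's data.

```python
# Small helper sketch for the error metric used above: average percentage
# error (APE) between simulated and predicted resonant frequencies.
import numpy as np

def average_percentage_error(f_true, f_pred):
    """APE (%) = mean(|f_true - f_pred| / f_true) * 100."""
    f_true, f_pred = np.asarray(f_true, float), np.asarray(f_pred, float)
    return float(np.mean(np.abs(f_true - f_pred) / f_true) * 100)

f_sim = [435.0, 520.0, 610.0, 702.0]      # hypothetical MoM-simulated resonant frequencies (MHz)
f_ann = [436.8, 518.1, 612.2, 699.5]      # hypothetical ANN predictions
print("APE:", round(average_percentage_error(f_sim, f_ann), 3), "%")
```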
Procedia PDF Downloads 441
4063 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition
Authors: Tamir Michal
Abstract:
The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the petitioner's entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouin to establish permanent urban settlements, an interest which justifies giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouin to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy protective structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, arguendo, no selective discrimination was involved in the State's decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the Theory of Recognition. The article analyses the issue with the tools of the Theory of Recognition, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude to others. Consequently, even if the Court's decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation, whereas Bedouin villages were not, is a setback in the struggle to make the Bedouin citizens with equal rights in Israeli society. As the Court held, 'Beyond their protective function, the Migunit [protective structures] may make a moral and psychological contribution that should not be undervalued.' This contribution is one that the Bedouin did not receive in the Abu Afash verdict. The basic thesis is that the Court's verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence it can in fact prejudice them. Therefore, elements of recognition theory should be added in order to find the channel for cognitive dignity, thereby advancing the Bedouins' ability to perceive themselves as equal human beings in Israeli society.
Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development
Procedia PDF Downloads 350
4062 The Role of Brooding and Reflective as Subtypes of Rumination toward Psychological Distress in University of Indonesia First-Year Undergraduate Students
Authors: Hepinda Fajari Nuharini, Sugiarti A. Musabiq
Abstract:
Background: Various and continuous pressures that exceed individual resources can cause first-year undergraduate college students to experience psychological distress. Psychological distress can occur when individuals use rumination as a cognitive coping strategy. Rumination is one of the cognitive coping strategies that individuals can use to respond to psychological distress, and it causes individuals to think about the causes and consequences of events that have occurred. Rumination has two subtypes, brooding and reflective. Therefore, the purpose of this study was to determine the role of brooding and reflective as subtypes of rumination toward psychological distress in University of Indonesia first-year undergraduate students. Methods: The participants of this study were 403 University of Indonesia first-year undergraduate students aged between 18 and 21 years old. Psychological distress was measured using the Self-Reporting Questionnaire (SRQ-20), and brooding and reflective as subtypes of rumination were measured using the Ruminative Response Scale - Short Version (RRS - Short Version). Results: Binary logistic regression analyses showed that 22.8% of the variation in psychological distress could be explained by brooding and reflective as subtypes of rumination, while 77.2% of the variation in psychological distress could be explained by other factors (Nagelkerke R² = 0.228). The results of the binary logistic regression analysis also showed that the rumination subtype brooding is a significant predictor of psychological distress (b = 0.306; p < 0.05), whereas the rumination subtype reflective is not a significant predictor of psychological distress (b = 0.073; p > 0.05). Conclusion: The findings of this study showed a positive relationship between brooding and psychological distress, indicating that a higher level of brooding predicts higher psychological distress. Meanwhile, a negative relationship between reflective and psychological distress indicates that a higher level of reflective predicts lower psychological distress in University of Indonesia first-year undergraduate students. Added Value: Psychological distress among first-year undergraduate students can in turn have an impact on students' academic performance. Therefore, the results of this study can be used as a reference for taking preventive action to reduce the percentage and impact of psychological distress among first-year undergraduate students.
Keywords: brooding as subtypes of rumination, first-year undergraduate students, psychological distress, reflective as subtypes of rumination
Procedia PDF Downloads 108
4061 Digital Rehabilitation for Navigation Impairment
Authors: Milan N. A. Van Der Kuil, Anne M. A. Visser-Meily, Andrea W. M. Evers, Ineke J. M. Van Der Ham
Abstract:
Navigation ability is essential for autonomy and mobility in daily life. In patients with acquired brain injury, navigation ability is frequently impaired; in this study, we therefore tested the effectiveness of a serious gaming training protocol as a tool for cognitive rehabilitation to reduce navigation impairment. In total, 38 patients with acquired brain injury and subjective navigation complaints completed the experiment, in a partially blinded, randomized controlled trial design. An objective navigation test was used to construct a strengths-and-weaknesses profile for each patient. Subsequently, patients received personalized compensation training that matched their strengths and weaknesses by addressing an egocentric or allocentric strategy or a strategy aimed at minimizing the use of landmarks. Participants in the experimental condition received psychoeducation and a home-based rehabilitation game with a series of exercises (e.g., map reading, place finding, and turn memorization). The exercises were developed to stimulate the adoption of more beneficial strategies, according to the compensatory approach. Self-reported navigation ability (Wayfinding Questionnaire), participation level, and objective navigation performance were measured before training and 1 and 4 weeks after completing the six-week training program. Results indicate that the experimental group significantly improved in subjective navigation ability both 1 and 4 weeks after completion of the training, in comparison to the score before training and the scores of the control group. Similarly, goal attainment showed a significant increase in the first and fourth week after training. Objective navigation performance was not affected by the training. This navigation training protocol provides an effective solution to address navigation impairment after acquired brain injury, with clear improvements in the subjective performance and goal attainment of the participants. The outcomes of the training should be re-examined after implementation in a clinical setting.
Keywords: spatial navigation, cognitive rehabilitation, serious gaming, acquired brain injury
Procedia PDF Downloads 176
4060 New Methods to Acquire Grammatical Skills in A Foreign Language
Authors: Indu ray
Abstract:
In today's digital world, the internet is already flooded with information on how to master grammar in a foreign language. It is well known that one cannot master a language without grammar. Grammar is the backbone of any language. Without grammar there would be no structure to help you speak/write or listen/read. Successful communication is only possible if the form and function of linguistic utterances are firmly related to one another. Grammar has its own rules of use to formulate an easier-to-understand language. Like a tool, grammar formulates our thoughts and knowledge in a meaningful way. Every language has its own grammar. With grammar, we can quickly analyze whether there is any action in a text (present, past, future). Knowledge of grammar is an important prerequisite for mastering a foreign language. What is most important is how teachers can make grammar lessons more interesting for students and thus promote grammar skills more successfully. In this paper, we discuss a few important methods: Interactive Grammar Exercises between students, Interactive Grammar Exercises between student and teacher, the Grammar Translation Method, the Audio-Visual Method, the Deductive Method, and the Inductive Method. This paper is divided into two sections. In the first part, brief definitions and principles of these approaches are provided. Then the possibility and the case for combining these approaches are analyzed. In the last section of the paper, I present the results of a survey conducted at my university on a few methods to quickly learn grammar in a foreign language. We divided grammatical skills into six parts: 1. Grammatical Competence, 2. Speaking Skills, 3. Phonology, 4. Syntax and Semantics, 5. Rules, 6. Cognitive Function, and conducted a survey among students. From our survey results, we can observe that phonology, speaking ability, and syntax and semantics can be improved by the Inductive Method, the Audio-Visual Method, and the Grammar Translation Method, while for grammar rules and cognitive functions we should choose the IGE (teacher-student) method and the IGE (pupil-pupil) method. The study's findings revealed that the teacher's delivery methods should be a blend or fusion based on the content of the grammar.
Keywords: innovative method, grammatical skills, audio-visual, translation
Procedia PDF Downloads 77
4059 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast compared to conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires high-coherence X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less robust experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this requires a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to obtain high-quality reconstructions.
Keywords: micro-ct, neural networks, reconstruction, speckle-based x-ray phase contrast
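The speckle-tracking step, whose precision limits the spatial resolution, can be illustrated with the toy template-matching sketch below; real reference/sample images and subpixel refinement are omitted, and the window sizes are assumptions.

```python
# Toy sketch of speckle tracking: estimate the local shift of a speckle window
# between a reference image and a sample image by template matching.
import cv2
import numpy as np

ref = np.random.rand(256, 256).astype(np.float32)        # reference speckle image (sandpaper only)
sample = np.roll(ref, shift=(2, -3), axis=(0, 1))         # sample image with a known (dy, dx) shift

win = ref[100:132, 100:132]                               # 32x32 analysis window
search = sample[84:164, 84:164]                           # larger search region around the window
corr = cv2.matchTemplate(search, win, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(corr)

dx = max_loc[0] - (100 - 84)                              # displacement in x
dy = max_loc[1] - (100 - 84)                              # displacement in y
print("recovered shift (dy, dx):", (dy, dx))              # expect approximately (2, -3)
```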
Procedia PDF Downloads 257