Search results for: pervasive computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1125

705 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
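The graph construction and probabilistic sampling steps can be illustrated with a short sketch. The Python snippet below is only an illustration of the general idea, not the authors' implementation; the transaction fields, the edge rule, and the sampling rate are assumptions.

```python
# Minimal sketch (not the authors' implementation): build a transaction graph
# and draw a probabilistic node sample. Field names and the sampling rate are
# illustrative assumptions.
import random
import networkx as nx

def build_transaction_graph(transactions):
    """Each transaction becomes a node; consecutive transactions (by block and
    intra-block index) are linked by an edge to capture temporal ordering."""
    G = nx.DiGraph()
    ordered = sorted(transactions, key=lambda t: (t["block"], t["index"]))
    for t in ordered:
        G.add_node(t["hash"], block=t["block"], value=t["value"])
    for prev, curr in zip(ordered, ordered[1:]):
        G.add_edge(prev["hash"], curr["hash"])
    return G

def probabilistic_sample(G, rate=0.05, seed=42):
    """Keep each node with probability `rate`; return the induced subgraph."""
    rng = random.Random(seed)
    kept = [n for n in G.nodes if rng.random() < rate]
    return G.subgraph(kept).copy()

# Toy usage:
txs = [{"hash": f"0x{i:04x}", "block": i // 3, "index": i % 3, "value": i * 1.5}
       for i in range(30)]
G = build_transaction_graph(txs)
sample = probabilistic_sample(G, rate=0.3)
print(G.number_of_nodes(), sample.number_of_nodes())
```

The sampled subgraph would then be the input to the GCN and distributed-processing stages described in the abstract.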

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 68
704 The Triple Threat: Microplastic, Nanoplastic, and Macroplastic Pollution and Their Cumulative Impacts on the Marine Ecosystem

Authors: Tabugbo B. Ifeyinwa, Josephat O. Ogbuagu, Okeke A. Princewill, Victor C. Eze

Abstract:

The increasing amount of plastic pollution in maritime settings poses a substantial risk to ecosystem functioning and biodiversity. This comprehensive analysis combines the most recent data on the environmental effects of macroplastic, microplastic, and nanoplastic pollution within marine ecosystems. Our goal is to provide a comprehensive understanding of the cumulative impacts of plastic waste on marine life by outlining the origins, processes, and ecological repercussions associated with each size category of plastic debris. Whereas macroplastics primarily cause physical harm through entanglement and ingestion by marine fauna, microplastics and nanoplastics exert more insidious, chemically mediated effects that can cross biological barriers and compromise health at the cellular and whole-organism level. The review underlines a vital need for research that crosses disciplinary boundaries to untangle the intricate interactions between the various sizes of plastic pollution and marine animals, evaluate the long-term ecological repercussions, and identify effective measures for mitigating plastic pollution. Additionally, we urge governmental interventions and worldwide cooperation to address this pervasive environmental concern. Specifically, we identify significant knowledge gaps in the detection and effect assessment of nanoplastics. To protect marine biodiversity and preserve ecosystem services, this review highlights how urgent it is to address the broad spectrum of plastic pollution.

Keywords: macroplastic pollution, marine ecosystem, microplastic pollution, nanoplastic pollution

Procedia PDF Downloads 63
703 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurately

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable and fault-tolerant manner. It supports a wide range of deep-learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 204
702 Development of Mobile Application for Internship Program Management Using the Concept of Model View Controller (MVC) Pattern

Authors: Shutchapol Chopvitayakun

Abstract:

Over the last five years in particular, mobile devices, mobile applications and mobile users, supported by the deployment of wireless communication and cellular networks, have grown significantly in number and capability. They are being integrated with each other for multiple purposes and deployed pervasively in every business and non-business sector, such as education, medicine, travel, finance, real estate and many more. The objective of this study was to develop a mobile application for seniors, i.e., final-year undergraduate students, who enroll in the internship program at a tertiary school and practice on-site at real field sites, real organizations and real workspaces. During the internship session, all students, as interns, are required to exercise, drill and train on-site at specific locations and on specific tasks or assignments from their supervisors. Their workplaces include both private and government corporations and enterprises. The mobile application is developed as a transactional processing system that enables users to keep a daily work or practice log, monitors true working locations and follows the daily tasks of each trainee. Moreover, it provides useful guidance from each intern's advisor in case of emergency. Finally, it can summarize all transactional data and calculate each intern's cumulated hours from the field practice session.
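As a rough illustration of the model-view-controller split and the cumulated-hours calculation described above, here is a minimal, self-contained sketch; the actual system is an Android application, and the class names, fields, and summary format below are assumptions.

```python
# Minimal MVC sketch of the internship-log idea (illustrative only; the real
# system described is an Android app). Class and field names are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LogEntry:                      # Model: one daily work/practice record
    day: date
    hours: float
    location: str
    task: str

@dataclass
class InternModel:                   # Model: all transactional data for one intern
    name: str
    entries: list = field(default_factory=list)

class InternController:              # Controller: records entries, computes totals
    def __init__(self, model):
        self.model = model

    def log_day(self, day, hours, location, task):
        self.model.entries.append(LogEntry(day, hours, location, task))

    def cumulated_hours(self):
        return sum(e.hours for e in self.model.entries)

class SummaryView:                   # View: renders a summary for the advisor
    @staticmethod
    def render(model, controller):
        return (f"{model.name}: {len(model.entries)} days logged, "
                f"{controller.cumulated_hours():.1f} h total")

model = InternModel("Intern A")
ctrl = InternController(model)
ctrl.log_day(date(2016, 6, 1), 7.5, "Site office", "Data entry drill")
ctrl.log_day(date(2016, 6, 2), 8.0, "Site office", "Supervisor assignment")
print(SummaryView.render(model, ctrl))
```

The point of the split is that the logging rules and hour totals (controller) can change without touching how records are stored (model) or displayed (view).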

Keywords: internship, mobile application, Android OS, smart phone devices, mobile transactional processing system, guidance and monitoring, tertiary education, senior students, model view controller (MVC)

Procedia PDF Downloads 311
701 Estimating Heavy Metal Leakage and Environmental Damage from Cigarette Butt Disposal in Urban Areas through CBPI Evaluation

Authors: Muhammad Faisal, Zai-Jin You, Muhammad Naeem

Abstract:

Concerns about the environment, public health, and the economy are raised by the fact that the world produces around 6 trillion cigarettes annually. Cigarette butts are arguably the most pervasive form of environmental litter, and this hazardous waste must be eliminated. This study set out to estimate how much pollution leaches from cigarette butts in metropolitan areas by studying their distribution and concentration. To accomplish this goal, the cigarette butt pollution index (CBPI) was applied in 29 different areas. The locations were monitored monthly for a full calendar year, under the same survey conditions on weekends and weekdays. The total heavy metal leakage was estimated by combining the average metal leakage ratio in various climates with the average weight of cigarette butts. The findings revealed that the annual average value of the index for the investigated areas ranged from 1.38 to 10.4. According to these values, just 27.5% of the areas had a low pollution rating, while 43.5% had a major pollution status or worse. All locations' indices fell the most on weekends (31% on average), while spring and summer saw the largest increase (26% on average) compared with autumn and winter. The average amounts of heavy metals such as Cr, Cu, Cd, Zn, and Pb that leach into the environment from discarded cigarette butts were estimated at 0.25 µg/m2, 0.078 µg/m2, and 0.18 µg/m2 in commercial, residential, and park areas, respectively. Cigarette butts are among the most prevalent forms of litter in the examined areas and are the source of a wide variety of contaminants, including heavy metals. This toxic waste poses a significant risk to the city.
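The estimation step amounts to simple arithmetic. The snippet below is only an illustration of the kind of calculation described (butt density times average butt weight times average metal leakage ratio); both the formula structure and every numeric input are assumptions for demonstration, not the study's data.

```python
# Illustrative sketch only: leakage per unit area ~= butt density *
# average butt weight * average metal leakage ratio. All inputs below are
# hypothetical; they are not the study's measurements.
def heavy_metal_leakage(butts_per_m2, avg_butt_weight_g, leakage_ratio_ug_per_g):
    """Return an estimated leakage in micrograms per square metre."""
    return butts_per_m2 * avg_butt_weight_g * leakage_ratio_ug_per_g

# Hypothetical inputs for a single zone:
estimate = heavy_metal_leakage(butts_per_m2=0.5,
                               avg_butt_weight_g=0.25,
                               leakage_ratio_ug_per_g=2.0)
print(f"Estimated leakage: {estimate:.3f} ug/m2")  # 0.250 ug/m2 for these toy inputs
```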

Keywords: heavy metal, hazardous waste, waste management, litter

Procedia PDF Downloads 75
700 A Rationale to Describe Ambident Reactivity

Authors: David Ryan, Martin Breugst, Turlough Downes, Peter A. Byrne, Gerard P. McGlacken

Abstract:

An ambident nucleophile is a nucleophile that possesses two or more distinct nucleophilic sites that are linked through resonance and are effectively “in competition” for reaction with an electrophile. Examples include enolates, pyridone anions, and nitrite anions, among many others. Reactions of ambident nucleophiles and electrophiles are extremely prevalent at all levels of organic synthesis. The principle of hard and soft acids and bases (the “HSAB principle”) is most commonly cited in the explanation of selectivities in such reactions. Although this rationale is pervasive in any discussion on ambident reactivity, the HSAB principle has received considerable criticism. As a result, the principle’s supplantation has become an area of active interest in recent years. This project focuses on developing a model for rationalizing ambident reactivity. Presented here is an approach that incorporates computational calculations and experimental kinetic data to construct Gibbs energy profile diagrams. The preferred site of alkylation of nitrite anion with a range of ‘hard’ and ‘soft’ alkylating agents was established by ¹H NMR spectroscopy. Pseudo-first-order rate constants were measured directly by ¹H NMR reaction monitoring, and the corresponding second-order constants and Gibbs energies of activation were derived. These, in combination with computationally derived standard Gibbs energies of reaction, were sufficient to construct Gibbs energy wells. By representing the ambident system as a series of overlapping Gibbs energy wells, a more intuitive picture of ambident reactivity emerges. Here, previously unexplained switches in reactivity in reactions involving closely related electrophiles are elucidated.
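The kinetic conversion described above can be written out explicitly. The relations below are a standard reconstruction, assuming pseudo-first-order conditions with the nucleophile in excess, the Eyring equation with unit transmission coefficient, and the usual standard-state convention for a second-order constant; they are not the authors' exact working.

```latex
% Standard reconstruction, not the authors' exact working.
\begin{align*}
  k_{2} &= \frac{k_{\mathrm{obs}}}{[\mathrm{Nu}]}, &
  \Delta G^{\ddagger} &= R T \,\ln\!\left(\frac{k_{\mathrm{B}} T}{h\, k_{2}}\right),
\end{align*}
```

where $k_{\mathrm{obs}}$ is the measured pseudo-first-order rate constant, $[\mathrm{Nu}]$ the excess nucleophile concentration, and $k_{\mathrm{B}}$, $h$, $R$ and $T$ have their usual meanings. Combined with computed standard Gibbs energies of reaction, these quantities fix the depths and barriers of the overlapping Gibbs energy wells.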

Keywords: ambident, Gibbs, nucleophile, rates

Procedia PDF Downloads 78
699 Porphyry Cu-Mo-(Au) Mineralization at Paraga Area, Nakhchivan District, Azerbaijan: Evidence from Mineral Paragenesis, Hydrothermal Alteration and Geochemical Studies

Authors: M. Kumral, A. Abdelnasser, M. Budakoglu, M. Karaman, D. K. Yildirim, Z. Doner, A. Bostanci

Abstract:

The Paraga area is located in the extreme eastern part of the Nakhchivan district, at the boundary with Armenia. The study area lies in the Ordubad region, about 9 km from Paraga village, at an elevation of 2300-2800 m above sea level, within a region of low-grade metamorphic porphyritic volcanic and plutonic rocks. Detailed field studies revealed that this area is composed mainly of metagabbro-diorite intrusive rocks of porphyritic character emplaced into meta-andesitic rocks. This complex is later intruded by unmapped olivine gabbroic rocks. The Cu-Mo-(Au) mineralization at the Paraga deposit is vein-type mineralization, essentially related to a quartz vein stockwork that cuts the dioritic rocks and is concentrated in the eastern and northeastern parts of the area along different directions (N80W, N25W, N70E and N45E). This mineralization is also associated with two shear zones directed N75W and N15E. The host porphyritic rocks were affected by intense sulfidation, carbonatization, sericitization and silicification, with pervasive hematitic alteration accompanying the mineralized quartz veins and quartz-carbonate veins. The sulfide minerals chalcopyrite, pyrite, arsenopyrite and sphalerite, as well as molybdenite, occur either within these mineralized quartz veins or disseminated in the highly altered rocks, and also at the contacts between the altered host rock and the veins. Gold occurs as inclusions disseminated in arsenopyrite and pyrite, as well as in their cracks.

Keywords: porphyry Cu-Mo-(Au), Paraga area, Nakhchivan, Azerbaijan, paragenesis, hydrothermal alteration

Procedia PDF Downloads 403
698 Understanding Student Engagement through Sentiment Analytics of Response Times to Electronically Shared Feedback

Authors: Yaxin Bi, Peter Nicholl

Abstract:

The rapid advancement of information and communication technologies (ICT) is profoundly influencing every aspect of Higher Education. It has transformed traditional teaching, learning, assessment and feedback into a new era of Digital Education. This also introduces many challenges in capturing and understanding student engagement with their studies in Higher Education. The School of Computing at Ulster University has developed a Feedback And Notification (FAN) online tool that is used to send students links to personalized feedback on their submitted assessments and to record each student's frequency of review of the shared feedback as well as the speed of collection. The student initially receives a personal email directing them to the feedback via a URL link that maps to the feedback created by the academic marker. This feedback is typically a Word or PDF report including comments and the final mark for the work submitted approximately three weeks before. When the student clicks on the link, their personal feedback is viewable in the browser. The FAN tool provides the academic marker with a report of when and how often a student viewed the feedback via the link. This paper presents an investigation into student engagement through analyzing the interaction timestamps and the frequency of review by the student. We propose an approach to modeling interaction timestamps and use sentiment classification techniques to analyze the data collected over the last five years for a set of modules. The data studied spans a number of final-year and second-year modules in the School of Computing. The paper details the quantitative analysis methods and further describes students' interactions with the feedback over time on each module studied. We project the students into different engagement groups based on the sentiment analysis results and then suggest early targeted intervention for the set of students seen to be under-performing according to our proposed model.
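As an illustration of how such interaction timestamps might be turned into engagement indicators, a minimal sketch is shown below; the column names, thresholds and grouping rule are assumptions, not the FAN tool's actual schema or the paper's model.

```python
# Minimal sketch (assumed columns and an arbitrary threshold) of turning
# FAN-style view timestamps into two indicators: speed of collection and
# frequency of review, then bucketing students into engagement groups.
import pandas as pd

views = pd.DataFrame({
    "student":   ["s1", "s1", "s2", "s3"],
    "released":  pd.to_datetime(["2022-03-01"] * 4),
    "viewed_at": pd.to_datetime(["2022-03-01 10:00", "2022-03-08 09:00",
                                 "2022-03-20 15:00", "2022-03-02 08:30"]),
})

per_student = views.groupby("student").agg(
    first_view=("viewed_at", "min"),
    n_views=("viewed_at", "count"),
    released=("released", "first"),
)
per_student["days_to_collect"] = (
    (per_student["first_view"] - per_student["released"]).dt.total_seconds() / 86400
)

def engagement_group(row, slow_days=14):
    # Illustrative rule only: slow collectors are flagged for follow-up.
    return "follow-up" if row["days_to_collect"] > slow_days else "engaged"

per_student["group"] = per_student.apply(engagement_group, axis=1)
print(per_student[["days_to_collect", "n_views", "group"]])
```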

Keywords: feedback, engagement, interaction modelling, sentiment analysis

Procedia PDF Downloads 98
697 Increasing Employee Productivity and Work Well-Being by Employing Affective Decision Support and a Knowledge-Based System

Authors: Loreta Kaklauskiene, Arturas Kaklauskas

Abstract:

This employee productivity and work well-being system aims to maximise the work performance of personnel and boost well-being in offices. Affective computing, decision support, and knowledge-based systems were used in our research. The basis of this system is our European patent application (No. EP 4 020 134 A1) and two Lithuanian patents (LT 6841, LT 6866). Our study examines ways to support efficient employee productivity and well-being by employing a mass-customised, personalised office environment. Efficient employee performance and well-being are managed by changing mass-customised office environment factors such as air pollution levels, humidity, temperature, data, information, knowledge, activities, lighting colours and intensity, scents, media, games, videos, music, and vibrations. These aspects of management generate a customised, adaptive environment for users, taking into account their emotional, affective, and physiological (MAP) states, which are measured and fed into the system. This research aims to develop an innovative method and system that analyses, customises and manages a personalised office environment according to a specific user's MAP states in a cohesive manner. Various values of work spaces (e.g., employee utilitarian, hedonic, and perceived values) are also established throughout this process, based on the measurements that describe MAP states and other aspects related to the office environment. The main contribution of our research is the development of a real-time mass-customised office environment to boost employee performance and well-being. Acknowledgment: This work was supported by Project No. 2020-1-LT01-KA203-078100 “Minimizing the influence of coronavirus in a built environment” (MICROBE) from the European Union's Erasmus+ program.

Keywords: effective decision support and a knowledge-based system, human resource management, employee productivity and work well-being, affective computing

Procedia PDF Downloads 96
696 Lying Decreases Relying: Deceiver's Distrust in Online Restaurant Reviews

Authors: Jenna Barriault, Reeshma Haji

Abstract:

Online consumer behaviour and reliance on online reviews may be more pervasive than ever, and this necessitates a better scientific understanding of the widespread phenomenon of online deception. The present research focuses on the understudied topic of deceiver's distrust, where those who engage in deception later have less trust in others, in the context of online restaurant reviews. The purpose was to examine deception and valence in online restaurant reviews and their effects on deceiver's distrust. Undergraduate university students (N = 76) completed an online study in which valence was manipulated by telling participants that either positive or negative reviews were influential and asking them to write a correspondingly valenced review. Deception was manipulated in the same task. Participants in the deception condition were asked to write an online restaurant review that ran counter to their actual experience of the restaurant (a negative review of a restaurant they liked, or a positive review of a restaurant they did not like). In the no-deception condition, participants were asked to write a review that matched their actual experience (based on the valence condition to which they were randomly assigned). Participants' trust was then assessed through various measures, including future reliance on online reviews. There was a main effect of deception on reliance on online reviews: consistent with deceiver's distrust, those who deceived reported that they would rely less on online reviews. This study demonstrates that even when participants are induced to write a deceptive review, it can result in deceiver's distrust, thereby lowering their trust in online reviews. If trust or reliance can be altered through deception in online reviews, people may start questioning the objectivity or true representation of a company based on such reviews. A primary implication is that people may reduce their reliance upon online reviews if they know the reviews are easily subject to manipulation. The findings also contribute to the limited research on deceiver's distrust in an online context, and further research is needed to clarify the specific conditions in which it is most likely to occur.

Keywords: deceiver’s distrust, deception, online reviews, trust, valence

Procedia PDF Downloads 115
695 The Fefe Indices: The Direction of Donald Trump's Tweets' Effect on the Stock Market

Authors: Sergio Andres Rojas, Julian Benavides Franco, Juan Tomas Sayago

Abstract:

An increasing amount of research demonstrates how market mood affects financial markets, with much of it aiming to show how Trump's tweets impacted US interest rate volatility. Following that lead, this work evaluates the effect that Trump's tweets had during his presidency on local and international stock markets, considering not just volatility but also the direction of the movement. Three indexes for Trump's tweets were created relating his activity to movements in the S&P 500 using natural language analysis and machine learning algorithms. The indexes consider Trump's tweet activity and the positive or negative market sentiment it might inspire. The first index captures tweets associated with negative movements in the S&P 500; the second captures positive movements, while the third captures the difference between upward and downward movements. A pseudo-investment strategy using the indexes produced statistically significant above-average abnormal returns. The findings also showed that the pseudo-strategy generated a higher return in the local market when applied to intraday data; however, only negative market sentiment caused this effect on daily data. These results suggest that the market reacted primarily to the negative ideas reflected in the negative index. In the international market, it is not possible to identify a pervasive effect. A rolling-window regression model was also estimated. The results show that the impact on the local and international markets is heterogeneous, time-varying, and differentiated by market sentiment; however, negative sentiment showed a significant correlation most of the time.
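A heavily simplified sketch of relating daily tweet sentiment counts to S&P 500 moves is given below; the columns, scoring, and index definitions are illustrative assumptions and are not the actual Fefe indices, which rely on natural language analysis and machine learning models.

```python
# Illustrative sketch only: one very simple way to relate daily tweet
# sentiment counts to S&P 500 moves. All data and index definitions here are
# toy assumptions, not the paper's Fefe indices.
import pandas as pd

data = pd.DataFrame({
    "date": pd.date_range("2019-01-02", periods=6, freq="B"),
    "neg_tweets": [4, 0, 7, 2, 9, 1],      # tweets scored negative that day
    "pos_tweets": [1, 3, 0, 5, 2, 6],      # tweets scored positive that day
    "sp500_ret":  [-0.012, 0.004, -0.020, 0.008, -0.015, 0.010],
})

# Toy "negative", "positive" and "difference" indices per day
data["neg_index"]  = data["neg_tweets"] * (data["sp500_ret"] < 0)
data["pos_index"]  = data["pos_tweets"] * (data["sp500_ret"] > 0)
data["diff_index"] = data["pos_index"] - data["neg_index"]

# Correlation of each toy index with next-day returns
for col in ["neg_index", "pos_index", "diff_index"]:
    corr = data[col].corr(data["sp500_ret"].shift(-1))
    print(col, round(corr, 3))
```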

Keywords: market sentiment, Twitter market sentiment, machine learning, natural language analysis

Procedia PDF Downloads 60
694 Bilateral Thalamic Hypodense Lesions in Computed Tomography

Authors: Angelis P. Barlampas

Abstract:

Purpose or Learning Objective: This case depicts the need for cooperation between the emergency department and the radiologist to achieve the best diagnostic result for the patient. The clinical picture must correlate well with the radiology report, and when it does not, this is not necessarily someone's fault. Careful interpretation and good knowledge of the limitations, advantages and disadvantages of each imaging procedure are essential for reaching the final diagnosis. Methods or Background: A patient was brought to the emergency department by his relatives. He was suddenly confused and his mental status was altered. He had no history of mental illness and was otherwise healthy. A computed tomography scan without contrast was performed, but it was unremarkable. Because of high clinical suspicion of probable neurologic disease, he was admitted to the hospital. Results or Findings: Another CT was done after 48 hours. It showed a hypodense region in both thalamic areas. Taking into account that the first CT was normal but the initial clinical picture was alarming, the repeat CT exam is highly suggestive of bilateral thalamic infarctions. Differential diagnosis: primary bilateral thalamic glioma, Wernicke encephalopathy, osmotic myelinolysis, Fabry disease, Wilson disease, Leigh disease, West Nile encephalitis, Creutzfeldt-Jakob disease, top-of-the-basilar syndrome, deep venous thrombosis, mild to moderate cerebral hypotension, posterior reversible encephalopathy syndrome, and neurofibromatosis type 1. Conclusion: As with any imaging procedure, CT has its limitations. An acute ischemic infarct may not be visible on CT; a period of 24 to 48 hours has to elapse before any abnormality can be seen. So, even when there are no obvious findings of an ischemic episode, such as paresis or hemiparesis, one must be careful not to attribute the patient's clinical signs to other conditions, such as toxic effects, metabolic disorders, psychiatric symptoms, etc. Further investigation with MRI, or at least a repeated CT, must be done.

Keywords: CNS, CT, thalamus, emergency department

Procedia PDF Downloads 109
693 Comparative Study to Evaluate the Efficacy of Control Criterion in Determining Consolidation Scope in the Public Sector

Authors: Batool Zarei

Abstract:

This study aims to determine whether the control criterion, with its two elements of power and benefit, which is introduced as the criterion defining the consolidation scope in national and international public sector (and also private sector) accounting standards, is efficient enough. The methodology of this study is comparative, and the results are broadly generalizable due to the importance of the sample of countries studied. The findings indicate that, in spite of the pervasive use of the control criterion (comprising the two elements of power and benefit), the criteria for determining the existence of control in public sector accounting standards are not efficient enough to determine the consolidation scope of whole-of-government financial statements in a way that meets the decision-making and accountability needs of managers, policy makers and supervisors, especially parliament. Therefore, the researcher believes that for determining the consolidation scope in the public sector, in addition to the economic view, attention should be paid to budgetary, legal and statistical concepts as well as practical and financial risk, and indicators should be defined for proving the existence of control (power and benefit) that include accountability relationships (budgetary relation, legal form and nature of activity). These findings also reveal the necessity of passing comprehensive public financial management (PFM) legislation in order to redefine the characteristics of public sector entities and the scope of whole-of-government financial statements, and to review the duties of statistics organizations and central banks in preparing government financial statistics and national accounts, in order to achieve sustainable development and resilient economy goals.

Keywords: control, consolidation scope, public sector accounting, government financial statistics, resilient economy

Procedia PDF Downloads 255
692 Effect of a Polyherbal Gut Therapy Protocol on Changes in Gut and Behavioral Symptoms of Antibiotic-Induced Dysbiosis in Autistic Babies

Authors: Dinesh K. S., D. R. C. V. Jayadevan

Abstract:

Autism is the most prevalent of the subset of disorders organized under the umbrella of pervasive developmental disorders. After the publication of Andrew Wakefield's paper in The Lancet, many critics have denied the gut-autism connection without looking into the matter; the British Medical Journal even published an editorial on the issue (BMJ 2010; 340:c1807). But Ayurveda has ample evidence to support this connection: dysbiosis, yeast overgrowth of the gut, nutritional deficiencies, enzyme deficiencies, essential fatty acid deficiencies, gastroesophageal reflux disease, indigestion, inflammatory bowel disease, and chronic constipation with its cascade are a few examples to note. The purpose of this paper is to present the observed changes in the behavioural symptoms of autistic babies after a gut management protocol, which is a usual component of our autism treatment plan, especially after the dysbiotic changes that follow antibiotic administration. The research question is whether there is any correlation between significant changes in gut symptoms and in the behavioral problems of autistic babies, especially after a dysbiosis induced by antibiotics. A retrospective analysis of the case sheets of autistic patients admitted to Vaidyaratnam P. S. Varier Ayurveda College Hospital, Kottakkal, Kerala, India, from September 2010 was undertaken for data processing; autistic patients routinely come to this hospital as part of their usual course of treatment. We investigated 40 cases, diagnosed as autistic by clinical psychologists from different institutions, who had dysbiosis induced by antibiotics. There was a significant change in gut symptoms before and after treatment (p<0.05 for most components) and a significant change in behavioral symptoms before and after treatment (p<0.05 for most components). The correlation between the change in gut symptoms and the change in behavioral symptoms after treatment is +0.86. Conclusion: the selected polyherbal Ayurveda treatment has a significant role to play in changing abnormal behaviors in autistic babies, and these changes correlate positively with changes in gut symptoms induced by the dysbiosis of antibiotic intake.

Keywords: ayurveda, autism, dysbiosis, antibiotic

Procedia PDF Downloads 624
691 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although previous studies have made great efforts to come up with various methods, their performance, especially in terms of accuracy, has fallen short, and there is still considerable room for improvement. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the handwriting in the beginning and ending zones into small fragments. Similar fragments of beginning strokes are grouped together to create a beginning cluster, and similarly, the ending strokes are grouped to create an ending cluster. These two clusters lead to two codebooks (beginning and ending), built by choosing the center of every group of similar fragments. The writings under study are then represented by computing the probability of occurrence of codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing the distance between their respective probability distributions. The evaluations were carried out on the ICFHR standard dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
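The codebook idea can be sketched in a few lines: cluster fragment descriptors, take the cluster centres as the codebook, describe each document as a probability distribution over codebook entries, and compare documents by a distance between distributions. The sketch below uses random vectors in place of real beginning/ending stroke descriptors and a chi-square-style distance; it illustrates the approach, not the paper's implementation.

```python
# Sketch of the codebook idea (not the paper's exact pipeline). Fragment
# features here are random stand-ins for real beginning/ending stroke
# descriptors extracted from contours.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def build_codebook(fragment_features, k=16):
    # Cluster centres of similar fragments form the codebook
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(fragment_features)

def document_distribution(codebook, fragment_features):
    # Probability of occurrence of each codebook pattern in one document
    labels = codebook.predict(fragment_features)
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def chi2_distance(p, q, eps=1e-10):
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Toy data: fragments pooled from training writers, then two query documents
training_fragments = rng.normal(size=(500, 8))
codebook = build_codebook(training_fragments, k=16)
doc_a = document_distribution(codebook, rng.normal(size=(60, 8)))
doc_b = document_distribution(codebook, rng.normal(size=(60, 8)))
print("distance:", chi2_distance(doc_a, doc_b))
```

Writer identification then reduces to assigning a query document to the enrolled writer whose distribution is closest under this distance.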

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 508
690 Disabling Barriers to Community Participation in Everyday Environments from the Perspective of People with Disabilities

Authors: Leah Samples

Abstract:

Barriers to participation persist for people with disabilities despite a long history of legislation designed to support equal opportunity. Historically, the focus has been placed solely on structural barriers, but newer research highlights the importance of looking at social and informational barriers to participation. Collectively, these barriers prevent people with disabilities from fully engaging in community life and, consequently, from achieving full citizenship. Disability is crucial to understanding the meaning of citizenship. Drawing upon the influences of feminist, critical race and human rights theorists, citizenship can be defined as a set of rights and responsibilities that an individual has because they are part of a community. However, when those rights are taken away or denied, one's citizenship is in question. Employing this definition of citizenship allows one to examine how barriers to citizenship present themselves in societies that are built around the ideal of a non-disabled person. To understand at a deeper level how this notion of citizenship manifests itself, this study seeks to unearth commonly experienced barriers to participation in the lives of visually impaired adults in everyday environments. The purpose of this qualitative study is to explore such barriers in leisure settings (e.g., restaurants, stores, etc.). Thirty adults with visual impairments participated in semi-structured interviews as well as participant observations. The results suggest that barriers to participation are still pervasive in everyday environments and subsequently have an adverse effect on participation and belonging for people with visual impairments. This study highlights the importance of exploring and acknowledging the daily tensions that persons with disabilities face in their communities. A full exploration of these tensions is necessary in order to develop solutions and tools to create more just communities for everyone.

Keywords: barriers, citizenship, belonging, everyday environments

Procedia PDF Downloads 407
689 The Correlation Between Self-Talk and COVID-19

Authors: Abigail Vallance

Abstract:

Current research shows a correlation between declining mental health in the United States and the effect of COVID-19 on young adults and adolescents. Anxiety and depression are the two most common psychiatric illnesses and are also leading impediments to academic success. Spending six hours a day or more using computers is associated with higher risks of depression, and this amount of screen time remains pervasive in present-day academia. Along with many hours on the computer, common issues affecting students' academic performance during online school included technical difficulties, poor support services, and difficulty adapting to online learning. Given the volume of requirements with unrealistic deadlines, and on top of experiencing COVID-19 itself, students showed an increase in their levels of anxiety. Besides the prevalent effect of COVID-19 on mental health, many studies show a correlation between mental health, COVID-19, academia, and sports performance. Academic research has shown that negative self-talk, in relation to one's self-efficacy, is correlated with poorer academic performance; for example, students who reported negative self-efficacy while test-taking obtained poorer test results. Furthermore, in sports, negative effects were found when athletes engaged in negative self-talk. Overall, motivational self-talk, whether by oneself or through teammates and coaches, correlated with better performance than regular self-talk in sports. In relation to sports performance, the COVID-19 pandemic canceled entire sports seasons for millions of adolescents across the country. Many student-athletes use their sport as an emotional outlet and a respite from mental health struggles, but this was taken away. The purpose of this study is to address the current increase in mental health diagnoses in adolescents, including suicide rates, after the COVID-19 pandemic began in 2020. This literature analysis is ongoing.

Keywords: self-talk, COVID-19, mental health, adolescents

Procedia PDF Downloads 51
688 Electronic Government around the World: Key Information and Communication Technology Indicators

Authors: Isaac Kofi Mensah

Abstract:

Governments around the world are adopting information and communication technologies (ICTs) because of the important opportunities they provide, through e-government (EG), to modernize government public administration processes and deliver quality and efficient public services. Almost every country in the world is adopting ICT in its public sector administration (EG) to modernize and change the traditional processes of government, increase citizen engagement and participation in governance, and provide timely information to citizens. This paper, therefore, presents the adoption, development and implementation of EG in regions globally, as well as the ICT indicators around the world that are making EG initiatives successful. Europe leads the world in its EG adoption and development index, followed by the Americas, Asia, Oceania and Africa. There is gradual growth in ICT indicators: Internet access and usage are rising, broadband penetration is increasing, more individuals are using the Internet at home, fixed telephone use is declining, and mobile cellular subscriptions have increased year on year. Although the lack of ICT infrastructure is a major challenge to EG adoption and implementation around the world, it is especially pervasive in Africa, hampering the expansion of Internet access and the provision of broadband, and is hence a barrier to the successful adoption, development and implementation of EG initiatives on the continent. But with the general improvement in ICT indicators around the world, countries in Europe, the Americas, Asia, the Arab States, Oceania and Africa have a huge opportunity to enhance public service delivery through the adoption of EG. Countries within these regions cannot fail their citizens, who desire to enjoy enhanced and efficient public service delivery from government and its many state institutions.

Keywords: e-government development index, e-government, indicators, information and communication technologies (ICTs)

Procedia PDF Downloads 296
687 On Adaptive and Auto-Configurable Apps

Authors: Prisa Damrongsiri, Kittinan Pongpianskul, Mario Kubek, Herwig Unger

Abstract:

Apps are today the most important means of adapting mobile phones and computers to the special needs of their users. Location- and context-sensitive programs are key to supporting the interaction of users with their environment and to avoiding overload with a plethora of dispensable information. This contribution shows how trusted, secure and truly bi-directional communication and interaction between users and their environment can be established and used, e.g., in the field of home automation.

Keywords: apps, context-sensitive, location-sensitive, self-configuration, mobile computing, smart home

Procedia PDF Downloads 393
686 The Role of Mass Sport Guidance in the Health Service Industry of China

Authors: Qiu Jian-Rong, Li Qing-Hui, Zhan Dong, Zhang Lei

Abstract:

Facing the demands of economic restructuring and the risk of economic stagnation due to an ageing population, the health service industry will play a very important role in the future structure of industry. In this process, the orientation of Chinese sports medicine, its integration with preventive medicine, and its combination with databases and cloud computing will all be involved.

Keywords: China, the health service industry, mass sport, data bank

Procedia PDF Downloads 622
685 Examining Neo-colonialism and Power in Global Surgical Missions: An Historical, Practical and Ethical Analysis

Authors: Alex Knighton, Roba Khundkar, Michael Dunn

Abstract:

Neo-colonialism is defined as the use of economic, political, cultural, or other pressures to control or influence other countries, especially former dependencies, and concerns have been raised about its presence in surgical missions. Surgical missions aim to rectify the huge disparity in surgical access worldwide, but their ethics must be carefully considered. This is especially in light of colonial history which affects international relations and global health today, to ensure that colonial attitudes are not influencing efforts to promote equity. This review examines the history of colonial global health, demonstrating that global health initiatives have consistently been used to benefit those providing them, and then asks whether elements of colonialism are still pervasive in surgical missions today. Data was collected from the literature using specified search terms and snowball searching, as well as from international expert web-based conferences on global surgery ethics. A thematic analysis was then conducted on this data, resulting in the identification of six themes which are identifiable in both past and present global health initiatives. These six themes are power, lack of understanding or respect, feelings of superiority, exploitation, enabling of dependency, and acceptance of poorer standards of care. An ethical analysis follows, concluding that the concerns of power and neo-colonialism in global surgery would be addressed by adopting a framework of procedural justice that promotes a refined governance process in which stakeholders are able to propose and reject decisions that affect them. The paper argues that adopting this model would address concerns of the power disparity in the field directly, as well as promoting an ethical framework to enable the other concerns of power disparity and neo-colonialism identified in the present analysis to be addressed.

Keywords: medical ethics, global surgery, global health, neocolonialism, surgical missions

Procedia PDF Downloads 93
684 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud

Authors: N. Nalini, Bhanu Prakash Gopularam

Abstract:

The term data security refers to the degree of resistance or protection given to information against unintended or unauthorized access. The core principles of information security are confidentiality, integrity and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS and PaaS services. With cloud adoption, confidential enterprise data are moved from organization premises to an untrusted public network, and as a result the attack surface has increased manifold. Several cloud computing platforms, such as OpenStack, Eucalyptus and Amazon EC2, allow users to build and configure public, hybrid and private clouds. While traditional encryption based on a PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (ID-PKC) overcomes this problem by using publicly identifiable information for generating the keys, and it works well with decentralized systems: users can exchange information securely without having to manage any trust information. Another advantage is that access control information (a role-based access control policy) can be embedded into the data, unlike in PKI, where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization and has support for public key infrastructure for auto services. In this paper, we explain the OpenStack security architecture and evaluate the PKI infrastructure piece for data confidentiality. We provide a method to integrate ID-PKC schemes for securing data in transit and at rest and explain the key measures for safeguarding data against security attacks. The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1363.3 standard and secure communication to other cloud services.
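For readers unfamiliar with the ID-PKC workflow (setup, key extraction from an identity string, encryption to an identity, decryption), the following purely conceptual sketch may help; every function is a hypothetical placeholder with stubbed "crypto", and none of it reflects the JPBC library or the OpenStack Keystone API.

```python
# Conceptual sketch of the ID-PKC message flow only. Every function is a
# hypothetical placeholder -- NOT the JPBC library or OpenStack Keystone API --
# and the "crypto" is a stub shown purely to illustrate setup, extract,
# encrypt-to-identity and decrypt.
from dataclasses import dataclass

@dataclass
class MasterKeyPair:
    master_secret: str      # held by the Private Key Generator (PKG)
    public_params: str      # published to all tenants

def pkg_setup():
    return MasterKeyPair(master_secret="msk-placeholder", public_params="params-placeholder")

def pkg_extract(master: MasterKeyPair, identity: str) -> str:
    # Real schemes derive the private key from the identity and master secret.
    return f"sk({identity})"

def ibe_encrypt(public_params: str, identity: str, plaintext: str) -> str:
    # Stub: a real scheme uses pairings; here we only tag the ciphertext.
    return f"enc[{identity}]:{plaintext}"

def ibe_decrypt(private_key: str, ciphertext: str) -> str:
    tag, _, body = ciphertext.partition(":")
    expected = f"enc[{private_key[3:-1]}]"
    return body if tag == expected else ""

master = pkg_setup()
alice_id = "alice@tenant-a.example"          # publicly identifiable information
ct = ibe_encrypt(master.public_params, alice_id, "volume snapshot key")
sk = pkg_extract(master, alice_id)
print(ibe_decrypt(sk, ct))
```

The practical point is that the sender needs only the recipient's identity string and the published parameters, so no per-user certificate management is required.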

Keywords: data confidentiality, identity based cryptography, secure communication, OpenStack Keystone, token scoping

Procedia PDF Downloads 379
683 Silent Struggles: Unveiling Linguistic Insights into Poverty in Ancient Egypt

Authors: Hossam Mohammed Abdelfattah

Abstract:

In ancient Egypt, poverty, recognized as the foremost challenge, was extensively addressed in teachings, wisdom literature, and literary texts. These sources vividly depicted the suffering of a class deprived of life's pleasures. The ancient Egyptian language evolved to introduce terms reflecting poverty and hunger, underscoring the society's commitment to acknowledging and cautioning against this prevalent issue. Among the notable expressions, iwty.f emerged during the Middle Kingdom, symbolizing "the one without property" and signifying the destitute poor. iwty n.f, traced back to the Pyramid Texts era, referred to "the one who has nothing" or, simply, the poor. Another term, iwty-sw, emphasized the state of possessing nothing. rA-awy, originating in the Middle Kingdom Period, initially meant "poverty and poor" and expanded to signify poverty in various texts; with the addition of the preposition "in," it conveyed strength given to the poor. During the First Intermediate Period, sny-mnt denoted going through a crisis or suffering, possibly referencing a widespread disease or plague, and encompassed meanings of sickness, pain, and anguish. The term sq-sn, introduced in Middle Kingdom texts, conveyed the notion of becoming miserable. sp-Xsy represented a temporal expression reflecting a period of misery or poverty, with Xsy indicating distress or misery. The term qsnt, appearing in Middle Kingdom texts, held meanings of painful, difficult, harsh, miserable, emaciated, and in bad condition; its feminine form, qsn, denoted anxiety and turmoil. Finally, tp-qsn encapsulated the essence of misery and unhappiness. In essence, these expressions provide linguistic insights into the multifaceted experience of poverty in ancient Egypt, illustrating the society's keen awareness of and efforts to address this pervasive challenge.

Keywords: poverty, poor, suffering, misery, painful, ancient Egypt

Procedia PDF Downloads 49
682 Quantitative Research on the Effects of Following Brands on Twitter on Consumer Brand Attitude

Authors: Yujie Wei

Abstract:

Twitter uses a variety of narrative methods (e.g., messages, featured videos, music, and actual events) to strengthen its cultivation effect. Consumers receive mass-produced brand stories or images made by brand managers according to strict market specifications. Drawing on cultivation theory, this quantitative research investigates how following a brand on Twitter for 12 weeks can cultivate consumers' attitude toward the brand and influence their purchase intentions. We conducted three field experiments on Twitter to test the cultivation effects of following a brand for 12 weeks on consumer attitude toward the followed brand. The cultivation effects were measured by comparing the changes in consumer attitudes before and after following a brand over time. The findings of our experiments suggest that when consumers are exposed to a brand's stable, pervasive, and recurrent tweets on Twitter for 12 weeks, their attitude toward the brand can be significantly changed, which confirms the cultivating effect on consumer attitude. The results also indicate that branding activities on Twitter, when properly implemented, can be very effective in changing consumer attitudes toward a brand, increasing purchase intentions, and increasing consumers' willingness to spread word-of-mouth for the brand on social media. The cultivation effects are moderated by brand type and consumer age. The research provides three major marketing implications. First, Twitter marketers should create unique content to engage their brand followers and change their brand attitude through steady, cumulative exposure to branding activities on Twitter. Second, there is a significant moderating effect of brand type on the cultivation effects, so Twitter marketers should align their branding content with the brand type to better meet the needs and wants of consumers for different types of brands. Finally, Twitter marketers should adapt their tweeting strategies to the media consumption preferences of different age groups in their target markets. This empirical research proves that content is king.

Keywords: tweeting, cultivation theory, consumer brand attitude, purchase intentions, word-of-mouth

Procedia PDF Downloads 103
676 Exploring the Visual Representations of Neon Signs and the Vernacular Tacit Knowledge of Neon Making

Authors: Brian Kwok

Abstract:

Hong Kong is well known as "the Pearl of the Orient" for its spectacular night view, with a vast amount of decorative neon lights on the streets. Neon signs were first used as a pervasive medium of communication for all kinds of commercial advertising, ranging from movie theatres to nightclubs and department stores, and were later appropriated by artists as a medium of artwork. As a well-established visual language, neon signage displays texts in a bilingual format due to British colonial influence, sometimes arranged in opposite reading orders. Research on neon signs as a visual representation is rare but significant, because they are part of people's collective memories of the unique cityscapes that reflect the shifting values of people's daily lives and cultural identity. Nevertheless, under the current policy of removing abandoned neon signs, their total number has declined dramatically in recent years. The Buildings Department estimated 120,000 unauthorized signboards (including neon signs) in Hong Kong in 2013, and such signs have been removed at an estimated rate of 1,600 per year since 2006. In other words, the vernacular cultural values and historical continuity of neon signs will gradually vanish if no immediate action is taken to document them for research and cultural preservation. Therefore, the Hong Kong Neon Signs Archive project was established in June 2015, and over 100 neon signs have been photo-documented so far. By content analysis, this project explores the two components of neon signs: the use of visual languages and the vernacular tacit knowledge of neon makers. It attempts to answer these questions about Hong Kong's neon signs: 'What are the ways in which visual representations are used to produce our cityscapes and streetscapes?'; 'What are the visual languages and conventions of usage in different business types?'; 'What tacit knowledge is applied when producing these visual forms of neon signs?'

Keywords: cityscapes, neon signs, tacit knowledge, visual representation

Procedia PDF Downloads 296
680 Developing a Framework for Open Source Software Adoption in a Higher Education Institution in Uganda: A Case of Kyambogo University

Authors: Kafeero Frank

Abstract:

This study aimed at developing a framework for open source software adoption in an institution of higher learning in Uganda, with KIU as the study area. There were four main research questions, based on individual staff interaction with the open source software forum, perceived FOSS characteristics, organizational characteristics and external characteristics as factors that affect open source software adoption. The researcher used a causal-correlational research design to study the effects of these variables on open source software adoption. A quantitative approach was used, with a self-administered questionnaire given to a purposively and randomly sampled group of university ICT staff. The resultant data were analyzed using means, correlation coefficients and multivariate multiple regression analysis as statistical tools. The study reveals that individual staff interaction with the open source software forum and perceived FOSS characteristics were the primary factors that significantly affect FOSS adoption, while organizational and external factors were secondary, with no significant effect but a significant correlation with open source software adoption. It was concluded that for effective open source software adoption to occur, more effort must be placed on the primary factors, with subsequent reinforcement of the secondary factors. Lastly, recommendations were made, in line with the conclusions, for a Kyambogo University framework for open source software adoption in institutions of higher learning. Recommended areas of further research include: a stakeholders' analysis of open source software adoption in Uganda, with its challenges and way forward; an evaluation of the Kyambogo University framework for open source software adoption in institutions of higher learning; framework development for cloud computing adoption in Ugandan universities; and a framework for FOSS development in the Ugandan IT industry.

Keywords: open source software, organisational characteristics, external characteristics, cloud computing adoption

Procedia PDF Downloads 67
679 Cybersecurity Engineering BS Degree Curricula Design Framework and Assessment

Authors: Atma Sahu

Abstract:

After 9/11, there will only be cyberwars. As cyberwars increase in intensity, the country's cybersecurity workforce faces growing hiring and retention issues. Currently, many organizations have unfilled cybersecurity positions, and, to a lesser degree, their cybersecurity teams are understaffed. There is therefore a critical need to develop new programs to help meet the market demand for cybersecurity engineers (CYSE) and personnel. Coppin State University in the United States was responsible for developing a cybersecurity engineering BS degree program. The CYSE curriculum design methodology consisted of three parts. First, the pervasive framework of the ACM Cross-Cutting Concepts standard helped curriculum designers and students explore connections among the core courses' knowledge areas and reinforce the security mindset conveyed in them. Second, the core course context was created to assist students in resolving security issues in authentic cyber situations involving cybersecurity systems in various aspects of industrial work, while adhering to the NIST standards framework. The last part of the CYSE curriculum design was the institutional student learning outcomes (SLOs), integrated and aligned in content courses, representing more detailed outcomes and emphasizing what learners can do over merely what they know. The CYSE program's core courses express competencies and learning outcomes using action verbs from Bloom's Revised Taxonomy. This aspect of the CYSE BS degree program's design is thus based on three pillars: the ACM, NIST, and SLO standards, which all CYSE curriculum designers should know. This unique CYSE curriculum design methodology will also address how students and the CYSE program will be assessed and evaluated. It is also critical that educators, program managers, and students understand the importance of staying current in this fast-paced CYSE field.

Keywords: cyber security, cybersecurity engineering, systems engineering, NIST standards, physical systems

Procedia PDF Downloads 80
678 A Survey on Constraint Solving Approaches Using Parallel Architectures

Authors: Nebras Gharbi, Itebeddine Ghorbel

Abstract:

In recent years, with the advancement of multicore computing, the constraint programming community has tried to benefit from the capacity of new machines and make the best use of them through several parallel schemes for constraint solving. In this paper, we propose a survey of the different approaches proposed for solving constraint satisfaction problems using parallel architectures. These approaches use a parallel architecture in different ways: the problem may be solved concurrently by several solvers, or it may be split across solvers.

Keywords: constraint programming, parallel programming, constraint satisfaction problem, speed-up

Procedia PDF Downloads 314
677 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we discuss overfitting and describe different approaches for evaluation and performance comparison of classification methods. In classification, the main objective is to categorize different items and assign them to groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data; of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see whether any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the classification methods and utilized decision trees, with k-fold cross-validation to prune the tree. A decision tree is a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
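An equivalent sketch of the classification step, written in Python/scikit-learn rather than the R toolchain used in the study, with synthetic node features and a synthetic "divergent" label standing in for the real data, is:

```python
# Sketch only (the study itself used R): fit a decision tree on node features
# and estimate accuracy with k-fold cross-validation. Features and the
# "divergent" label below are synthetic stand-ins, not the UCI dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([
    rng.poisson(5, n),          # degree
    rng.random(n),              # transitivity
    rng.random(n),              # density of ego network
])
# Synthetic rule standing in for the real "divergent node" label
y = ((X[:, 1] > 0.6) & (X[:, 0] > 4)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)   # shallow tree ~ pruning
scores = cross_val_score(tree, X, y, cv=5)                   # 5-fold cross-validation
print("fold accuracies:", np.round(scores, 3), "mean:", scores.mean().round(3))
```

Limiting the depth plays the role that cross-validated pruning plays in recursive-partitioning packages: it keeps only the most informative splits and guards against overfitting.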

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 150
676 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational fluid dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon and a Reynolds stress model, RSM) as well as LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to partition an unstructured mesh of 76 million elements and were compared in terms of efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code_Saturne, typically on 32768 or more processors in parallel. Visualisations were performed by means of ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results showed that an RSM represents an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
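For reference, the Reynolds number of such a rotating pump is typically defined from the impeller angular speed and diameter; a standard form is shown below as a reconstruction, since the benchmark documentation specifies the exact characteristic scales and prefactor.

```latex
% Standard rotational Reynolds number (a reconstruction; the FDA benchmark
% documentation gives the exact characteristic scales and prefactor).
\[
  \mathrm{Re} = \frac{\rho\,\Omega\,D^{2}}{\mu}
\]
```

where ρ is the blood-analog density, μ its dynamic viscosity, Ω the impeller angular velocity, and D the impeller diameter.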

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 379