Search results for: digital divide for the disabled
1213 A Network-Theoretical Perspective on Music Analysis
Authors: Alberto Alcalá-Alvarez, Pablo Padilla-Longoria
Abstract:
The present paper describes a framework for constructing mathematical networks that encode relevant musical information from a music score for structural analysis. These graphs encompass statistical information about musical elements such as notes, chords, rhythms, and intervals, and the relations among them, and thus help in visualizing and understanding important stylistic features of a music fragment. To build such networks, musical data is parsed from a digital symbolic music file. This data undergoes analytical procedures from graph theory, such as measuring node centrality, community detection, and entropy calculation. The resulting networks reflect important structural characteristics of the fragment in question: predominant elements, the connectivity between them, and the complexity of the information it contains. Music pieces in different styles are analyzed, and the results are contrasted with the outcomes of traditional analysis to show the consistency and potential utility of this method for music analysis.
Keywords: computational musicology, mathematical music modelling, music analysis, style classification
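The graph measures the abstract names (degree centrality for predominant elements, entropy for complexity) can be illustrated with a small, self-contained sketch. The note sequence, the transition-network construction, and the exact formulas below are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

# Toy note sequence (invented for illustration, not taken from the paper).
notes = ["C", "E", "G", "C", "A", "C", "F"]

# Build an undirected transition network: nodes are notes,
# edges connect consecutive notes in the fragment.
edges = set()
for a, b in zip(notes, notes[1:]):
    if a != b:
        edges.add(frozenset((a, b)))

# Degree centrality: degree / (n - 1), where n is the number of distinct notes.
degree = Counter()
for e in edges:
    for node in e:
        degree[node] += 1
n = len(set(notes))
centrality = {v: d / (n - 1) for v, d in degree.items()}

# Shannon entropy of the note distribution, as a complexity measure.
counts = Counter(notes)
total = len(notes)
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

print(max(centrality, key=centrality.get))  # → C (the predominant note)
print(round(entropy, 3))                    # → 2.128
```

On this fragment the tonic C dominates the network, matching the intuition that centrality surfaces the stylistically predominant elements.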
Procedia PDF Downloads 103
1212 Information and Communication Technology (ICT) Education Improvement for Enhancing Learning Performance and Social Equality
Authors: Heichia Wang, Yalan Chao
Abstract:
Social inequality is a persistent problem, and education is one way to address it. At present, educational resources are often less geographically accessible to vulnerable groups; communication equipment, by contrast, is easier for them to obtain. Now that information and communication technology (ICT) has entered the field of education, we can benefit from the convenience it provides, and the mobility it brings makes learning independent of time and place. With mobile learning, teachers and students can start discussions in an online chat room without the limitations of time or place. However, precisely because mobile learning is so convenient, people tend to discuss problems in short online texts that lack detailed information, and the online environment is not well suited to expressing ideas fully. The ICT education environment may therefore cause misunderstanding between teachers and students. In order to help teachers and students better understand each other's views, this study aims to analyze students' short texts and classify students into several types of learning problems. In addition, this study attempts to extend the description of possible omissions in short texts by using external resources prior to classification. In short, by applying short text classification, this study can point out each student's learning problems and inform the instructor where the main focus of future courses should be, thus improving the ICT education environment. To achieve these goals, this research uses a convolutional neural network (CNN) to analyze short discussion content between teachers and students in an ICT education environment, dividing students into several main types of learning problem groups to facilitate answering student problems.
In addition, this study will further cluster sub-categories of each major learning type to indicate specific problems for each student. Unlike most neural network approaches, this study attempts to extend short texts with external resources before classifying them, in order to improve classification performance. In the empirical process, the chat records between teachers and students and the course materials will be pre-processed. An action system will be set up to compare the most similar parts of the teaching material with each student's chat history to improve classification performance. The short text classification function then uses a CNN to classify the enriched chat records into several major learning problems based on theory-driven labels. By applying these modules, this research hopes to clarify the main learning problems of students and inform teachers where they should focus future teaching.
Keywords: ICT education improvement, social equality, short text analysis, convolutional neural network
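The retrieval step described above - expanding a short chat message with the most similar part of the course material before classification - can be sketched with a simple bag-of-words cosine similarity. The materials and chat text are invented, and the authors' actual system uses a CNN with richer external resources; this only illustrates the enrichment idea:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical course materials and a short chat message (invented).
materials = {
    "loops": "for loop while loop iteration repeat until condition",
    "functions": "function parameter return value call scope",
}
chat = "my loop never stops the condition is wrong"

# Pick the most similar material section and use it to expand the short text
# before handing it to the classifier.
best = max(materials, key=lambda k: cosine(bow(chat), bow(materials[k])))
expanded = chat + " " + materials[best]
print(best)  # → loops
```

The enriched string `expanded` would then be the input to the classification stage, giving the model more context than the original eight-word message.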
Procedia PDF Downloads 128
1211 Digital Homeostasis: Tangible Computing as a Multi-Sensory Installation
Authors: Andrea Macruz
Abstract:
This paper explores computation as a process for design by examining how computers can become more than an operative strategy in a designer's toolkit. It documents this process by building on concepts from neuroscience and Antonio Damasio's theory of homeostasis, the control of bodily states through feedback intended to keep conditions favorable for life. To do this, it follows a methodology based on algorithmic drawing and discusses the outcomes of three multi-sensory design installations that culminated from a course in an academic setting. It explains both the studio process that took place to create the installations and the computational process that was developed, related to the fields of algorithmic design and tangible computing. It discusses how designers can use computational range to achieve homeostasis related to sensory data in a multi-sensory installation. The outcomes show clearly how people and computers interact through different sensory modalities and affordances, and they propose using computers as meta-physical stabilizers rather than tools.
Keywords: algorithmic drawing, Antonio Damasio, emotion, homeostasis, multi-sensory installation, neuroscience
Procedia PDF Downloads 108
1210 Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory
Authors: Yi Wang, Greg Hearn, Terry Flew
Abstract:
In order to understand how cultural innovation occurs, this paper uses the Triple Helix framework to explore the interaction between universities, creative industries, and government in the creative economy of Shenzhen, China. During the past two decades, the Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken actions to strengthen connections with creative industries to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity to better understand how the Triple Helix framework might apply to the creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of 'creative industries' has been introduced into policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital. It claims to generate a 'new economy' of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also channel commercial inputs into the creative economy, dynamically reshaping the city into an innovative culture. In particular, this paper concentrates on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry, and government interact. In its cultural policy, China has sought to enhance the brand of its manufacturing industry, aiming to shift the image of 'Made in China' to 'Created in China' and to give Chinese brands more international competitiveness in a global economy.
Shenzhen is a notable example in China of an international knowledge-based city following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan 'Build a Leading Cultural City' to signal the government's strong will to develop Shenzhen's cultural capacity and creativity. The vision of Shenzhen is to become a cultural innovation center, a regional cultural center, and an international cultural city. However, there has been a lack of attention to triple helix interactions in the creative industries in China. In particular, there is limited knowledge about how co-located interactions in creative spaces within triple helix networks influence city-based innovation; that is, the roles of the participating institutions need to be better understood. Thus, this paper discusses the interplay between universities, creative industries, and government in Shenzhen. Secondary analysis and documentary analysis are used as methods in an effort to practically ground and illustrate the theoretical framework. Furthermore, this paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combinations of resources generated from the consolidation of, and interactions among, the institutions. This study thus provides an innovative lens to understand the components, relationships, and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.
Keywords: cultural policy, creative industries, creative city, Triple Helix
Procedia PDF Downloads 206
1209 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. 
For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but it does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
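As an illustration of the 1-hot encoding mentioned above, the sketch below maps a small integer variable onto binary variables with a quadratic penalty enforcing the "exactly one bit set" constraint; brute force over bitstrings stands in for the annealer, and the toy objective and penalty weight are invented for the example, not taken from the paper:

```python
from itertools import product

# Toy objective: minimize (x - 2)^2 for integer x in {0, 1, 2, 3},
# with x expressed in 1-hot form: binaries b0..b3, exactly one set.
values = [0, 1, 2, 3]
P = 10.0  # penalty weight enforcing the 1-hot constraint (assumed value)

def energy(bits):
    """QUBO-style energy: objective plus 1-hot constraint penalty."""
    x = sum(v * b for v, b in zip(values, bits))
    objective = (x - 2) ** 2
    onehot_penalty = P * (sum(bits) - 1) ** 2
    return objective + onehot_penalty

# Exhaustive search over all 2^4 bitstrings, mimicking what the
# annealer explores via quantum tunneling on real hardware.
best = min(product([0, 1], repeat=4), key=energy)
print(best)  # → (0, 0, 1, 0), i.e. x = 2
```

The penalty weight must dominate the objective's scale so that infeasible bitstrings (zero or several bits set) never beat feasible ones; choosing such weights is part of the embedding problem the abstract describes.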
Procedia PDF Downloads 526
1208 Visual Analytics in K-12 Education: Emerging Dimensions of Complexity
Authors: Linnea Stenliden
Abstract:
The aim of this paper is to understand the learning conditions that emerge when visual analytics is implemented and used in K-12 education. To date, little attention has been paid to the role visual analytics (digital media and technology that highlight visual data communication in order to support analytical tasks) can play in education, and to the extent to which these tools can provide actionable data for young students. This study was conducted in three public K-12 schools, in four social science classes with students aged 10 to 13 years, over a period of two to four weeks at each school. Empirical data were generated using video observations and analyzed with the help of Latour's metaphors. The learning conditions are found to be distinguished by broad complexity, characterized by four dimensions that emerge from the actors' deeply intertwined relations in the activities. In relation to these dimensions, the paper argues that novel approaches to teaching and learning could benefit students' knowledge building as they work with visual analytics, analyzing visualized data.
Keywords: analytical reasoning, complexity, data use, problem space, visual analytics, visual storytelling, translation
Procedia PDF Downloads 376
1207 The Moderating Impacts of Government Support on the Relationship Between Patient Acceptance and Telemedicine Adoption in Malaysia
Authors: Anyia Nduka, Aslan Bin Amad Senin, Ayu Azrin Binti Abdul Aziz
Abstract:
Telemedicine is a rapidly developing discipline with enormous promise for better healthcare outcomes for patients. To meet the demands of patients and the healthcare sector, medical providers must be proficient in telemedicine, and they also need government funding for infrastructure and core competencies. In this study, we surveyed general hospitals in Kuala Lumpur and Selangor to investigate patients' impressions of both the positive and negative aspects of government funding for telemedicine and its level of acceptance. The survey was designed in accordance with the Diffusion of Innovations (DOI) theory; the survey instruments were built in a Google Form and distributed to patients and every member of the medical team. The findings suggest a framework for categorizing patients' levels of technology use and acceptance, with practical implications for healthcare. We therefore recommend increasing technical assistance and government-backed funding of telemedicine to bolster the entire system.
Keywords: technology acceptance, quality assurance, digital transformation, cost management
Procedia PDF Downloads 77
1206 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations
Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos
Abstract:
Accurate information about the terrain is extremely important in disaster management activities or in conflict. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and a Digital Surface Model (DSM). This methodology allows this information, fundamental to understanding the terrain post-disaster, to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18m x 18m); HLZ 2 – medium size (23m x 23m); HLZ 3 – large size (28m x 28m); AZ (100m x 100m). The decision-tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees, each trained on a different random set of samples. The result of the classification from each tree for each object is called a class vote, and the final classification is decided by a majority of class votes.
In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ subsumes the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2), or small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller-area classifications, and so on. It was concluded that images obtained through small UAS are of great use in calamity situations, since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain that can serve as an important decision-support tool.
Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest
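The bootstrap-plus-majority-vote mechanism described above can be sketched in a few lines. This toy "forest" of threshold stumps on synthetic 1-D data is only an illustration of class votes, not the WEKA RF implementation used in the study; all data and parameters are invented:

```python
import random
from collections import Counter

random.seed(42)

# Synthetic 1-D dataset: points in [0, 1) labelled by a threshold rule.
data = [(random.random(),) for _ in range(200)]
labels = [int(x[0] > 0.5) for x in data]

def train_stump(xs, ys):
    """Pick the threshold that minimizes errors on a bootstrap sample."""
    best_t, best_err = 0.0, len(ys) + 1
    for t in [i / 20 for i in range(21)]:
        err = sum(int(x[0] > t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def forest(n_trees=25):
    """Train each 'tree' on its own bootstrap resample of the data."""
    stumps = []
    for _ in range(n_trees):
        idx = [random.randrange(len(data)) for _ in range(len(data))]
        stumps.append(train_stump([data[i] for i in idx],
                                  [labels[i] for i in idx]))
    return stumps

stumps = forest()

def predict(x, stumps):
    votes = Counter(int(x > t) for t in stumps)  # one class vote per tree
    return votes.most_common(1)[0][0]            # majority of class votes

print(predict(0.9, stumps), predict(0.1, stumps))  # → 1 0
```

Real random forests also subsample features at each split and use full decision trees; the point here is only how per-tree class votes aggregate into the final classification.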
Procedia PDF Downloads 177
1205 Learning the Most Common Causes of Major Industrial Accidents and Applying Best Practices to Prevent Such Accidents
Authors: Rajender Dahiya
Abstract:
Investigation outcomes of major process incidents have been consistent for decades and validate that the causes and consequences are often identical. The debate remains: why do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and to present industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, and manufacturing. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. One case study, concerning a safe-operating-limit excursion, provides a clear understanding of the importance of near-miss incident investigation. The case describes deficiencies in management programs, the competency of employees, and the culture of the corporation, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of their recurrence. Several investigations of major incidents discovered that each showed several warning signs before occurring, and most importantly, all were preventable. The author will discuss why preventable incidents were not prevented and review the common causes of failing to learn from past major incidents.
The leading causes of past incidents are summarized below. The first is management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials. This process starts early in the project stage and continues throughout the life cycle of the facility; for example, a poorly done hazard study such as a HAZID, PHA, or LOPA is one of the leading causes of failure. If this step is performed correctly, then the next potential cause is management failure to maintain the integrity of safety-critical systems and equipment. In most incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were either bypassed, disabled, or not maintained. The third major cause is management failure to learn from and/or apply the lessons of past incidents. There were several precursors before those incidents, and these precursors were either ignored altogether or not taken seriously. This paper will conclude by sharing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contributed to managing the risks to prevent major incidents.
Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention
Procedia PDF Downloads 57
1204 Self-Stigmatization of Deaf and Hard-of-Hearing Students
Authors: Nadezhda F. Mikahailova, Margarita E. Fattakhova, Mirgarita A. Mironova, Ekaterina V. Vyacheslavova, Vladimir A. Mikahailov
Abstract:
Stigma is a significant obstacle to the successful adaptation of deaf students to the conditions of an educational institution, especially for those who study in inclusion. The aims of the study were to identify the spheres of life that are most significant for the development of stigma in deaf students; to assess the influence of factors associated with deafness on the degree of their self-stigmatization (time and degree of hearing loss, type of education - inclusion / differentiation); and to find out who is more prone to stigma - which characteristics of personality, identity, mental health, and coping are specific to those deaf students who demonstrate stigmatizing attitudes. The study involved 154 deaf and hard-of-hearing students (85 male and 69 female) aged from 18 to 45 years: 28 students of the Herzen State Pedagogical University (St. Petersburg), who study in inclusion, and 108 students of the National Research Technological University and 18 students of the Aviation Technical College (Kazan), who study in groups with a sign language interpreter. We used the following methods: a modified questionnaire 'Self-assessment and coping strategies' (Jambor & Elliot, 2005), the Scale of Self-Esteem (Rosenberg et al., 1995), the 'Big Five' (Costa & McCrae, 1997), the TRF (Becker, 1989), the WCQ (Lazarus & Folkman, 1988), and a self-stigma scale (Mikhailov, 2008). The severity of self-stigmatization of deaf and hard-of-hearing students was determined by the degree of deafness and the time they had lived with hearing loss, learning conditions, the type of self-identification (acculturation), personality traits, and the specifics of coping behavior. Persons with congenital hearing loss more often noted a benevolent and sympathetic attitude towards them on the part of hearing people and, compared with late-deafened people, less often limited their visits to public places because of deafness, which indicates that they had worked through the experience of their impairment and normalized their state.
Students studying in conditions of inclusion more often noted a dismissive attitude of society towards deaf people. Individuals with mild to moderate hearing loss were more likely than students with profound hearing loss to fear marriage and childbearing because of their deafness. Those who considered themselves disabled (49% of all respondents) were more inclined to cope by seeking social support and used 'distancing' coping less. Those who believed that their quality of life and social opportunities were most influenced by society's attitude towards the deaf (39%) were distinguished by a less pronounced sense of self-worth, a desire for autonomy, and frequent use of 'avoidance' coping strategies. 36.4% of the respondents noted that there had been situations in their lives when people who learned that they were deaf began to treat them worse. These respondents had predominantly deaf acculturation, but they more often used 'bicultural skills', a coping strategy specific to the deaf, and had lower levels of extraversion and emotional stability. 31.2% of the respondents tried to hide from others that they have hearing problems; they considered themselves part of hearing culture, used the 'bicultural skills' coping strategy, and had lower levels of extraversion, cooperation, and emotional stability. Acknowledgment: Supported by the RFBR № 19-013-0040
Keywords: acculturation, coping, deafness, stigmatization
Procedia PDF Downloads 234
1203 A Model of the Adoption of Maritime Autonomous Surface Ship
Authors: Chin-Shan Lu, Yi-Pei Liu
Abstract:
This study examines the factors influencing the adoption of maritime autonomous surface ships (MASS) in Taiwan's shipping industry. Advancements in digital technology and unmanned vehicles have enhanced efficiency and reduced environmental impact in the shipping industry, and the IMO has set regulations to promote low-carbon emissions and autonomous ship technology. Using the TOE framework and DOI theory, a research model was constructed, and data from 132 Taiwanese shipping companies were collected via a questionnaire survey. Structural equation modeling (SEM) was conducted to examine the relationships between variables. The results show that technological and environmental factors significantly influence operators' attitudes toward MASS, while organizational factors impact their willingness to adopt. Enhancing technological support, internal resource allocation, top management support, and cost management are crucial for promoting adoption. This study identifies key factors and provides recommendations for adopting autonomous ships in Taiwan's shipping industry.
Keywords: MASS, technology-organization-environment, diffusion of innovations theory, shipping industry
Procedia PDF Downloads 24
1202 Text-Based Shuffling Algorithm on Graphics Processing Unit for Digital Watermarking
Authors: Zayar Phyo, Ei Chaw Htoon
Abstract:
In the New-LSB-based steganography method, the Fisher-Yates algorithm is used to permute an existing array randomly. However, that algorithm becomes slower and runs into memory overflow problems when processing images of large dimensions. The Text-Based Shuffling algorithm therefore aims to select only the necessary pixels, according to the length of the input text, as positions for hiding characters in an image. In this paper, an enhanced text-based shuffling algorithm is presented, powered by the GPU, to achieve better performance. The proposed algorithm employs the OpenCL Aparapi framework, along with an XORShift kernel serving as the pseudo-random number generator (PRNG); the PRNG produces random numbers inside the OpenCL kernel. Experiments carried out on a GPU show that the proposed algorithm achieves faster processing speed and better efficiency without disruption from unnecessary operating system tasks.
Keywords: LSB-based steganography, Fisher-Yates algorithm, text-based shuffling algorithm, OpenCL, XORShift kernel
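The idea of selecting only the necessary pixels can be sketched as a partial Fisher-Yates shuffle driven by an XORShift PRNG: instead of permuting the whole pixel array, only the first k slots are shuffled. This is a plain-Python sketch of the concept, not the paper's OpenCL/Aparapi implementation; the shift constants and seed are common illustrative choices, and modulo bias is ignored:

```python
def xorshift32(seed):
    """Minimal 32-bit XORShift PRNG (illustrative parameters)."""
    state = seed & 0xFFFFFFFF
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

def select_positions(n_pixels, n_needed, seed=2463534242):
    """Partial Fisher-Yates: shuffle only the first n_needed slots,
    picking exactly as many pixel positions as the text requires."""
    rng = xorshift32(seed)
    positions = list(range(n_pixels))
    for i in range(n_needed):
        j = i + (next(rng) % (n_pixels - i))
        positions[i], positions[j] = positions[j], positions[i]
    return positions[:n_needed]

# Hide a 5-character message in a 16-pixel image: need 5 positions.
chosen = select_positions(16, 5)
print(chosen)
```

Because the loop stops after `n_needed` iterations, the cost is proportional to the message length rather than the image size, which is exactly the inefficiency the text-based approach removes from the full Fisher-Yates shuffle.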
Procedia PDF Downloads 151
1201 Exploring the Applications of Modular Forms in Cryptography
Authors: Berhane Tewelday Weldhiwot
Abstract:
This research investigates the pivotal role of modular forms in modern cryptographic systems, particularly focusing on their applications in secure communications and data integrity. Modular forms, which are complex analytic functions with rich arithmetic properties, have gained prominence due to their connections to number theory and algebraic geometry. This study begins by outlining the fundamental concepts of modular forms and their historical development, followed by a detailed examination of their applications in cryptographic protocols such as elliptic curve cryptography and zero-knowledge proofs. By employing techniques from analytic number theory, the research delves into how modular forms can enhance the efficiency and security of cryptographic algorithms. The findings suggest that leveraging modular forms not only improves computational performance but also fortifies security measures against emerging threats in digital communication. This work aims to contribute to the ongoing discourse on integrating advanced mathematical theories into practical applications, ultimately fostering innovation in cryptographic methodologies.
Keywords: modular forms, cryptography, elliptic curves, applications, mathematical theory
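A toy example of the elliptic-curve arithmetic underlying the protocols mentioned above: point addition and scalar multiplication on the curve y^2 = x^3 + 2x + 3 over GF(97). The curve, base point, and scalars are invented for illustration; real elliptic curve cryptography uses standardized curves of cryptographic size, not small toy fields:

```python
# Toy curve y^2 = x^3 + 2x + 3 over GF(97) (illustrative only).
P_MOD = 97
A = 2

def ec_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = infinity
    if P == Q:       # tangent-line (doubling) slope
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:            # chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Scalar multiplication by double-and-add (the ECC one-way step)."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (0, 10)  # base point: 10^2 = 100 ≡ 3 = 0^3 + 2*0 + 3 (mod 97)

# Toy Diffie-Hellman-style exchange: both sides derive the same point.
alice_pub = ec_mul(7, G)
bob_pub = ec_mul(11, G)
shared_a = ec_mul(7, bob_pub)
shared_b = ec_mul(11, alice_pub)
print(shared_a == shared_b)  # → True
```

The security of the real protocols rests on the difficulty of reversing `ec_mul` (the elliptic curve discrete logarithm problem), which is trivial at this field size but intractable on standardized curves. Note that `pow(x, -1, m)` for modular inverses requires Python 3.8+.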
Procedia PDF Downloads 18
1200 Performance Evaluation of Various Displaced Left Turn Intersection Designs
Authors: Hatem Abou-Senna, Essam Radwan
Abstract:
With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety, two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise considerably reduces the efficiency of a conventional intersection (CI), and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on Crossover Displaced Left-turn (XDL) intersections, also known as Continuous Flow Intersections (CFIs). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether partial XDL intersections are adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate overall intersection performance under different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four different scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. Also, the impact of increasing volume on intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in five different traffic scenarios.
The study intersection, located in Orlando, Florida, is experiencing recurring congestion in the PM peak hour and is operating near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that a partial EN XDL alternative proved effective and compared favorably to a full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW, and EN XDL alternatives outperformed the NS XDL and CI alternatives with respect to throughput, delay, and queue lengths. Throughput improvements were remarkable at the higher volume level, with a percent increase in capacity of 25%. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths showed percent reductions in the XDL scenarios ranging from 25-40%. The analysis revealed how a partial XDL design can improve overall intersection performance at various demands, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes or only one of the critical movements, while other critical movements are operating near or above capacity, does not provide significant benefits when compared to the conventional intersection.
Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model
Procedia PDF Downloads 310
1199 Learning the History of a Tuscan Village: A Serious Game Using Geolocation Augmented Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
An important tool for the enhancement of cultural sites is serious games (SG), i.e., games designed for educational purposes; SGs are applied in cultural sites through trivia, puzzles, and mini-games for participation in interactive exhibitions, mobile applications, and simulations of past events. The combination of Augmented Reality (AR) and digital cultural content has also produced examples of cultural heritage recovery and revitalization around the world. Through AR, the user perceives the information of the visited place in a more real and interactive way. Another interesting technological development for the revitalization of cultural sites is the combination of AR and the Global Positioning System (GPS), which, integrated, can enhance the user's perception of reality by providing historical and architectural information linked to specific locations organized along a route. To the authors' best knowledge, there are currently no applications that combine GPS-based AR and SG for cultural heritage revitalization. The present research focused on the development of an SG based on GPS and AR. The study area is the village of Caldana in Tuscany, Italy. Caldana is a fortified Renaissance village; its most important architectures are the walls, the church of San Biagio, the rectory, and the marquis' palace. The historical information derives from extensive research by the Department of Architecture at the University of Florence. The storyboard of the SG is based on the history of the three characters who built the village: marquis Marcello Agostini, who was commissioned by Cosimo I de' Medici, Grand Duke of Tuscany, to build the village, his son Ippolito, and his architect Lorenzo Pomarelli. The three historical characters were modeled in 3D using the freeware MakeHuman and imported into Blender and Mixamo to associate a skeleton and blend shapes, so as to have gestural animations and reproduce lip movement during speech.
The Unity Rhubarb Lip Syncer plugin was used for the lip-sync animation. The historical costumes were created with Marvelous Designer. The application was developed using the Unity 3D graphics and game engine. The AR+GPS Location plugin was used to position the 3D historical characters based on GPS coordinates, and the ARFoundation library was used to display AR content. The SG is available in two versions: for children and for adults. The children's version consists of hunting for a digital treasure of valuable items and historical rarities. Players must find nine village locations where 3D AR models of historical figures explaining the history of the village provide clues. To stimulate players, there are three levels of rewards, one for every three clues discovered; the rewards consist of AR masks of an archaeologist, a professor, and an explorer. In the adult version, the SG consists of finding the 16 historical landmarks in the village and learning historical and architectural information in an interactive and engaging way. The application is being tested on a sample of adults and children; test subjects will be surveyed on a Likert scale to find out their perceptions of using the app and to compare the learning experience of the guided tour with that of interacting with the app.
Keywords: augmented reality, cultural heritage, GPS, serious game
Procedia PDF Downloads 95
1198 Impact of E-Commerce Logistics Service Quality on Online Customer Satisfaction in UAE
Authors: Leena Wanganoo
Abstract:
In this digital age, the mushrooming of online companies across the globe has led to an unprecedented new business model. The frequency of online purchasing varies across the globe, but the trend shows a steep upward movement. From Generation X to the Millennials, consumers not only want to order products with the click of a mouse but also demand high service quality from the pre- to the post-transaction stage. Existing research examines the impact of website quality on the behavioral intentions of e-service customers and has not adequately recognized the quality of e-commerce logistics as perceived by the customer. In order to address this gap, this study examines the relationships among logistics service quality, satisfaction, and loyalty. Drawing upon a sample of 350 millennial customers from various regions of the UAE, the study works within the framework of structural equation modeling (SEM). Finally, the study uses Importance-Performance Analysis (IPA) to discuss the relation between the level of logistics service quality customers expect and the level they perceive.
Keywords: logistics service quality, customer satisfaction, loyalty, electronic commerce
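The Importance-Performance Analysis mentioned in this abstract can be sketched in a few lines. The logistics service-quality attributes and their scores below are illustrative assumptions, not the study's actual items or data; the method simply places each attribute in a quadrant relative to the grand means of the two axes.

```python
# Hypothetical IPA sketch: attributes and 1-5 scores are invented for
# illustration; the study's items would come from its SEM measurement model.
attributes = {
    "on-time delivery":    {"importance": 4.6, "performance": 3.1},
    "order accuracy":      {"importance": 4.4, "performance": 4.3},
    "shipment tracking":   {"importance": 3.2, "performance": 4.1},
    "packaging condition": {"importance": 3.0, "performance": 2.8},
}

# Quadrant boundaries are the grand means of each axis.
imp_mean = sum(a["importance"] for a in attributes.values()) / len(attributes)
perf_mean = sum(a["performance"] for a in attributes.values()) / len(attributes)

def quadrant(imp, perf):
    """Classify one attribute into the classic four IPA quadrants."""
    if imp >= imp_mean and perf < perf_mean:
        return "Concentrate here"       # important but under-performing
    if imp >= imp_mean and perf >= perf_mean:
        return "Keep up the good work"
    if imp < imp_mean and perf >= perf_mean:
        return "Possible overkill"
    return "Low priority"

for name, a in attributes.items():
    print(name, "->", quadrant(a["importance"], a["performance"]))
```

With these invented scores, "on-time delivery" lands in the "Concentrate here" quadrant, which is exactly the kind of managerial reading IPA is used for.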
Procedia PDF Downloads 170
1197 Mobile Device Applications in Physical Education: Investigating New Pedagogical Possibilities
Authors: Danica Vidotto
Abstract:
Digital technology continues to disrupt and challenge local conventions of teaching and education. As mobile devices make their way into contemporary classrooms, educators need new pedagogies incorporating information and communication technology to help reform the learning environment. In physical education, however, this can seem controversial, as physical inactivity is often related to an excess of screen time. This qualitative research project investigates how physical educators use mobile device applications (apps) in their pedagogy and to what end. A comprehensive literature review is included to examine and engage current academic research on new pedagogies and technology and their relevance to physical activity. Data were collected through five semi-structured interviews, resulting in three overarching themes: i) changing pedagogies in physical education; ii) the perceived benefits and experienced challenges of using apps; and iii) apps, physical activity, and physical education. This study concludes with a discussion of the findings engaging the literature, the implications of the findings, and recommendations for future research.
Keywords: applications (apps), mobile devices, new pedagogies, physical education
Procedia PDF Downloads 193
1196 Reducing Flood Risk in a Megacity: Using Mobile Application and Value Capture for Flood Risk Prevention and Risk Reduction Financing
Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama
Abstract:
The megacity of Abidjan is a coastal urban area where the number of reported floods and the associated impacts are increasing rapidly due to climate change, uncontrolled urbanization, rapid population growth, a lack of flood disaster mitigation, and low citizen awareness. The objective of this research is to reduce, in both the short and the long term, the human and socio-economic impacts of flooding. Hydrological simulation is applied to freely available global spatial data (digital elevation model, satellite-based rainfall estimates, land use) to identify flood-prone areas and to map flood risk. Direct interviews with a sample of residents are used to validate the simulation results. A mobile application (Flood Locator) is then prototyped to disseminate the risk information to citizens. In addition, a value capture strategy is proposed to mobilize financial resources for a disaster risk reduction fund (DRRf) to reduce the impact of flooding. The town of Cocody in Abidjan is selected as the case study area for this research. The mapping of flood risk reveals that the population living in the study area is highly vulnerable. For a 5-year flood, more than 60% of the floodplain is affected by a water depth of at least 0.5 meters, and more than 1000 ha with at least 5000 buildings are directly exposed. The risk becomes higher for 50- and 100-year floods. The interviews also reveal that the majority of citizens are not aware of the risk and severity of flooding in their community. This shortage of information is overcome by Flood Locator and by an urban flood database we prototype to accumulate flood data. The Flood Locator app allows users to view the floodplain and flood depth on a digital map; the user can activate the GPS sensor of the mobile device to visualize his or her location on the map. Additional features allow citizen users to capture flood events and damage information and send them remotely to the database.
The disclosure of risk information could also result in a decrease (-14%) of the value of properties located inside the floodplain and an increase (+19%) of the value of properties in the suburban area. The tax increment generated by the higher property values in the safer area should be captured to constitute the DRRf. The fund should be allocated to the reduction of flood risk for the benefit of people living in flood-prone areas. The flood prevention system discussed in this research will minimize, in the short and long term, the direct damages in the risky area thanks to effective citizen awareness and the availability of the DRRf. It will also contribute to the growth of the urban area in the safer zone and reduce human settlement in the risky area in the long term. Data accumulated in the urban flood database through the warning app will contribute to regenerating Abidjan into a more resilient city by means of risk-avoiding land use in the master plan.
Keywords: Abidjan, database, flood, geospatial techniques, risk communication, smartphone, value capture
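The value-capture arithmetic above can be sketched as follows. The -14% and +19% value changes come from the abstract; the property-value totals and the tax rate are hypothetical assumptions introduced purely to make the calculation concrete.

```python
# Illustrative tax-increment calculation. Only the percentage changes
# (-14% floodplain, +19% safer suburb) come from the study; the base
# values and the tax rate below are invented assumptions.
TAX_RATE = 0.015  # assumed annual property tax rate

def tax_increment(base_value, pct_change, tax_rate=TAX_RATE):
    """Extra (or lost) annual tax revenue after risk disclosure."""
    return base_value * pct_change * tax_rate

suburb_base = 500e6      # assumed taxable property value, safer area
floodplain_base = 300e6  # assumed taxable property value, floodplain

gain = tax_increment(suburb_base, +0.19)    # captured for the DRRf
loss = tax_increment(floodplain_base, -0.14)

# Only the increment captured in the safer area feeds the DRR fund.
print(f"annual DRR fund: {gain:,.0f}; net fiscal change: {gain + loss:,.0f}")
```

Under these assumed figures the captured increment is 1,425,000 per year, illustrating how disclosure-driven value shifts could finance risk reduction.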
Procedia PDF Downloads 290
1195 Revealing Celtic and Norse Mythological Depths through Dragon Age’s Tattoos and Narratives
Authors: Charles W. MacQuarrie, Rachel R. Tatro Duarte
Abstract:
This paper explores the representation of medieval identity within games such as Dragon Age, Elden Ring, and Hellblade: Senua's Sacrifice, fantasy role-playing games that draw effectively and problematically on Celtic and Norse mythologies. Focusing on tattoos, onomastics, and accent as visual and oral markers of status and ethnicity, this study analyzes how the games' interplay between mythology, character narratives, and visual storytelling enriches their themes and offers players an immersive, but sometimes baldly ahistorical, connection between ancient mythologies and contemporary digital storytelling. Dragon Age, a triple-A game series, together with Hellblade: Senua's Sacrifice and Elden Ring, captivates gamers worldwide with its presentation of an idealized medieval world inspired by the lore of Celtic and Norse mythologies. This paper sets out to explore the intricate relationships between tattoos, accent, and character narratives in the games, drawing parallels to themes, heroic figures, and gods from Celtic and Norse mythologies. Tattoos as Mythic and Ethnic Markers: This study analyzes how tattoos in Dragon Age visually represent mythological elements from both Celtic and Norse cultures, serving as conduits of cultural identity and narratives. The nature of these tattoos reflects the slave, criminal, and warrior associations made in classical and medieval literature, and some of the episodes concerning tattoos in the games have either close analogs or sources in literature. For example, the elvish character Solas, in Dragon Age: Inquisition, removes a slave tattoo from the face of a lower-status elf in an episode reminiscent of Bridget removing the stigmata from Connallus in the Vita Prima of Saint Bridget. Character Narratives: The paper examines how characters' personal narratives in the games parallel the archetypal journeys of Celtic heroes and Norse gods, with a focus on their relationships to mythic themes.
In these games the Elves usually have Welsh or Irish accents, are close to nature, magically powerful, oppressed by apparently Anglo-Saxon humans and Norse dwarves, and wear facial tattoos. The tradition of giving fairies and demons Welsh voices is older than the reference in Shakespeare's Merry Wives of Windsor or even the Anglo-Saxon Life of Saint Guthlac; the English-speaking world, and the fantasy genre of literature and gaming, undoubtedly driven by Tolkien, see Elves as Welsh speakers and as having Welsh accents when speaking English. Comparative Analysis: A comparative approach is employed to reveal connections, adaptations, and unique interpretations of the motifs of tattoos and narrative themes in Dragon Age compared to those found in Celtic and Norse mythologies. Methodology: The study uses a comparative approach to examine the similarities and distinctions between Celtic and Norse mythologies and their counterparts in video games. The analysis encompasses character studies, narrative exploration, visual symbolism, and the historical context of Celtic and Norse cultures. Mythic Visuals: This study showcases how tattoos, as visual symbols, encapsulate mythic narratives, beliefs, and cultural identity, echoing Celtic and Norse visual motifs. Archetypal Journeys: The paper analyzes how character arcs mirror the heroic journeys of Celtic and Norse mythological figures, allowing players to engage with mythic narratives on a personal level. Cultural Interplay: The study discusses how the games' portrayal of tattoos and narratives both preserves and reinterprets elements from Celtic and Norse mythologies, fostering a connection between ancient cultures and modern digital storytelling. Conclusion: By exploring the interconnectedness of tattoos and character narratives in Dragon Age, this paper reveals the game series' ability to act as a bridge between ancient mythologies and contemporary gaming.
By drawing inspiration from Celtic heroes and Norse gods and translating them into digital narratives and visual motifs, Dragon Age offers players a multi-dimensional engagement with mythic themes and a unique lens through which to appreciate the enduring allure of these cultures.
Keywords: comparative analysis, character narratives, video games and literature, tattoos, immersive storytelling, character development, mythological influences, Celtic mythology, Norse mythology
Procedia PDF Downloads 73
1194 Design of a Telemetry, Tracking, and Command Radio-Frequency Receiver for Small Satellites Based on Commercial Off-The-Shelf Components
Authors: A. Lovascio, A. D’Orazio, V. Centonze
Abstract:
For several years now, the aerospace industry has been developing more and more small satellites for Low-Earth Orbit (LEO) missions. Such satellites have low making and launching costs since their size and weight are smaller than those of other types of satellites. However, because of size limitations, small satellites need integrated electronic equipment based on digital logic. Moreover, LEO missions require telecommunication modules with high throughput to transmit a large amount of data to earth in a short time. In order to meet such requirements, in this paper we propose a Telemetry, Tracking & Command module optimized through the use of Commercial Off-The-Shelf components. The proposed approach exploits the great flexibility offered by these components to reduce costs and optimize performance. The method has been applied in detail to the design of the front-end receiver, which has a low noise figure (1.5 dB) and low DC power consumption (below 2 W). Such performance is particularly attractive since it allows fulfilling the stringent energy-budget constraints that are typical of LEO small platforms.
Keywords: COTS, LEO, small-satellite, TT&C
Procedia PDF Downloads 131
1193 On-Line Impulse Buying and Cognitive Dissonance: The Moderating Role of the Positive Affective State
Authors: G. Mattia, A. Di Leo, L. Principato
Abstract:
Purchase impulsiveness is preceded by a lack of self-control; consequently, it is legitimate to believe that a consumer with a low level of self-control faces a higher probability of cognitive dissonance. Moreover, the purchase process is considerably influenced by the pre-existing affective state. With reference to on-line purchases, digital behavior cannot be merely ascribed to the rational sphere, given the speed and ease of transactions and the hedonistic dimension of purchases. To our knowledge, this research is among the first to verify the moderating effect of a positive affective state on the occurrence of cognitive dissonance in the on-line impulse purchase of products with a high expressive value, such as a smartphone. To this aim, a moderation analysis was conducted on a sample of 212 impulsive millennial buyers. Three scales were adopted to measure the constructs of interest: IBTS for impulsivity, PANAS for the affective state, and Sweeney's scale for cognitive dissonance. The analysis revealed that a positive affective state does not affect the onset of cognitive dissonance.
Keywords: cognitive dissonance, impulsive buying, online shopping, online consumer behavior
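A moderation analysis of this kind boils down to testing an interaction term in a regression. The sketch below uses simulated data (not the authors' sample; only the sample size of 212 is taken from the abstract), generated with no interaction effect so that the fitted interaction coefficient comes out near zero, mirroring the reported finding.

```python
import numpy as np

# Hypothetical data for a moderation model: does positive affect (M)
# moderate the impulsivity (X) -> cognitive dissonance (Y) relationship?
rng = np.random.default_rng(1)
n = 212                              # sample size from the abstract
impulsivity = rng.normal(size=n)     # IBTS-like score (standardized)
affect = rng.normal(size=n)          # PANAS positive-affect score
# Generate Y with NO interaction effect, mirroring the study's result.
dissonance = 0.5 * impulsivity + 0.1 * affect + rng.normal(scale=0.5, size=n)

# Moderation model: Y = b0 + b1*X + b2*M + b3*(X*M); b3 is the test.
X = np.column_stack([np.ones(n), impulsivity, affect, impulsivity * affect])
coef, *_ = np.linalg.lstsq(X, dissonance, rcond=None)
b0, b1, b2, b3 = coef
print(f"interaction coefficient b3 = {b3:.3f}")  # near zero: no moderation
```

In practice one would also test b3's significance (e.g., via its standard error or bootstrapping), but the near-zero estimate already illustrates the "no moderation" conclusion.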
Procedia PDF Downloads 154
1192 Investigation of Chip Formation Characteristics during Surface Finishing of HDPE Samples
Authors: M. S. Kaiser, S. Reaz Ahmed
Abstract:
Chip formation characteristics are investigated during surface finishing of high-density polyethylene (HDPE) samples using a shaper machine. Both the cutting speed and the depth of cut are varied continually to enable observations under various machining conditions. The generated chips are analyzed in terms of their shape, size, and deformation. Their physical appearance is also observed using a digital camera and an optical microscope. The investigation shows that continuous chips are obtained for all cutting conditions. It is observed that cutting speed is more influential than depth of cut in causing dimensional changes of the chips. The chip curl radius is also found to increase gradually with cutting speed. The length of the continuous chips always remains smaller than the job length, and the corresponding discrepancies are more prominent at lower cutting speeds. Microstructures of the chips reveal that cracks form at higher cutting speeds and depths of cut, an effect that is less significant at low depths of cut.
Keywords: HDPE, surface-finishing, chip formation, deformation, roughness
Procedia PDF Downloads 146
1191 Key Performance Indicators and the Model for Achieving Digital Inclusion for Smart Cities
Authors: Khalid Obaed Mahmod, Mesut Cevik
Abstract:
The term smart city has appeared recently, accompanied by many definitions and concepts; as a simplified and clear definition, it can be said that a smart city is a geographical location that has gained efficiency and flexibility in providing public services to citizens through its use of information and communication technologies, and this is what distinguishes it from other cities. Smart cities connect the various components of the city through main networks and sub-networks, in addition to a set of applications, and are thus able to collect the data that forms the basis for providing technological solutions to manage resources and provide services. The basis of the work of the smart city is the use of artificial intelligence and Internet of Things technology. The work presents the concept of smart cities; the pillars, standards, and evaluation indicators on which smart cities depend; and the reasons that prompted the world to move towards their establishment. It also provides a simplified hypothetical way to measure the ideal smart city model by defining some indicators and key pillars, simulating them with logic circuits, and testing them to determine whether the city can be considered an ideal smart city or not.
Keywords: factors, indicators, logic gates, pillars, smart city
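The logic-circuit idea described above can be sketched in software. The pillar names, indicator set, and gate wiring below are illustrative assumptions, not the paper's actual model: each pillar is a gate over binary indicators, and the "ideal smart city" output is the AND of all pillar gates.

```python
# Hypothetical simplified model of the paper's approach: pillars are
# logic gates over binary indicators; names and wiring are illustrative.

def AND(*inputs):
    return all(inputs)

def OR(*inputs):
    return any(inputs)

def is_ideal_smart_city(ind):
    governance = AND(ind["e_services"], ind["open_data"])
    mobility = AND(ind["smart_transport"], ind["iot_sensors"])
    environment = OR(ind["renewables"], ind["waste_mgmt"])
    # The city qualifies only when every pillar's gate outputs True.
    return AND(governance, mobility, environment)

city = {
    "e_services": True, "open_data": True,
    "smart_transport": True, "iot_sensors": True,
    "renewables": False, "waste_mgmt": True,
}
print(is_ideal_smart_city(city))  # True: every pillar gate is satisfied
```

Flipping any indicator feeding an AND gate (say, open_data) drops the whole output to False, which is the pass/fail behavior the paper tests with logic circuits.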
Procedia PDF Downloads 150
1190 The Reach of Shopping Center Layout Form on Subway Based on Kernel Density Estimate
Authors: Wen Liu
Abstract:
With the rapid progress of modern cities, railway construction has been developing quickly in China. In a typically high-density country, the shopping center on the subway is an important factor in the process of urban development. This paper discusses the influence of the layout of shopping centers on the subway and places it on the temporal and spatial axes of Shanghai's urban development. We use digital technology to establish a database of relevant information and then derive the changing role of shopping centers on the subway in Shanghai by kernel density estimation. The results show that the development of shopping centers on the subway is related to local economic strength, population size, policy support, and city construction, and that the suburbanization trend of shopping centers is increasingly significant. Through this case research, we can see that kernel density estimation is an efficient analysis method for spatial layout: it reveals the essential characteristics of the layout form of shopping centers on the subway, and it can also be applied to other research on spatial form.
Keywords: Shanghai, shopping center on the subway, layout form, kernel density estimate
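The kernel density workflow can be sketched as follows. The coordinates are randomly generated stand-ins (a dense core cluster plus suburban outliers), not the Shanghai data; the point is to show how a 2D KDE surface exposes concentration patterns in a point layout.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical planar coordinates (km) of subway shopping centers:
# a cluster near the city core plus a smaller suburban cluster.
rng = np.random.default_rng(0)
core = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(40, 2))
suburb = rng.normal(loc=[8.0, 8.0], scale=2.0, size=(10, 2))
points = np.vstack([core, suburb])

# Fit a 2D Gaussian kernel density estimate over the point pattern.
kde = gaussian_kde(points.T)  # gaussian_kde expects shape (dims, n)

# Evaluate the density surface on a grid to locate concentration peaks
# (this grid is what one would contour-plot on a map of the city).
xs, ys = np.mgrid[-5:12:100j, -5:12:100j]
density = kde(np.vstack([xs.ravel(), ys.ravel()]))

# Density should peak at the core cluster, not the suburban one.
print(kde([[0.0], [0.0]])[0] > kde([[8.0], [8.0]])[0])
```

Comparing density surfaces estimated for successive years is one way to quantify the suburbanization trend the paper reports: the suburban peak would grow relative to the core over time.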
Procedia PDF Downloads 315
1189 A Socio-Technical Approach to Cyber-Risk Assessment
Authors: Kitty Kioskli, Nineta Polemi
Abstract:
Evaluating the level of cyber-security risk within an enterprise is most important for protecting its information system, services, and all its digital assets against security incidents (e.g., accidents, malicious acts, massive cyber-attacks). The existing risk assessment methodologies (e.g., eBIOS, OCTAVE, CRAMM, NIST-800) adopt a technical approach, considering as attack factors only the capability, intention, and target of the attacker, and paying no attention to the attacker's psychological profile and personality traits. In this paper, a socio-technical approach to cyber risk assessment is proposed in order to achieve more realistic risk estimates by considering the personality traits of the attackers. In particular, based upon principles from investigative psychology and behavioural science, a multi-dimensional, extended, quantifiable model of an attacker's profile is developed, which becomes an additional factor in the cyber risk level calculation.
Keywords: attacker, behavioural models, cyber risk assessment, cybersecurity, human factors, investigative psychology, ISO27001, ISO27005
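A minimal sketch of the general idea, under loudly stated assumptions: the trait names, their weights, and the way the profile factor scales a classical likelihood-times-impact estimate are all invented for illustration and are not the authors' calibrated model.

```python
# Hedged illustration: extend a classical risk estimate with a quantified
# attacker-profile factor. Trait names and weights are assumptions.
TRAIT_WEIGHTS = {"risk_seeking": 0.4, "persistence": 0.35, "impulsivity": 0.25}

def attacker_profile_factor(traits):
    """Weighted average of normalized (0-1) personality-trait scores."""
    return sum(TRAIT_WEIGHTS[t] * traits[t] for t in TRAIT_WEIGHTS)

def cyber_risk(likelihood, impact, traits):
    """Technical risk scaled by the socio-technical attacker factor."""
    return likelihood * impact * (1 + attacker_profile_factor(traits))

aggressive = {"risk_seeking": 0.9, "persistence": 0.8, "impulsivity": 0.7}
cautious = {"risk_seeking": 0.2, "persistence": 0.3, "impulsivity": 0.1}

# Same technical exposure, different attacker psychology, different risk.
print(cyber_risk(0.5, 8.0, aggressive), cyber_risk(0.5, 8.0, cautious))
```

The point is structural: the attacker profile enters the risk calculation as a multiplicative factor, so two assets with identical technical exposure can still be ranked differently depending on who is likely to attack them.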
Procedia PDF Downloads 165
1188 Overcoming the Challenges of Subjective Truths in the Post-Truth Age Through a CriticalEthical English Pedagogy
Authors: Farah Vierra
Abstract:
Following the 2016 US presidential election and the advancement of the Brexit referendum, the concept of "post-truth", defined by the Oxford Dictionary as "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief", came into prominent use in public, political, and educational circles. What this essentially entails is that individuals are increasingly confronted with subjective perpetuations of truth in their discourse spheres, informed by beliefs and opinions as opposed to any coherence with the reality of those whom these truth claims concern. In principle, a subjective delineation of truth is progressive and liberating, especially considering its potential to give marginalised groups in the diverse communities of our globalised world the voice to articulate truths representative of themselves and their experiences. However, any form of human flourishing that seems to be promised here collapses, as the tenets of subjective truth initially in place to liberate have been distorted through post-truth to allow individuals to purport selective and individualistic truth claims that further oppress and silence certain groups within society without due accountability. Evidence of this is prevalent in the conception of terms such as "alternative facts" and "fake news", which individuals declare when their problematic truth claims are questioned.
Considering the pervasiveness of post-truth and the ethical issues that accompany it, educators and scholars alike have increasingly noted the need to adapt educational practices and pedagogies to account for the diminishing objectivity of truth in the twenty-first century, especially because students, as digital natives, find themselves in the firing line of post-truth: engulfed in digital societies that proliferate post-truth through the surge of truth claims allowed on various media sites. In an attempt to equip students with the vital skills to navigate the post-truth age and oppose its proliferation of social injustices, English educators find themselves having to devise instructional strategies that not only teach students how to critically and ethically scrutinise truth claims but also teach them to mediate the subjectivity of truth in a manner that does not undermine the voices of diverse communities. In hopes of providing educators with a roadmap to do so, this paper first examines the challenges that confront students as a result of post-truth. Following this, the paper elucidates the role English education can play in helping students overcome the complex ramifications of post-truth. Scholars have consistently touted the affordances of literary texts in providing students with imagined spaces to explore societal issues through a critical discernment of language and an ethical engagement with its narrative developments. Therefore, this paper explains and demonstrates how literary texts, when used alongside a critical-ethical post-truth pedagogy that equips students with interpretive strategies informed by literary traditions such as literary and ethical criticism, can be effective in helping students develop the pertinent skills to comprehensively examine truth claims and overcome the challenges of the post-truth age.
Keywords: post-truth, pedagogy, ethics, English, education
Procedia PDF Downloads 72
1187 Data Gathering and Analysis for Arabic Historical Documents
Authors: Ali Dulla
Abstract:
This paper introduces a new dataset (and the methodology used to generate it) based on a wide range of historical Arabic documents containing clean data and simple, homogeneous page layouts. The experiments are implemented on printed and handwritten documents obtained from some important libraries such as the Qatar Digital Library, the British Library, and the Library of Congress. We have gathered and annotated 150 archival document images from different locations and time periods, based on different documents from the 17th to the 19th century. The dataset comprises differing page layouts and degradations that challenge text line segmentation methods. Ground truth is produced using the Aletheia tool by PRImA and stored in an XML representation, in the PAGE (Page Analysis and Ground truth Elements) format. The dataset presented will be easily available to researchers worldwide for research into the obstacles facing various historical Arabic documents, such as the geometric correction of historical Arabic documents.
Keywords: dataset production, ground truth production, historical documents, arbitrary warping, geometric correction
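Consuming PAGE-format ground truth of the kind Aletheia produces can be sketched with the standard library alone. The fragment below is a hand-made, minimal PAGE-style example (real files carry the full PRImA schema); the parser matches tags namespace-agnostically so it does not depend on a particular schema version.

```python
import xml.etree.ElementTree as ET

# A minimal, hand-made PAGE-style ground-truth fragment for illustration;
# real Aletheia output follows the full PRImA PAGE schema.
page_xml = """<?xml version="1.0"?>
<PcGts xmlns="http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15">
  <Page imageFilename="doc_001.png" imageWidth="1200" imageHeight="1800">
    <TextRegion id="r1">
      <TextLine id="l1"><Coords points="10,20 400,20 400,60 10,60"/></TextLine>
      <TextLine id="l2"><Coords points="10,80 400,80 400,120 10,120"/></TextLine>
    </TextRegion>
  </Page>
</PcGts>"""

def local(tag):
    """Strip the XML namespace: '{...}TextLine' -> 'TextLine'."""
    return tag.rsplit("}", 1)[-1]

root = ET.fromstring(page_xml)
lines = []
for el in root.iter():
    if local(el.tag) == "TextLine":
        coords = next(c for c in el if local(c.tag) == "Coords")
        pts = [tuple(map(int, p.split(","))) for p in coords.get("points").split()]
        lines.append((el.get("id"), pts))

print(len(lines), "text lines;", lines[0][0], "starts at", lines[0][1][0])
```

Extracting text-line polygons this way is the usual first step when benchmarking a segmentation method against PAGE ground truth: predicted line polygons are compared against these reference coordinates.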
Procedia PDF Downloads 168
1186 'iTheory': Mobile Way to Music Fundamentals
Authors: Marina Karaseva
Abstract:
The beginning of our century marked a new digital epoch in education. In the last decade, the newest stage of this process was initiated by touch-screen mobile devices and the program applications written for them. The touch possibilities for learning the fundamentals of music are of special importance for music majors. The phenomenon of touching, firstly, makes it realistic to play on the screen as on a musical instrument and, secondly, helps students learn music theory while listening to its sound elements by ear. Nowadays we can detect several levels of such mobile applications: from basic ones devoted to elementary music training, such as interval and chord recognition, to more advanced applications dealing with the perception of music beyond the major and minor modes, ethnic timbres, and complicated rhythms. The main purpose of the proposed paper is to disclose the main tendencies in this process and to demonstrate the most innovative features of music theory applications on the iOS and Android systems, as the most commonly used. Methodological recommendations on how to use this digital material musicologically are given for professional music education at different levels. These recommendations are based on the author's more than ten years of 'iTheory' teaching experience. In this paper, we try to logically classify all types of 'iTheory' mobile applications into several groups according to their methodological goals. The general concepts given below are demonstrated with concrete examples. The most numerous group of programs consists of simulators for studying notes through audio-visual links. The link-pair types are as follows: sound — musical notation (which may be used as flashcards, as for studying words and letters), sound — key, sound — string (basically, the guitar's). The second large group consists of test programs containing a game component.
As a rule, they are based on exercises in ear identification and reconstruction by voice: sounds and intervals (harmonic and melodic) by their sounding, musical modes, rhythmic patterns, chords, and selected instrumental timbres. Some programs are aimed at establishing aural connections between concepts of music theory and their musical embodiments. There are also programs focused on developing operative musical memory (with repetition of sounding phrases and their transposition to a new pitch), as well as on perfect-pitch training. In addition, a number of programs for improvisation skills have been developed. The absolute-pitch system of solmization is the common basis for mobile programs; however, it is also possible to find programs focused on the relative-pitch system of solfège. In the App Store and Google Play online stores there are also many free simulators of musical instruments: piano, guitars, celesta, violin, organ. These programs may be effective for individual and group exercises in ear training or composition classes. The great variety and good sound quality of these programs now give musicians a unique opportunity to master their musical abilities in a shorter time. That is why such teaching material may be a way to the effective study of music theory.
Keywords: ear training, innovation in music education, music theory, mobile devices
Procedia PDF Downloads 205
1185 Implementation of ALD in Product Development: Study of ROPS to Improve Energy Absorption Performance Using Absorption Part
Authors: Zefry Darmawan, Shigeyuki Haruyama, Ken Kaminishi
Abstract:
Product development is a major issue in industrial competition and plays a serious part in the development of technology. The product development process must adapt to rapid changes in market needs and transform them into engineering concepts in order to produce a high-quality product. One of the latest methods in product development is Analysis-Led Design (ALD). It utilizes digital engineering design tools with finite element analysis to perform robust product analysis, and it is valuable for product reliability assurance. Heavy machinery that operates under severe conditions should keep the operator safe when faced with a potential hazard: the cab frame should be able to absorb energy during a collision. Through ALD, a series of improvements to the cab frame to increase energy absorption was made and analyzed. Improvements were made by modifying the shapes of the frame and/or installing an absorption device in certain areas. Simulation results showed that installing an absorption device increases the absorbed energy more than modifying the shape.Keywords: ALD, ROPS, energy absorption, cab frame
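The absorbed energy compared in such an analysis is, in essence, the area under the force-displacement curve obtained from the collision simulation. A minimal sketch of computing it by trapezoidal integration (the sample data below are made up for illustration, not the authors' model output):

```python
def absorbed_energy(displacement_m, force_n):
    """Energy absorbed by the structure: trapezoidal integral of force over displacement.

    displacement_m: monotonically increasing crush displacements in metres.
    force_n:        reaction forces in newtons at the same sample points.
    Returns energy in joules.
    """
    if len(displacement_m) != len(force_n):
        raise ValueError("displacement and force samples must align")
    energy = 0.0
    for i in range(1, len(displacement_m)):
        dx = displacement_m[i] - displacement_m[i - 1]
        energy += 0.5 * (force_n[i] + force_n[i - 1]) * dx
    return energy

# Illustrative (made-up) samples for a baseline frame and a variant with an absorber:
baseline = absorbed_energy([0.0, 0.05, 0.10, 0.15], [0.0, 40e3, 55e3, 60e3])
with_absorber = absorbed_energy([0.0, 0.05, 0.10, 0.15], [0.0, 60e3, 70e3, 72e3])
print(baseline, with_absorber)
```

In an ALD workflow this post-processing step would be fed directly by the finite element solver's output, so each design variant can be ranked by the energy it absorbs.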
Procedia PDF Downloads 371
1184 Implementation of Elliptic Curve Cryptography Encryption Engine on a FPGA
Authors: Mohamad Khairi Ishak
Abstract:
Conventional public-key cryptosystems such as RSA (Ron Rivest, Adi Shamir and Leonard Adleman), DSA (Digital Signature Algorithm), and ElGamal are no longer efficient to implement in small, memory-constrained devices. Elliptic Curve Cryptography (ECC), which allows a smaller key length than conventional public-key cryptosystems, has thus become a very attractive choice for many applications. This paper describes the implementation of an ECC encryption engine on an FPGA. The system has been implemented with two different key sizes, 131 bits and 163 bits, and area and timing analyses are provided for both for comparison. The cryptosystem, implemented on Altera's EPF10K200SBC600-1, has a hardware size of 5945/9984 logic cells for the 131-bit implementation and 6913/9984 logic cells for the 163-bit implementation. It operates at up to 43 MHz and performs a point multiplication in 11.3 ms for the 131-bit implementation and 14.9 ms for the 163-bit implementation. In terms of speed, the cryptosystem is about 8 times faster than a software implementation of the same system.Keywords: elliptic curve cryptography, FPGA, key sizes, memory
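For readers unfamiliar with the point multiplication that dominates the timing figures above, a minimal software sketch may help. The parameters below are assumptions for illustration (a toy prime-field curve, far smaller than the 131/163-bit curves the paper's hardware implements); the double-and-add loop is the operation the FPGA engine accelerates:

```python
# Toy curve y^2 = x^3 + A*x + B over GF(P); illustrative parameters only.
P = 97
A, B = 2, 3

def point_add(p1, p2):
    """Add two affine points on the curve; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # Q + (-Q) = point at infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope (doubling)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope (addition)
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication: the engine's core operation."""
    result = None
    addend = point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

G = (0, 10)  # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
print(scalar_mult(5, G))
```

A hardware engine replaces these modular inversions and multiplications with dedicated field-arithmetic units (commonly over binary fields for key sizes such as 131 and 163 bits), which is where the reported speedup over software comes from.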
Procedia PDF Downloads 323