Search results for: digital libraries
1545 Developing Cyber Security Asset Management Framework for UK Rail
Authors: Shruti Kohli
Abstract:
The sophistication and pervasiveness of cyber-attacks are constantly growing, driven partly by technological progress, profitable applications in organized crime, and state-sponsored innovation. The modernization of rail control systems has resulted in an increasing reliance on digital technology and increased the potential for security breaches and cyber-attacks. This research track showcases the need for developing a secure, reusable, scalable framework for enhancing the cyber security of rail assets. A cyber security framework is proposed and is being developed to detect the tell-tale signs of cyber-attacks against industrial assets.
Keywords: cyber security, rail asset, security threat, cyber ontology
Procedia PDF Downloads 431
1544 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed from the process is corrected for lens distortion, topographic relief, and camera tilt. This can be used to measure true distances, because it is an accurate representation of the Earth’s surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired at the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified with the same co-ordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before performing the ortho-rectification process. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area. This can then be used for comparison with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters for each video frame with respect to each other; (4) finding the absolute exterior orientation parameters, using self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric mapping, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria, was used for testing our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more reliable than those from original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
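A minimal sketch of step (1), the decompilation of the video stream into individual frames, using OpenCV in Python. The file name and sampling rate are illustrative assumptions; steps (2)-(4) are specific to the authors' photogrammetric workflow and are only indicated by comments.

```python
import cv2

def decompile_video(path: str, every_n: int = 30):
    """Yield every n-th frame of the video as a NumPy array."""
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            yield index, frame
        index += 1
    cap.release()

for idx, frame in decompile_video("uav_survey.mp4"):   # file name is an assumption
    cv2.imwrite(f"frame_{idx:05d}.png", frame)
    # Steps (2)-(4) would follow here: interior orientation from camera
    # calibration, relative orientation between consecutive frames, and
    # absolute orientation via a self-calibrating adjustment.
```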
Procedia PDF Downloads 479
1543 The Construction of Multilingual Online Gaming Community
Authors: Dina Alnefaie
Abstract:
This poster presents a study of a Discord private server with thirteen multilingual gamers, aiming to explore the elements that construct a multilingual online gaming community. The study focuses on the communication practices of four Saudi female and male gamers, using various data collection methods over one year, including online observations through recorded videos and screenshots, interviews, and informal conversations. The primary findings show that translanguaging was a prominent feature of their verbal and textual communication practices. In addition, these practices, which mostly accompany cultural ones, were used to facilitate communication and express their identities in an intercultural context.
Keywords: online community construction, perceptions, multilingualism, digital identity
Procedia PDF Downloads 86
1542 Landsat Data from Pre Crop Season to Estimate the Area to Be Planted with Summer Crops
Authors: Valdir Moura, Raniele dos Anjos de Souza, Fernando Gomes de Souza, Jose Vagner da Silva, Jerry Adriani Johann
Abstract:
The estimate of the area of land to be planted with annual crops and its stratification by municipality are important variables in crop forecasting. Nowadays in Brazil, this information is obtained by the Brazilian Institute of Geography and Statistics (IBGE) and published in the report Assessment of the Agricultural Production. Due to the high cloud cover in the main crop growing season (October to March), it is difficult to acquire good orbital images. Thus, one alternative is to work with remote sensing data from dates before the crop growing season. This work presents the use of multitemporal Landsat data gathered in July and September (before the summer growing season) in order to estimate the area of land to be planted with summer crops in an area of São Paulo State, Brazil. Geographic Information Systems (GIS) and digital image processing techniques were applied for the treatment of the available data. Supervised and non-supervised classifications were used for data in digital number and reflectance formats and for the multitemporal Normalized Difference Vegetation Index (NDVI) images. The objective was to discriminate the tracts with a higher probability of being planted with summer crops. Classification accuracies were evaluated using a sampling system developed specifically for this study region. The estimated areas were corrected using the error matrix derived from these evaluations. The classification techniques showed excellent agreement according to the kappa index. The proportion of crops stratified by municipality was derived from field work during the crop growing season. These proportion coefficients were applied to the area of land to be planted with summer crops (derived from Landsat data). Thus, it was possible to derive the area of each summer crop by municipality. The discrepancies between official statistics and our results were attributed to the sampling and the stratification procedures. Nevertheless, this methodology can be improved in order to provide good crop area estimates using remote sensing data, despite the cloud cover during the growing season.
Keywords: area intended for summer culture, estimated area planted, agriculture, Landsat, planting schedule
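A minimal sketch of computing the NDVI mentioned above from a Landsat scene, using rasterio and NumPy. The file name and band order are illustrative assumptions (red and near-infrared are bands 3 and 4 for Landsat TM/ETM+; other products differ).

```python
import numpy as np
import rasterio

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype("float32")
    nir = nir.astype("float32")
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / denom)

with rasterio.open("landsat_july.tif") as src:   # file name is an assumption
    red, nir = src.read(3), src.read(4)          # band order depends on the product

print("mean July NDVI:", float(ndvi(red, nir).mean()))
```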
Procedia PDF Downloads 152
1541 Comparison of Noise Emissions in the Interior of Passenger Cars
Authors: Martin Kendra, Tomas Skrucany, Jaroslav Masek
Abstract:
Noise is one of the negative elements influencing human health. This article deals with the measurement of noise emitted by a road vehicle and its parts during operation. Measurement was done in the interior of common passenger cars with a digital sound meter. The results compare the noise values in different cars with different body shapes, which influence the driver’s health. Transport has considerable ecological effects, many of them detrimental to environmental sustainability. Roads and traffic exert a variety of direct and mostly detrimental effects on nature.
Keywords: driver, noise measurement, passenger road vehicle, road transport
Procedia PDF Downloads 451
1540 Software User Experience Enhancement through User-Centered Design and Co-design Approach
Authors: Shan Wang, Fahad Alhathal, Hari Subramanian
Abstract:
User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper covers a 6-month knowledge exchange collaboration project between an academic institution and an industry partner in the UK in 2023; it aims to improve the user experience of a digital platform utilized for a knowledge management tool, to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. This research employed co-design workshops, one of the most effective methods of implementing user-centered design, to test user onboarding experiences through the active participation of users in the design process. More specifically, in January 2023, we organized eight co-design workshops with a diverse group of 11 individuals. Throughout these co-design workshops, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we conducted an analysis of user journeys, identifying common issues and potential areas for improvement and distilling them into three insights. This analysis was pivotal in guiding the knowledge management software in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions were targeted at refining onboarding user experiences, workplace interfaces, and interactive design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool. By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with the end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.
Keywords: user experiences design, user centered design, co-design approach, knowledge management tool
Procedia PDF Downloads 13
1539 Geomatic Techniques to Filter Vegetation from Point Clouds
Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades
Abstract:
More and more frequently, geomatic techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) used for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes in the slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it is the one with the greatest presence and constant change, both seasonal and daily, since it is affected by factors such as wind. One of the best-known indices to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels; it therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. Therefore, we have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) by using the GLI (Green Leaf Index) and ExG (Excess Greenness), as well as a change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate gives us the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also tried to work with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that is used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, in the case of the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud
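A minimal sketch of the RGB-based vegetation indices named above (GLI and ExG) plus an HSV hue mask, computed with NumPy/OpenCV. The file name and the thresholds are illustrative assumptions, not values from the Georisk project.

```python
import numpy as np
import cv2

img = cv2.imread("slope_photo.jpg")                 # file name is an assumption
b, g, r = cv2.split(img.astype("float32"))

# Green Leaf Index: (2G - R - B) / (2G + R + B)
gli = (2 * g - r - b) / np.maximum(2 * g + r + b, 1e-6)

# Excess Green on normalized chromaticities: 2g - r - b
total = np.maximum(r + g + b, 1e-6)
exg = 2 * (g / total) - (r / total) - (b / total)

# Hue channel of the HSV color space (OpenCV hue range is 0-179).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]

# Binary vegetation mask; thresholds are illustrative only.
vegetation_mask = (gli > 0.1) | (exg > 0.2) | ((hue > 35) & (hue < 85))
```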
Procedia PDF Downloads 156
1538 Analysis of Genic Expression of Honey Bees Exposed to Sublethal Pesticide Doses Using the Transcriptome Technique
Authors: Ricardo de Oliveira Orsi, Aline Astolfi, Daniel Diego Mendes, Isabella Cristina de Castro Lippi, Jaine da Luz Scheffer, Yan Souza Lima, Juliana Lunardi, Giovanna do Padro Ribeiro, Samir Moura Kadri
Abstract:
The Brazilian NECTAR group (Center of Education, Science, and Technology in Rational Beekeeping) conducted studies on the effects of pesticides on honey bees using transcriptome sequencing (RNA-Seq) analyses for gene expression studies. In this way, we analyzed the effects of Pyraclostrobin and Fipronil on 21-day-old (forager) honey bees under laboratory conditions. For this, frames containing sealed brood were removed from the beehives and kept in an incubator (32°C and 75% humidity) until the bees emerged. Newly emerged workers were then marked on the pronotum with a non-toxic pen and reintroduced into their original hives. After 21 days, 120 marked bees were collected with entomological forceps and immediately stored in Petri dishes, perforated to ensure ventilation, and kept fasted for 3 hours. These honey bees were exposed to food contaminated or not with a sublethal dose of Pyraclostrobin (850 ppb/bee) or Fipronil (2.5 ppb/bee). After four hours of exposure, 15 bees from each treatment were submitted to transcriptome analysis. Total RNA was extracted from brain pools (3 brains per pool) using the TRIzol® reagent protocol according to the manufacturer's instructions. cDNA libraries were constructed, and the FASTQC program was used to check adapter content and assess the quality of raw reads. Differential expression analysis was performed with the DESeq2 package. Genes that had an adjusted p-value of less than 0.05 were considered to be significantly up-regulated. Regarding Pyraclostrobin, alterations were observed in the expression pattern of 17 genes related to the antioxidant system, cellular respiration, glucose metabolism, and regulation of juvenile hormone and the hormone insulin. Glyphosate altered 10 genes related to the digestive system, exoskeleton composition, vitamin E transport, and the antioxidant system. The results indicate the necessity of studies using sublethal doses to evaluate pesticide use and risks on crops and their effects on honey bees.
Keywords: beekeeping, honey bees, pesticides, transcriptome
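A hedged sketch of the downstream filtering step only: selecting genes with an adjusted p-value below 0.05 from a DESeq2-style results table exported to CSV. Column and file names are assumptions; the differential-expression test itself was run with the DESeq2 package in R.

```python
import pandas as pd

results = pd.read_csv("deseq2_results.csv")      # assumed columns: gene, log2FoldChange, padj
significant = results[results["padj"] < 0.05]
up_regulated = significant[significant["log2FoldChange"] > 0]
print(len(up_regulated), "significantly up-regulated genes")
```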
Procedia PDF Downloads 126
1537 Web Map Service for Fragmentary Rockfall Inventory
Authors: M. Amparo Nunez-Andres, Nieves Lantada
Abstract:
One of the most harmful geological risks is rockfalls. They cause both economic losses, through damage to buildings and infrastructure, and personal ones. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish peninsula and the island of Mallorca. These inventories have general information about the events, although the important fact is that they contain detailed information about fragmentation. Specifically, the IBSD (In Situ Block Size Distribution) is obtained by photogrammetry from a drone or TLS (Terrestrial Laser Scanner), and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperable standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We have used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragmentation distribution. The next step was to develop a WMS with GeoServer. A previous phase was the creation of several views in PostGIS to show the information at different scales of visualization and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. At the next level of zoom, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer more detail about the movement. Finally, the third level shows all the elements: deposit, source, and blocks, at their real size, where possible, and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu) with data classified by levels, using JavaScript libraries such as OpenLayers.
Keywords: geological risk, web mapping, WMS, rockfalls
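A hedged sketch of how a client could query a GeoServer WMS endpoint like the one described, using the OWSLib package in Python. The service URL, layer name, and bounding box are illustrative assumptions, not the actual configuration of the rockfall inventory.

```python
from owslib.wms import WebMapService

# Hypothetical endpoint; the real service sits behind www.rockdb.upc.edu.
wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")
print(list(wms.contents))                     # layers published by the server

response = wms.getmap(
    layers=["rockfalls:events"],              # hypothetical layer name
    srs="EPSG:4326",
    bbox=(0.5, 40.5, 3.5, 42.9),              # roughly NE Spain, illustrative
    size=(512, 512),
    format="image/png",
    transparent=True,
)
with open("rockfall_events.png", "wb") as f:
    f.write(response.read())
```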
Procedia PDF Downloads 160
1536 Advancements in Smart Home Systems: A Comprehensive Exploration in Electronic Engineering
Authors: Chukwuka E. V., Rowling J. K., Rushdie Salman
Abstract:
The field of electronic engineering encompasses the study and application of electrical systems, circuits, and devices. Engineers in this discipline design, analyze, and optimize electronic components to develop innovative solutions for various industries. This abstract provides a brief overview of the diverse areas within electronic engineering, including analog and digital electronics, signal processing, communication systems, and embedded systems. It highlights the importance of staying abreast of advancements in technology and fostering interdisciplinary collaboration to address contemporary challenges in this rapidly evolving field.
Keywords: smart home engineering, energy efficiency, user-centric design, security frameworks
Procedia PDF Downloads 89
1535 Sensory Ethnography and Interaction Design in Immersive Higher Education
Authors: Anna-Kaisa Sjolund
Abstract:
The doctoral thesis examines interaction design and sensory ethnography as tools to create immersive education environments. In recent years, there has been increasing interest and discussion among researchers and educators in immersive education, such as augmented reality tools and virtual glasses, and in the possibilities of utilizing them in education at all levels. Using virtual devices as learning environments, it is possible to create multisensory learning environments. Sensory ethnography in this study refers to the way the senses are considered in terms of their impact on information dynamics in immersive learning environments. The past decade has seen the rapid development of virtual world research and virtual ethnography. Christine Hine's Virtual Ethnography offers an anthropological explanation of net behavior and communication change. Since her groundbreaking work, time has changed users' communication styles and brought new ways of doing ethnographic research. Virtual reality, with all its new potential, has come to the fore, engaging all the senses. Film and image have played an important role in cultural research for centuries; only the focus has changed across periods and fields of research. According to Karin Becker, the role of the image in our society is information flow, and she identified two meanings of what research on visual culture is. Images and pictures are the artifacts of visual culture. Images can be viewed as a symbolic language that allows digital storytelling. By combining the sense of sight with the other senses, such as hearing, touch, taste, smell, and balance, the use of a virtual learning environment offers students a way to more easily absorb large amounts of information. It also offers teachers different ways to produce study material. In this article, sensory ethnography is used as a research tool to approach the core question. An immersive education environment is understood as a three-dimensional, interactive learning environment where the audiovisual aspects are central, but all senses can be taken into consideration. When designing learning environments or any digital service, interaction design is always needed. The question of what interaction design is, is justified, because there is no simple or consistent idea of what interaction design is, how it can be used as a research method, or whether it is only a description of practical actions. When discussing immersive learning environments or their construction, consideration should be given to interaction design and sensory ethnography.
Keywords: immersive education, sensory ethnography, interaction design, information dynamics
Procedia PDF Downloads 139
1534 Being Chinese Online: Discursive (Re)Production of Internet-Mediated Chinese National Identity
Authors: Zhiwei Wang
Abstract:
Much emphasis has been placed on the political dimension of digitised Chinese national(ist) discourses and their embodied national identities, which neglects other important dimensions constitutive of their discursive nature. A further investigation into how Chinese national(ist) discourses are daily (re)shaped online by diverse socio-political actors (especially ordinary users) is crucial, which can contribute not only to deeper understandings of Chinese national sentiments on China’s Internet beyond the excessive focus on their passionate, politically charged facet but also to richer insights into the socio-technical ecology of the contemporary Chinese digital (and physical) world. This research adopts an ethnographic methodology, in which the ‘fieldsites’ are Sina Weibo and bilibili. The primary data collection method is virtual ethnographic observation of everyday national(ist) discussions on both platforms. If the data obtained via observations do not suffice to answer the research questions, in-depth online qualitative interviews with ‘key actors’, identified from those observations as discursively (re)producing Chinese national identity on each ‘fieldsite’, will be conducted to complement the data gathered through the first method. Critical discourse analysis is employed to analyse the data. During the process of data coding, NVivo is utilised. From November 2021 to December 2022, 35 weeks of digital ethnographic observations were conducted, with 35 sets of fieldnotes obtained. The strategy adopted for the initial stage of observations was keyword searching, which means typing into the search box on Sina Weibo and bilibili any keywords related to China as a nation and then observing the search results. Throughout the 35 weeks of online ethnographic observations, six keywords were employed on Sina Weibo and two keywords on bilibili. Across the 35 weeks of observations, the focus was mainly on textual content created by ordinary users. Based on the fieldnotes of the first week's observations, multifarious national(ist) discourses on Sina Weibo and bilibili were found, targeted both at national ‘Others’ and at ‘Us’, on both the historical and real-world dimensions, both aligning with and differing from or even conflicting with official discourses, and comprising both direct national(ist) expressions and articulations of sentiments in the name of presenting national(ist) attachments but for other purposes. Second, Sina Weibo and bilibili users have agency in interpreting and deploying concrete national(ist) discourses despite the leading role played by the government and the two platforms in deciding on the basic framework of national expressions. Besides, there are also disputes and even quarrels between users over explanations of concrete components of ‘nation-ness’ and, to some extent, (in)direct dissent from officially defined ‘mainstream’ discourses, though often expressed much more mundanely, discursively, and playfully.
Third, the (re)production process of national(ist) discourses on Sina Weibo and bilibili depends not only upon the technical affordances and limitations of the two sites but also, to a larger degree, upon established socio-political mechanisms and conventions in offline China, e.g., the authorities' acquiescence to citizens' freedom in understanding and explaining concrete elements of national discourses while setting the basic framework of national narratives, to the extent that citizens' own national(ist) expressions do not cross political bottom lines and develop into a mobilising power that could shake social stability.
Keywords: national identity, national(ist) discourse(s), everyday nationhood/nationalism, Chinese nationalism, digital nationalism
Procedia PDF Downloads 96
1533 Inappropriate Effects Which the Use of Computer and Playing Video Games Have on Young People
Authors: Maja Ruzic-Baf, Mirjana Radetic-Paic
Abstract:
The use of computers by children has many positive aspects, including the development of memory, learning methods, problem-solving skills, and the feeling of one's own competence and self-confidence. Playing online video games can encourage hanging out with peers having similar interests as well as communication; it develops coordination, spatial relations, and presentation. On the other hand, the Internet enables quick access to different information and the exchange of experiences. How kids use computers and what the negative effects of this can be depend on various factors. ICT has improved and become easily accessible to everyone. In the past 12 years, so many video games have been made that some of them are even free to play. Young people, and even some adults, have simply started to forget about the real outside world because in that other, digital world, they have found something that makes them feel more worthy. This article presents the use of ICT, forms of behavior, and addiction to online video games.
Keywords: addiction to video games, behaviour, ICT, young people
Procedia PDF Downloads 547
1532 Error Analysis of Wavelet-Based Image Steganography Scheme
Authors: Geeta Kasana, Kulbir Singh, Satvinder Singh
Abstract:
In this paper, a steganographic scheme for digital images using the Integer Wavelet Transform (IWT) is proposed. The cover image is decomposed into wavelet subbands using IWT. Each of the subbands is divided into blocks of equal size, and secret data is embedded into the largest and smallest pixel values of each block of the subband. The visual quality of the stego images is acceptable, as the PSNR between the cover image and the stego image is above 40 dB, so imperceptibility is maintained. Experimental results show a better tradeoff between capacity and visual imperceptibility compared to existing algorithms. The maximum possible error analysis is evaluated for each of the wavelet subbands of an image.
Procedia PDF Downloads 505
1531 Durability Analysis of a Knuckle Arm Using VPG System
Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee
Abstract:
A steering knuckle arm is the component that connects the steering system and the suspension system. Structural performances such as stiffness, strength, and durability are considered in its design process. A previous study suggested the lightweight design of a knuckle arm considering these structural performances and using metamodel-based optimization. Six shape design variables were defined, and the optimum design was calculated by applying the kriging interpolation method. The finite element method was utilized to predict the structural responses. The suggested knuckle was made of aluminum Al6082, and its weight was reduced by about 60% in comparison with the base steel knuckle, satisfying the design requirements. Then, we investigated its manufacturability by performing forging analysis. The forging was done as a hot process, and the product was made through two-step forging. As a final step of its development process, the durability is investigated using the flexible dynamic analysis software LS-DYNA and the pre- and post-processor eta/VPG. Generally, a car maker does not share all the information with the part manufacturer. Thus, the part manufacturer has limits in predicting the durability performance at the full-car level. eta/VPG provides libraries of suspensions, tires, and roads, which are commonly used parts. That makes full-car modeling possible. First, the full car is modeled by referencing the following information: Overall Length: 3,595 mm, Overall Width: 1,595 mm, CVW (Curb Vehicle Weight): 910 kg, Front Suspension: MacPherson Strut, Rear Suspension: Torsion Beam Axle, Tire: 235/65R17. Second, the road is selected as cobblestone. The road condition of the cobblestone is almost 10 times more severe than that of a usual paved road. Third, dynamic finite element analysis using LS-DYNA is performed to predict the durability performance of the suggested knuckle arm. The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set by the part manufacturer. In this study, the overall design process of a knuckle arm is suggested, and it can be seen that the developed knuckle arm satisfies the durability design requirement at the full-car level. The VPG analysis is successfully performed even though it does not give an exact prediction, since the full-car model is a very rough one. Thus, this approach can be used effectively when the details of the full car are not given.
Keywords: knuckle arm, structural optimization, metamodel, forging, durability, VPG (Virtual Proving Ground)
Procedia PDF Downloads 420
1530 Gender Differences in Attitudes to Technology in Primary Education
Authors: Radek Novotný, Martina Maněnová
Abstract:
This article presents a summary of reviews on gender differences in the perception of information and communication technology (ICT) by pupils in primary education. The article outlines the meaning of ICT in primary education and then summarizes different studies of the use of ICT in primary education from the point of view of gender. The article also presents specific gender differences in knowledge of the modalities of use of specialized digital tools and in the perception of and value assigned to ICT. Accordingly, the article provides insight into the background of gender differences in performance in relation to ICT in order to determine the complex meaning of pupils' attitudes to ICT.
Keywords: ICT in primary education, attitudes to ICT, gender differences, gender and ICT
Procedia PDF Downloads 486
1529 Media Effects in Metamodernity
Authors: D. van der Merwe
Abstract:
Despite unprecedented changes in the media formats, typologies, delivery channels, and content that can be seen between Walter Benjamin’s writings from the era of modernity and those observable in the contemporary era of metamodernity, parallels can be drawn between the media effects experienced by audiences across the temporal divide. This paper will explore alignments between these two eras as evidenced by various media effects. First, convergence in the historical paradigm of film will be compared with the same effect as seen within the digital domain. Second, the uses and gratifications theory will be explored to delineate parallels in terms of user behaviours across both eras, regardless of medium. Third, cultivation theory and its role in manipulation via the media in both modernity and metamodernity will be discussed. Lastly, similarities between the archetypal personae populating each era will be unpacked.
Keywords: convergence, cultivation theory, media effects, metamodernity, uses and gratifications theory
Procedia PDF Downloads 21
1528 Integrating Cyber-Physical System toward Advance Intelligent Industry: Features, Requirements and Challenges
Authors: V. Reyes, P. Ferreira
Abstract:
In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, a rapid increase in production volume and, simultaneously, a customization process require lower costs, more variety, and consistent product quality. Reducing production cycle time, enabling customizability, and ensuring continuous quality improvement are key features of advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are key features that give industrial systems the capability to adapt to customer demands, modifying the manufacturing process through an autonomous response and acting preventively to avoid errors. The industrial system incorporates a diversified number of components that, in advanced industry, are expected to be decentralized, communicating end to end, and capable of making their own decisions through feedback. The evolution towards advanced intelligent industry defines a set of stages that endow components with intelligence and enhance efficiency until the decision-making stage is reached. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabler technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows sensor data to be incorporated from the real to the virtual world and provides the transparency required for real-time monitoring and control, contributing to addressing important features of advanced intelligent industry and simultaneously improving sustainability. Assuming the industrial CPS as the core technology toward the latest advanced intelligent industry stage, this paper reviews and highlights the correlation and contributions of the enabler technologies for the operationalization of each stage on the path toward advanced intelligent industry. From this research, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed. The functionalities required to endow the industrial system with adaptability, and the associated issues, are identified.
Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model
Procedia PDF Downloads 119
1527 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications
Authors: Alissa Lefebre
Abstract:
It might seem intuitive to hope for a fast decision on the patent grant. After all, a granted patent provides you with a monopoly position, which allows you to prevent others from using your technology. However, this does not take into account the strategic advantages one can obtain from keeping patent applications pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of the patent protection is only decided upon at the grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be. Consequently, rivals can only depend upon limited and uncertain information when deciding what technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this might allow applicants to obtain an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has its downsides. First of all, it has an impact on the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, using strategies that increase this number even more is not desirable from the patent examiners' point of view. Secondly, a pending patent does not provide you with the protection of a granted patent, thus creating uncertainty not only for rivals but also for the applicant. Consequently, the European Patent Office (EPO) has come up with a 'raising the bar' initiative in which it has decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, introduced in 2010, imposed a time limit under which divisional applications could only be filed within 24 months of the first communication from the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to tackle strategic patenting and thus decrease the pendency time.
Keywords: divisional applications, regulatory changes, strategic patenting, EPO
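A hedged sketch of one possible survival specification for pendency time, a Cox proportional hazards model fitted with the lifelines package in Python. Column names, the data file, and the covariates are illustrative assumptions, not the authors' actual data or the three models they estimated.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("epo_applications.csv")
# duration_months: filing-to-grant (or censoring) time
# granted: 1 if the application was granted, 0 if still pending/withdrawn
# is_divisional, post_2010_rule, post_2014_fee: indicator covariates
cph = CoxPHFitter()
cph.fit(
    df[["duration_months", "granted", "is_divisional", "post_2010_rule", "post_2014_fee"]],
    duration_col="duration_months",
    event_col="granted",
)
cph.print_summary()   # hazard ratios below 1 suggest longer pendency
```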
Procedia PDF Downloads 133
1526 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case
Authors: Andrea Ando
Abstract:
The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. More recently, non-fungible tokens have moved to the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play an increasingly important role in our online interactions. They are indeed increasingly present in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but it is very likely that they will mark a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect in the contemporary tech market: the former CEO of the famous social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. Their growth, as might be imagined, paved the way for civil disputes, mostly regarding their position under the current intellectual property law of each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge, indeed, concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether minting an NFT is or is not a violation of intellectual property and/or property rights. Indeed, if, on the one hand, the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. The question then is whether a person has ownership of the cryptographic code, which is indeed definable, identifiable, intangible, distinct, and has a degree of permanence, or of what is attached to this blockchain, which may even be a physical object or piece of art. Indeed, a simple code would not have any financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law. Then, after having laid the foundation, the paper examined the OpenSea case, and finally, it analysed whether the expectations were met or not.
Keywords: technology, technology law, digital law, cryptoassets, NFTs, NFT, property law, intellectual property law, copyright law
Procedia PDF Downloads 91
1525 The Democratization of 3D Capturing: An Application Investigating Google Tango Potentials
Authors: Carlo Bianchini, Lorenzo Catena
Abstract:
The appearance of 3D scanners and then, more recently, of image-based systems that generate point clouds directly from common digital images has deeply affected the survey process in terms of both capturing and 2D/3D modelling. In this context, low-cost and mobile systems are increasingly playing a key role and are actually paving the way to the democratization of what in the past was the realm of a few specialized technicians and expensive equipment. The application of Google Tango to the ancient church of Santa Maria delle Vigne in Pratica di Mare, Rome, presented in this paper is one of these examples.
Keywords: the architectural survey, augmented/mixed/virtual reality, Google Tango project, image-based 3D capturing
Procedia PDF Downloads 152
1524 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded based on a uniform two-part questionnaire where they were required to a) identify the meaning of 15 emoji when placed in isolation, and b) interpret the meaning of the same 15 emoji when placed in a context-defining posting on Twitter. Their responses were studied on the basis of deviation from their responses identifying the emoji in isolation, as well as from the originally intended meaning ascribed to the emoji. Based on an analysis of these results, it was discovered that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to the degree of exposure they have undergone. For example, in the case of the youngest category (aged < 20), it was observed that they were the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer convey their most literal meanings to them. The meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher proclivity to change their responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms to ideograms. That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning. In fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication is parallel to and contingent on the simultaneous evolution of communication. That it does not have a spoken component or an ostensible grammar, and lacks standardization of use and meaning, as some might suggest, may seem like impediments to qualifying it as the 'language' of the digital world. However, that kind of declaration remains a function of time, and time alone. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world.
Keywords: communication, emoji, language, Twitter
Procedia PDF Downloads 97
1523 Pay Per Click Attribution: Effects on Direct Search Traffic and Purchases
Authors: Toni Raurich-Marcet, Joan Llonch-Andreu
Abstract:
This research is focused on the relationship between Search Engine Marketing (SEM) and traditional advertising. The dominant assumption is that SEM does not help brand awareness and only works within the session, as if it were the cost of manufacturing the product being sold. The study is methodologically developed using an experiment designed to analyze the billboard effect. The research allowed the cross-linking of theoretical and empirical knowledge on digital marketing. This paper has validated that this type of marketing generates retention, as traditional advertising would, by measuring brand awareness and its improvement. This changes the way performance and brand campaigns are split within marketing departments, effectively rebalancing budgets moving forward.
Keywords: attribution, performance marketing, SEM, marketplaces
Procedia PDF Downloads 132
1522 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. This work presents a solution that improves the ability of wastewater treatment plants (WWTPs) to react to different situations and meet treatment goals. Delayed BOD5 results from the lab, which take 7 to 8 analysis days, hinder the WWTP's ability to react to different situations and meet treatment goals. Reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. A DT requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous monitoring of the BOD removed by the effluent treatment process, the DT algorithm for analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, the biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically together with its initial conditions: DO (saturated) and initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimization of the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
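A hedged sketch of the numerical approach described above, reduced to the classical first-order BOD model dL/dt = -kL (the paper uses a coupled four-equation system for DO, organic matter, biomass, and products). The synthetic observations and parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def bod_exerted(t, L0, k):
    """BOD exerted at time t for ultimate demand L0 and rate constant k."""
    return L0 * (1.0 - np.exp(-k * t))

# Synthetic DO-depletion observations from a bioreactor (mg/L over hours).
t_obs = np.array([0, 6, 12, 24, 48, 72], dtype=float)
bod_obs = np.array([0.0, 35.0, 60.0, 95.0, 130.0, 148.0])

# Least-squares estimate of L0 and k, mirroring the mean-square minimization.
(L0_hat, k_hat), _ = curve_fit(bod_exerted, t_obs, bod_obs, p0=(150.0, 0.05))

# The same kinetics integrated as an ODE, as a stand-in for the full system.
sol = solve_ivp(lambda t, L: -k_hat * L, (0, 120), [L0_hat], dense_output=True)
print(f"estimated ultimate BOD: {L0_hat:.1f} mg/L, k = {k_hat:.3f} 1/h")
```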
Procedia PDF Downloads 76
1521 Motion Effects of Arabic Typography on Screen-Based Media
Authors: Ibrahim Hassan
Abstract:
Motion typography is one of the most important types of display-based visual communication. Through digital display media, we can control the text properties (size, direction, thickness, color, etc.). The use of motion typography in visual communication has given it several forms. We need to adjust the terminology and clarify the differences between them, since relying on the word motion typography, considered a general term, is not enough to separate the different communicative functions of moving text. In this paper, we discuss the different effects of motion typography on Arabic writing and how we can achieve harmony between the movement and the letterform, and, through our experiments, we present a new type of text movement.
Keywords: Arabic typography, motion typography, kinetic typography, fluid typography, temporal typography
Procedia PDF Downloads 164
1520 A Fast, Reliable Technique for Face Recognition Based on Hidden Markov Model
Authors: Sameh Abaza, Mohamed Ibrahim, Tarek Mahmoud
Abstract:
Due to developments in digital image processing and its wide use in many applications, such as medical and security applications, more accurate techniques that are reliable, fast, and robust are in strong demand. In the field of security, in particular, speed is of the essence. In this paper, a pattern recognition technique based on the Hidden Markov Model (HMM), K-means, and the Sobel operator is developed. The proposed technique is shown to be fast with respect to some other techniques investigated for comparison. Moreover, it shows its capability of recognizing the normal face (center part) as well as the face boundary.
Keywords: HMM, K-Means, Sobel, accuracy, face recognition
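A hedged sketch of the feature-extraction side only: Sobel gradient magnitudes computed over horizontal strips of a face image and quantized with K-means into discrete observation symbols that a per-subject HMM could then be trained on. The file name, strip height, and codebook size are illustrative assumptions; HMM training itself is omitted.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)      # file name is an assumption
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

# Slice the edge map into horizontal strips (top-to-bottom observation order).
strip_h = 4
strips = [magnitude[y:y + strip_h].flatten()
          for y in range(0, magnitude.shape[0] - strip_h + 1, strip_h)]

codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.array(strips))
observations = codebook.labels_      # symbol sequence for this face image
# A discrete HMM (one per subject) would then be trained on such sequences.
```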
Procedia PDF Downloads 335
1519 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments
Authors: Pia Kastl
Abstract:
The potentials of blended learning are the flexibility of learning and the possibility of getting in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students are poorly supported in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N = 60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning. They verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important in exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. Forums offered were hardly used or not used at all because the barrier to asking a question publicly is too high, and students prefer private channels for communication. This is accompanied by the disadvantage that the interaction occurs only among people who already know each other. Creating contacts is not fostered in the blended learning courses. Students consider optimization possibilities a task for the lecturers in the face-to-face sessions: here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important here to group the participants randomly to establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example, in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on the subject area and course. Future studies could build on this with a course-specific analysis.
Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning
Procedia PDF Downloads 72
1518 The Ethics of Documentary Filmmaking Discuss the Ethical Considerations and Responsibilities of Documentary Filmmakers When Portraying Real-life Events and Subjects
Authors: Batatunde Kolawole
Abstract:
Documentary filmmaking stands as a distinctive medium within the cinematic realm, commanding a unique responsibility: the portrayal of real-life events and subjects. This research delves into the profound ethical considerations and responsibilities that documentary filmmakers shoulder as they embark on the quest to unveil truth and weave compelling narratives. In this exploration, a comprehensive review of ethical frameworks and real-world case studies is undertaken, illuminating the intricate web of challenges that documentarians confront. These challenges encompass an array of ethical intricacies, from securing informed consent to safeguarding privacy, maintaining unwavering objectivity, and sidestepping the snares of narrative manipulation when crafting stories from reality. Furthermore, the study dissects the contemporary ethical terrain, acknowledging the emergence of novel dilemmas in the digital age, such as deepfakes and digital alterations. Through a meticulous analysis of ethical quandaries faced by distinguished documentary filmmakers and their strategies for ethical navigation, this study offers valuable insights into the evolving role of documentaries in molding public discourse. It underscores the indispensable significance of transparency, integrity, and an unwavering commitment to encapsulating the intricacies of reality within the realm of ethical documentary filmmaking. In a world increasingly reliant on visual narratives, an understanding of the subtle ethical dimensions of documentary filmmaking holds relevance not only for those behind the camera but also for the diverse audiences who engage with and interpret the realities unveiled on screen. This research stands as a rigorous examination of the moral compass that steers this potent form of cinematic expression. It emphasizes the capacity of ethical documentary filmmaking to enlighten, challenge, and inspire, all while upholding the core principles of truthfulness and respect for the human subjects under scrutiny. Through this holistic analysis, the study illuminates the enduring significance of upholding ethical integrity while uncovering the truths that shape our world. Ethical documentary filmmaking, as exemplified by "Rape" and countless other powerful narratives, serves as a testament to the enduring potential of cinema to inform, challenge, and drive meaningful societal discourse.
Keywords: filmmaking, documentary, human right, film
Procedia PDF Downloads 68
1517 Community Observatory for Territorial Information Control and Management
Authors: A. Olivi, P. Reyes Cabrera
Abstract:
Ageing and urbanization are two of the main trends that characterize the twenty-first century. These trends are especially accelerated in the emerging countries of Asia and Latin America. Chile is one of the countries in the Latin American region where the demographic transition to ageing is becoming increasingly visible. The challenges that the new demographic scenario poses to urban administrators call for the search for innovative solutions to maximize the functional and psycho-social benefits derived from the relationship between older people and the environment in which they live. Although mobility is central to people's everyday practices and social relationships, it is not distributed equitably. On the contrary, it can be considered another factor of inequality in our cities. Older people are a group that is particularly sensitive and vulnerable with regard to mobility. In this context, based on the ageing-in-place strategy and following the social innovation approach within a spatial context, the "Community Observatory of Territorial Information Control and Management" project aims at the collective search for and validation of solutions to satisfy the specific mobility and accessibility needs of urban older people. Specifically, the Observatory intends to: i) promote the direct participation of the older population in order to generate relevant information on the territorial situation and the satisfaction of the mobility needs of this group; ii) co-create dynamic and efficient mechanisms for the reporting and updating of territorial information; iii) increase the capacity of the local administration to plan and manage solutions to environmental problems at the neighborhood scale. Based on a participatory mapping methodology and on the application of digital technology, the Observatory designed and developed, together with older people, a crowdsourcing platform for smartphones, called DIMEapp, for reporting environmental problems affecting mobility and accessibility. DIMEapp has been tested at a prototype level in two neighborhoods of the city of Valparaiso. The results achieved in the testing phase have shown high potential to i) contribute to establishing coordination mechanisms with the local government and the local community and ii) improve a local governance system that guides and regulates the allocation of goods and services destined to solve those problems.
Keywords: accessibility, ageing, city, digital technology, local governance
Procedia PDF Downloads 133
1516 OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text
Authors: A. R. Bagirzade, A. Sh. Najafova, S. M. Yessirkepova, E. S. Albert
Abstract:
This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program. It describes automatic text recognition in images. OCR is necessary because optical input devices can only transmit raster graphics as a result. Text recognition describes the task of recognizing letters shown as such, identifying them and assigning them a numerical value in accordance with the usual text encoding (ASCII, Unicode). The peculiarity of this study, conducted by the authors using the example of ABBYY FineReader, was confirmed and shown in practice: the improvement of digital text recognition platforms developed by Electronic Publication.
Keywords: ABBYY FineReader system, algorithm symbol recognition, OCR/ICR techniques, recognition technologies
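A hedged sketch of the basic OCR workflow the article describes, using the open-source Tesseract engine via pytesseract as an illustrative stand-in, since ABBYY FineReader is proprietary desktop software. The file name and language code are assumptions.

```python
from PIL import Image
import pytesseract

image = Image.open("scanned_page.png")                   # raster input from a scanner
text = pytesseract.image_to_string(image, lang="eng")    # raster graphics -> Unicode text
print(text)
```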
Procedia PDF Downloads 171