Search results for: digital waveguides
1772 Restoration Process of Kastamonu - Tufekciler Village Houses for Potential Eco-Tourism Purposes
Authors: Turkan Sultan Yasar Ismail, Mehmet Cetin, M. Danial Ismail, Hakan Sevik
Abstract:
Nowadays, there is a need to translate the real world into the virtual environment through three-dimensional visualisation for the restoration and promotional modelling of historic sites in protected areas. Visualisation models have also become a very important basis for the creation of three-dimensional Geographic Information Systems. The protection and documentation of historical and cultural heritage is an important issue in Turkey, as it is all over the world. This heritage is a bridge between the past and the future of humanity, yet many historical and cultural heritage sites suffer neglect and damage from natural causes. This study aims to determine the current status of the selected buildings and to document information from them; this process is important for their conservation and for any renovation work that might be done in the future. Kastamonu is one of the historical cities of Turkey, with a number of heritage buildings. However, Tufekciler Village is neither famous nor much visited, even though it includes several historical buildings and a peaceful landscape. Digital terrestrial photogrammetry is one of the most important methods used in the documentation of cultural and historical heritage. Firstly, measurements were made to create polygon meshes and 3D model drawings of the structures to be modelled, transferring the images to digital media and applying a subsequent visualisation process. Secondly, a restoration project based on the concept of eco-tourism is proposed for the village at all scales, from interior space to landscape design.
Keywords: eco-tourism, restoration, sustainability, cultural village
Procedia PDF Downloads 352
1771 Efficient Credit Card Fraud Detection Based on Multiple ML Algorithms
Authors: Neha Ahirwar
Abstract:
In the contemporary digital era, the rise of credit card fraud poses a significant threat to both financial institutions and consumers. As fraudulent activities become more sophisticated, there is an escalating demand for robust and effective fraud detection mechanisms. Advanced machine learning algorithms have become crucial tools in addressing this challenge. This paper conducts a thorough examination of the design and evaluation of a credit card fraud detection system, utilizing four prominent machine learning algorithms: random forest, logistic regression, decision tree, and XGBoost. The surge in digital transactions has opened avenues for fraudsters to exploit vulnerabilities within payment systems. Consequently, there is an urgent need for proactive and adaptable fraud detection systems. This study addresses this imperative by exploring the efficacy of machine learning algorithms in identifying fraudulent credit card transactions. The selection of random forest, logistic regression, decision tree, and XGBoost for scrutiny in this study is based on their documented effectiveness in diverse domains, particularly in credit card fraud detection. These algorithms are renowned for their capability to model intricate patterns and provide accurate predictions. Each algorithm is implemented and evaluated for its performance in a controlled environment, utilizing a diverse dataset comprising both genuine and fraudulent credit card transactions.
Keywords: efficient credit card fraud detection, random forest, logistic regression, XGBoost, decision tree
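To give a concrete flavour of one of the four algorithms the abstract names, the following is a minimal logistic-regression sketch on synthetic transactions. The features, toy data, and gains are invented for illustration; this is not the paper's dataset or implementation, which are not reproduced here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression, no libraries."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            for j in range(len(w)):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(w, b, xi, threshold=0.5):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= threshold)

# Toy transactions: [normalised amount, unusual-hour flag]; label 1 = fraudulent
X = [[0.1, 0], [0.2, 0], [0.15, 0], [0.9, 1], [0.95, 1], [0.85, 1]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print(predict(w, b, [0.9, 1]))  # a large, odd-hour transaction -> 1
print(predict(w, b, [0.1, 0]))  # a small, normal-hour transaction -> 0
```

In practice, libraries such as scikit-learn or XGBoost would be used instead of hand-rolled gradient descent, but the decision boundary being learned is the same.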
Procedia PDF Downloads 67
1770 Single Chip Controller Design for Piezoelectric Actuators with Mixed Signal FPGA
Authors: Han-Bin Park, Taesam Kang, SunKi Hong, Jeong Hoi Gu
Abstract:
The piezoelectric material is widely used for actuators due to its large power density and simple structure. It can generate a larger force than conventional actuators of the same size. Furthermore, the response time of piezoelectric actuators is very short, so they can be used in very fast, compact system applications. To control a piezoelectric actuator, we need analog signal conditioning circuits as well as digital microcontrollers. Conventional microcontrollers are not equipped with analog parts, and thus the control system becomes bulky compared with the small size of the piezoelectric devices. To overcome these weaknesses, we are developing a single-chip microcontroller that can handle analog and digital signals simultaneously using mixed-signal FPGA technology. We used the SmartFusion™ FPGA device, which integrates an ARM® Cortex-M3, an analog interface and FPGA fabric in a single chip and offers full customization. It gives more flexibility than traditional fixed-function microcontrollers without the excessive cost of soft processor cores on traditional FPGAs. In this paper, we introduce the design of a single-chip controller using the mixed-signal FPGA SmartFusion™ device [1]. To demonstrate its performance, we implemented a PI controller for the power driving circuit and a 5th-order H-infinity controller for the system with the piezoelectric actuator in the FPGA fabric. We also demonstrated the regulation of the power output and the operation speed of the 5th-order H-infinity controller.
Keywords: mixed signal FPGA, PI control, piezoelectric actuator, SmartFusion™
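The PI regulation mentioned in the abstract can be sketched in software terms as a discrete-time loop. The gains, plant model, and time step below are made up for illustration; this is a behavioural sketch, not the authors' FPGA implementation.

```python
def pi_step(error, integral, kp, ki, dt):
    """One update of a discrete PI controller: u = Kp*e + Ki * integral(e)."""
    integral += error * dt
    return kp * error + ki * integral, integral

# Hypothetical first-order plant x' = -a*x + b*u, regulated to a setpoint of 1.0
a, b, dt = 1.0, 1.0, 0.01
x, integral, setpoint = 0.0, 0.0, 1.0
for _ in range(5000):  # 50 s of simulated time
    u, integral = pi_step(setpoint - x, integral, kp=2.0, ki=5.0, dt=dt)
    x += (-a * x + b * u) * dt  # forward-Euler plant update
print(round(x, 3))  # the integral term drives steady-state error to zero
```

The integral term is what lets the controller hold the output exactly at the setpoint despite constant disturbances, which is why PI (rather than pure P) control is the usual choice for power regulation loops like the one described.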
Procedia PDF Downloads 520
1769 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking
Authors: Jonas Colin
Abstract:
Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.
Keywords: chatbot, GPT 3.5, metacognition, symbiose
Procedia PDF Downloads 70
1768 Perceived Seriousness of Cybercrime Types: A Comparison across Gender
Authors: Suleman Ibrahim
Abstract:
Purpose: The research seeks people's perceptions of cybercrime issues, rather than their knowledge of the facts. Unlike the Tripartite Cybercrime Framework (TCF), the binary models are ill-equipped to differentiate between cyber fraud (a socioeconomic crime) and cyber bullying or cyber stalking (psychosocial cybercrimes). Whilst the binary categories suggest that digital crimes are dichotomized (i.e. cyber-enabled and cyber-dependent), the recently proposed TCF argues that cybercrimes can be conceptualized into three groups: socioeconomic, psychosocial and geopolitical. Concomitantly, as regards the experience/perception of cybercrime, the TCF's claim requires substantiation beyond its theoretical realm. Approach/Methodology: This scholarly endeavour, framed with the TCF, deploys a survey method to explore the experience of cybercrime across gender. Drawing from over 400 participants in the UK, this study aimed to contrast the differential perceptions/experiences of socioeconomic cybercrime (e.g. cyber fraud) and psychosocial cybercrime (e.g. cyber bullying and cyber stalking) across gender. Findings: The results revealed that cyber stalking was rated as the least serious of the different digital crime categories. They further revealed that female participants judged all types of cybercrime as more serious than male participants did, with the exception of socioeconomic cybercrime (cyber fraud). This distinction emphasizes that gender cultures and nuances apply both online and offline, and it underlines the utilitarian value of the TCF. Originality: Unlike existing work, this study has contrasted the differential perceptions and experiences of socioeconomic and psychosocial cybercrimes with more refined variables.
Keywords: gender variations, psychosocial cybercrime, socioeconomic cybercrime, tripartite cybercrime framework
Procedia PDF Downloads 389
1767 An Analysis of Social Media Use regarding Foodways by University Students: The Case of Sakarya University
Authors: Kübra Yüzüncüyıl, Aytekin İşman, Berkay Buluş
Abstract:
In the last quarter of the 20th century, Food Studies emerged as an interdisciplinary program that seeks to develop a critical perspective on the sociocultural meanings of food. The notion of food has been related to certain social and cultural values throughout history. In today's society, with the rise of new media technologies, cultural structures have been digitized, and food culture, in this vein, is also endowed with digital codes. In particular, social media has been integrated into foodways. This study attempts to examine the gratifications that individuals obtain from social media use related to foodways. The first part of the study examines the relationship between food culture and digital culture. The second part explains the theoretical framework and research method. To achieve the aim of the study, Uses and Gratifications Theory is adopted as the conceptual framework: conventional gratification categories are redefined in new media terms, and the relation between the redefined categories and foodways is then uncovered. The study follows a quantitative research method; by conducting pre-interviews and factor analysis, a purpose-built survey was developed. The sample consists of 405 undergraduate communication faculty students of Sakarya University, chosen by the proportionate stratification sampling method. In the analysis of the collected data, one-way ANOVA, the independent-samples t-test, and the Tukey Honest Significant Difference (HSD) post hoc test are used.
Keywords: food studies, food communication, new media, communication
Procedia PDF Downloads 192
1766 Information Technology Approaches to Literature Text Analysis
Authors: Ayse Tarhan, Mustafa Ilkan, Mohammad Karimzadeh
Abstract:
Science was considered part of philosophy in ancient Greece. By the nineteenth century, it was understood that philosophy was very inclusive and that social and human sciences such as literature, history, and psychology should be separated and perceived as autonomous branches of science. The computer was also first seen as a tool of mathematical science. Over time, computer science has grown to encompass every area in which technology exists, and its growth compelled the division of computer science into different disciplines, just as philosophy had been divided into different branches of science. Now there is almost no branch of science in which computers are not used. One of the newer autonomous disciplines of computer science is digital humanities, and one of the areas of digital humanities is literature. The material of literature is words, and thanks to software tools created using computer programming languages, results that would take a literature researcher months to produce can be obtained quickly and objectively. In this article, three different tools that literary researchers can use in their work are introduced. These tools were created with the computer programming languages Python and R and brought to the world of literature. The purpose of introducing them is to set an example for the development of special tools or programs for Ottoman language and literature in the future and to support such initiatives. The first example is the Stylometry tool developed with the R language. The second is The Metrical Tool, which is used to measure data in poems and was developed with Python. The last literature analysis tool in this article is Voyant Tools, a multifunctional and easy-to-use tool.
Keywords: DH, literature, information technologies, stylometry, the metrical tool, voyant tools
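To illustrate the kind of feature stylometry tools compute, here is a minimal function-word frequency profile in Python. Relative frequencies of common function words are a classic stylometric fingerprint for authorship analysis; the sample text and marker words below are arbitrary, and this is a toy sketch rather than any of the three tools the article introduces.

```python
import re
from collections import Counter

def function_word_profile(text, markers=("the", "of", "and", "to", "in")):
    """Relative frequency of selected function words in a text."""
    words = re.findall(r"[a-z']+", text.lower())  # crude tokenizer
    counts = Counter(words)
    total = len(words)
    return {m: counts[m] / total for m in markers}

sample = "The rise of the novel and the decline of the epic in the age of print."
profile = function_word_profile(sample)
print(profile["the"])  # 5 occurrences out of 16 tokens = 0.3125
```

Real stylometric pipelines compute such profiles over hundreds of marker words and then compare texts with distance measures; the R stylo package, for example, is built around exactly this kind of frequency table.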
Procedia PDF Downloads 151
1765 Comparative Analysis of Smart City Development: Assessing the Resilience and Technological Advancement in Singapore and Bucharest
Authors: Sînziana Iancu
Abstract:
In an era marked by rapid urbanization and technological advancement, the concept of smart cities has emerged as a pivotal solution to address the complex challenges faced by urban centres. As cities strive to enhance the quality of life for their residents, the development of smart cities has gained prominence. This study embarks on a comparative analysis of two distinct smart city models, Singapore and Bucharest, to assess their resilience and technological advancements. The significance of this study lies in its potential to provide valuable insights into the strategies, strengths, and areas of improvement in smart city development, ultimately contributing to the advancement of urban planning and sustainability. Methodologies: This comparative study employs a multifaceted approach to comprehensively analyse the smart city development in Singapore and Bucharest: * Comparative Analysis: A systematic comparison of the two cities is conducted, focusing on key smart city indicators, including digital infrastructure, integrated public services, urban planning and sustainability, transportation and mobility, environmental monitoring, safety and security, innovation and economic resilience, and community engagement; * Case Studies: In-depth case studies are conducted to delve into specific smart city projects and initiatives in both cities, providing real-world examples of their successes and challenges; * Data Analysis: Official reports, statistical data, and relevant publications are analysed to gather quantitative insights into various aspects of smart city development. 
Major Findings: Through a comprehensive analysis of Singapore and Bucharest's smart city development, the study yields the following major findings: * Singapore excels in digital infrastructure, integrated public services, safety, and innovation, showcasing a high level of resilience across these domains; * Bucharest is in the early stages of smart city development, with notable potential for growth in digital infrastructure and community engagement; * Both cities exhibit a commitment to sustainable urban planning and environmental monitoring, with room for improvement in integrating these aspects into everyday life; * Transportation and mobility solutions are a priority for both cities, with Singapore having a more advanced system, while Bucharest is actively working on improving its transportation infrastructure; * Community engagement, while important, requires further attention in both cities to enhance the inclusivity of smart city initiatives. Conclusion: In conclusion, this study serves as a valuable resource for urban planners, policymakers, and stakeholders in understanding the nuances of smart city development and resilience. While Singapore stands as a beacon of success in various smart city indicators, Bucharest demonstrates potential and a willingness to adapt and grow in this domain. As cities worldwide embark on their smart city journeys, the lessons learned from Singapore and Bucharest provide invaluable insights into the path toward urban sustainability and resilience in the digital age.
Keywords: Bucharest, resilience, Singapore, smart city
Procedia PDF Downloads 69
1764 Digital Literacy, Assessment and Higher Education
Authors: James Moir
Abstract:
Recent evidence suggests that academic staff face difficulties in applying new technologies as a means of assessing higher order assessment outcomes such as critical thinking, problem solving and creativity. Although higher education institutional mission statements and course unit outlines purport the value of these higher order skills, there is still some question about how well academics are equipped to design curricula and, in particular, assessment strategies accordingly. Despite a rhetoric avowing the benefits of these higher order skills, it has been suggested that academics set assessment tasks up in such a way as to inadvertently lead students on the path towards lower order outcomes. This is a controversial claim, and one that this paper seeks to explore and critique in terms of challenging the conceptual basis of assessing higher order skills through new technologies. It is argued that the use of digital media in higher education is leading to a focus on students' ability to use and manipulate these products as an index of their flexibility and adaptability to the demands of the knowledge economy. This focus mirrors market flexibility and encourages programmes and courses of study to be rhetorically packaged as such. Curricular content has become a means to procure more or less elaborate aggregates of attributes. Higher education is now charged with producing graduates who are entrepreneurial and creative in order to drive forward economic sustainability. It is argued that critical independent learning can take place through the democratisation afforded by cultural and knowledge digitization and that assessment needs to acknowledge the changing relations between audience and author, expert and amateur, creator and consumer.
Keywords: higher education, curriculum, new technologies, assessment, higher order skills
Procedia PDF Downloads 375
1763 Convergence of Media in New Era
Authors: Mohamad Reza Asariha
Abstract:
The development and expansion of modern communication technologies at an extraordinary speed has caused fundamental changes in all economic, social, cultural and political areas of the world. The development of satellite and cable technologies, in addition to increasing the production and distribution of worldwide programming, made the economic case for media more appealing. The shift from an industrial economy to an information and service economy in developed countries brought unprecedented developments in the norms of world trade; as a result, media organisations expanded internationally, economic investment grew in many Asian countries, and, together with worldwide demand for media goods, new markets emerged. The media, both in the domestic arena of nations and in the international field, are of great significance and have an effective presence in the equation of gaining, maintaining and expanding economic power and wealth in the world. Moreover, technological advances and technological integration are critical factors in media structural change. This structural change took place under the influence of digitalization, the process that broke the boundaries between electronic media services. Until now, the regulation of mass media was entirely dependent on the particular modes of information transmission in general use. Digitization made it possible for any content to be easily transmitted through different electronic transmission modes, and this media convergence has had clear effects on media policies and the way mass media are regulated.
Keywords: media, digital era, digital ages, media convergence
Procedia PDF Downloads 74
1762 Implementation of Language Policy in a Swedish Multicultural Early Childhood School: A Development Project
Authors: Carina Hermansson
Abstract:
This presentation focuses on a development project that develops and documents the steps taken at a multilingual, multicultural K-5 school to improve the achievement levels of the pupils by focusing on language and literacy development across the timetable, in a digital classroom, and in all units of the school. This pre-formulated aim may thus be said to adhere to neoliberal educational and accountability policies in terms of its focus on digital learning, learning results, and national curriculum standards. In particular, the project aimed at improving the collaboration between the teachers, the leisure time unit, the librarians, the mother tongue teachers and the bilingual study counselors. This is a school environment characterized by cultural, ethnic, linguistic, and professional pluralization. The overarching aims of the research project were to scrutinize and analyze the factors enabling and obstructing the implementation of the Language Policy in a digital classroom. Theoretical framework: We apply multi-level perspectives in the analyses, inspired by Uljens' ideas about interactive and interpersonal first-order (teacher/students) and second-order (principal/teachers and other staff) educational leadership as described within the framework of discursive institutionalism, when we try to relate the Language Policy, educational policy, and curriculum to the administrative processes. Methodology/research design: The development project is based on recurring research circles where teachers, leisure time assistants, mother tongue teachers and study counselors speaking the mother tongue of the pupils, together with two researchers, discuss their digital literacy practices in the classroom. The researchers have, in collaboration with the principal, developed guidelines for the work, expressed in a Language Policy document.
In our understanding, the document is, however, only a part of the concept; the actions of the personnel and their reflections on their practice constitute the major part of the development project. One and a half years out of three have now passed, and the project has met with a number of difficulties that shed light on factors important for the progress of the development project. Field notes and recordings from the research circles, a survey of the personnel, and recorded group interviews provide data on the progress of the project. Expected conclusions: The problems experienced concern leadership, curriculum, the interplay between aims, technology, contents and methods, parents as customers taking their children to other schools, conflicting values, and interactional difficulties, that is, phenomena at different levels, ranging from the school level to the societal level, for example teachers being replaced as a result of the marketization of schools. Underlying assumptions held by actors at different levels also create obstacles. We find this study and the problems we are facing utterly important to share and discuss in an era with a steady flow of refugees arriving in the Nordic countries.
Keywords: early childhood education, language policy, multicultural school, school development project
Procedia PDF Downloads 145
1761 The Role of Libraries in the Context of Indian Knowledge Based Society
Authors: Sanjeev Sharma
Abstract:
We are living in the information age. Information is important not only to individuals but also to researchers, scientists, academicians and all others who work in their respective fields. The 21st century, also known as the electronic era, has brought several changes to the way libraries operate. In the present scenario, the acquisition of information resources and the implementation of new strategies have brought a revolution in libraries' structures and principles. In the digital era, the role of the library has become more important, as new information arrives every minute and the knowledge society wants to seek information at its desk. Libraries constantly manage electronic services and web-based information sources in a democratic way. The basic objective of every library is to save the time of the user, which depends on the quality and user-orientation of its services. With the advancement of information and communication technology, libraries should pay more attention to the development trends of the information society, which would help them adjust their development strategies to the information needs of the knowledge society. The knowledge-based society demands a re-definition of the position and objectives of all the institutions that work with information, knowledge, and culture. The situation in the era of Digital India is changing at a fast pace. Everyone wants information 24x7, and libraries have been recognized as one of the key elements of open access to information, which is crucial not only to individuals but also to a democratic, knowledge-based information society. Libraries are especially important nowadays, as the whole concept of education is focusing more and more on independent e-learning. The citizens of India must be able to find and use the relevant information.
Here libraries enter the stage: the essential function of libraries is to acquire, organize, store, preserve and retrieve for use publicly available material, in print as well as non-print form, packaged in such a way that, when it is needed, it can be found and put to use.
Keywords: knowledge, society, libraries, culture
Procedia PDF Downloads 140
1760 An Inductive Study of Pop Culture Versus Visual Art: Redefined from the Lens of Censorship in Bangladesh
Authors: Ahmed Tahsin Shams
Abstract:
The right to dissent through any form of art has been facing challenges through various strict legal measures, particularly since 2018, when the Government of Bangladesh passed the Digital Security Act 2018 (DSA). Therefore, references to 'popular' culture mostly include mainstream religious and national festivals and exclude critical intellectual representation of specific political allusions in any form of storytelling, whether wall art or fiction writing, in the post-DSA period in Bangladesh. Through inductive quantitative and qualitative methodological approaches, this paper aims to study the pattern of censorship, detention or custodial torture against artists, and the banning approach of the Bangladeshi government in the last five years, specifically against static visual arts, i.e., cartoons and wall art. The pattern drawn from these data attempts to redefine the popular notion of 'pop culture' as an unorganized folk or mass culture. The results also hypothesize how the post-DSA period forcefully constructs 'pop culture' as a very organized, repetitive deception of enlightenment or entertainment. Thus the argument theorizes that this censoring trend is a fascist approach that makes artists subaltern. In this socio-political context, two similar and overlapping elements, culture and art, are vastly separated into two streams: the former appreciated by the power, the latter a fearful concern for the power. Therefore, the purpose of art also shifts from entertainment to an act of rebellion, adding more layers to the new postmodern definition of 'pop culture.'
Keywords: popular culture, visual arts, censoring trend, fascist approach, subaltern, digital security act
Procedia PDF Downloads 77
1759 Using Building Information Modelling to Mitigate Risks Associated with Health and Safety in the Construction and Maintenance of Infrastructure Assets
Authors: Mohammed Muzafar, Darshan Ruikar
Abstract:
BIM, an acronym for Building Information Modelling, relates to the practice of creating a computer-generated model which is capable of displaying the planning, design, construction and operation of a structure. The resulting simulation is a data-rich, object-oriented, intelligent and parametric digital representation of the facility, from which views and data appropriate to various users' needs can be extracted and analysed to generate information that can be used to make decisions and to improve the process of delivering the facility. BIM also refers to a shift in culture that will influence the way the built environment and infrastructure operate and how they are delivered. One of the main issues of concern in the construction industry at present in the UK is its record on Health & Safety (H&S). It is, therefore, important that new technologies such as BIM are developed to help improve the quality of health and safety. Historically, the H&S record of the construction industry in the UK is relatively poor compared with that of the manufacturing industries. BIM and the digital environment it operates within now allow us to use design and construction data in a more intelligent way. It allows data generated by the design process to be re-purposed to contribute to improving efficiencies in other areas of a project. This evolutionary step in design is creating exciting opportunities not only for the designers themselves but also for every stakeholder in any given project. From designers, engineers and contractors through to H&S managers, BIM is accelerating a cultural change. The paper introduces the concept behind a research project that mitigates the H&S risks associated with the construction, operation and maintenance of assets through the adoption of BIM.
Keywords: building information modeling, BIM levels, health, safety, integration
Procedia PDF Downloads 254
1758 Social Media Resignation the Only Way to Protect User Data and Restore Cognitive Balance, a Literature Review
Authors: Rajarshi Motilal
Abstract:
The birth of the Internet and the rise of social media marked an important chapter in the history of humankind. Often termed the fourth scientific revolution, the Internet has changed human lives and cognition. The birth of Web 2.0, followed by the launch of social media and social networking sites, added another milestone to these technological advancements, in which connectivity and the influx of information became dominant. With billions of individuals using the internet and social media sites in the 21st century, "users" became "consumers", and orthodox marketing reshaped itself into digital marketing. Furthermore, organisations started using sophisticated algorithms to predict consumer purchase behaviour and manipulate it to sustain themselves in such a competitive environment. The rampant storage and analysis of individual data became the new normal, raising many questions about data privacy. Excessive usage of the Internet also brought other problems: addiction, scavenging for societal approval and instant gratification, subsequently leading to a collective dualism, isolation, and finally, depression. This study aims to determine the relationship between social media usage in the modern age and the rise of psychological and cognitive imbalances in human minds. The literature review is a timely addition to the existing work, at a time when the world is constantly debating whether social media resignation is the only way to protect user data and restore the decaying cognitive balance.
Keywords: social media, digital marketing, consumer behaviour, internet addiction, data privacy
Procedia PDF Downloads 76
1757 Benefits of Gamification in Agile Software Project Courses
Authors: Nina Dzamashvili Fogelström
Abstract:
This paper examines the concepts of Game-Based Learning and Gamification. A literature survey found increased interest in these concepts in academia, limited evidence of a positive effect on student motivation and academic performance, but also a certain scepticism about adding games to traditional educational activities. A small-scale empirical study presented in this paper aims to evaluate student experience and the usefulness of Game-Based Learning and Gamification for a better understanding of the threshold concepts in software engineering project courses. The participants of the study were 22 second-year students from the bachelor's program in software engineering at Blekinge Institute of Technology. As a part of the course instruction, the students were introduced to a digital game specifically designed to simulate an agile software project. The game mechanics were designed to allow manipulation of the agile concept of team velocity. After playing the game, the students were surveyed to measure the degree of perceived increase in understanding of the studied threshold concept. The students were also asked whether they would like to have games included in their education. The results show that the majority of the students found the game helpful in increasing their understanding of the threshold concept, and most indicated that they would like to see games included in their education. These results are encouraging. Since the study was of small scale and based on convenience sampling, more studies in the area are recommended.
Keywords: agile development, gamification, game based learning, digital games, software engineering, threshold concepts
Procedia PDF Downloads 167
1756 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels
Authors: Tal Remez, Or Litany, Alex Bronstein
Abstract:
The pursuit of smaller pixel sizes at ever-increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction-limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has recently been embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
Keywords: binary pixels, maximum likelihood, neural networks, sparse coding
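The ML inversion at the heart of such reconstructions can be illustrated with a toy sketch (this is not the authors' algorithm, which adds a sparse synthesis prior and a hardware-friendly approximation): under the assumed Poisson model, a one-bit jot reads 1 when at least one photon arrives, so the firing probability is 1 - exp(-lam), and the empirical firing rate over a neighbourhood of jots can be inverted for the light intensity lam.

```python
import math

def ml_intensity(binary_samples):
    """ML estimate of light intensity (mean photon count per jot)
    from one-bit threshold measurements.

    Under Poisson statistics a threshold-1 jot fires with probability
    P(1) = 1 - exp(-lam); inverting the empirical firing rate gives
    the maximum-likelihood estimate of lam.
    """
    k = len(binary_samples)
    ones = sum(binary_samples)
    if ones == k:            # saturated neighbourhood: estimate unbounded
        return float("inf")
    p_hat = ones / k         # empirical firing rate
    return -math.log(1.0 - p_hat)
```

For example, a neighbourhood in which half the jots fired yields lam = ln 2, roughly 0.69 photons per jot on average.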
Procedia PDF Downloads 202
1755 Preserving Digital Arabic Text Integrity Using Blockchain Technology
Authors: Zineb Touati Hamad, Mohamed Ridda Laouar, Issam Bendib
Abstract:
With the massive development of technology today, the Arabic language has gained a prominent position among the languages most used for writing articles, expressing opinions, and citing on many websites, despite its sensitivity in terms of structure, language skills, diacritics, writing methods, etc. In the context of the spread of the Arabic language, the Holy Quran represents the most prevalent Arabic text today on many applications and websites, used for citation purposes or for reading and learning rituals. The Quranic verses/surahs are published quickly and without cost, which may cause great concern about keeping the content safe from tampering and alteration. To protect the content of texts from distortion, it is necessary to refer to the original database and conduct a comparison process to extract the percentage of distortion. The disadvantage of this method is that it takes time; in addition, there is no guarantee of the integrity of the database itself, as it belongs to one central party. Blockchain technology today represents the best way to maintain immutable content. A blockchain is a distributed database that stores information in blocks linked to each other through cryptographic hashes, where any modification of a block can be easily detected. To exploit these advantages, we seek in this paper to justify the use of this technique in preserving the integrity of Arabic texts sensitive to change by building a decentralized framework to authenticate and verify the integrity of the digital Quranic verses/surahs spread on websites.
Keywords: Arabic text, authentication, blockchain, integrity, Quran, verification
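The hash-chaining idea the abstract relies on can be sketched minimally as follows (illustrative only; the function names and the choice of SHA-256 are assumptions, not part of the authors' framework): each block stores a verse together with the hash of the previous block, so any tampering with a stored verse invalidates every later link.

```python
import hashlib

def make_chain(verses):
    """Build a minimal blockchain-like chain over a list of verses.
    Each block records the verse text, the previous block's hash,
    and its own hash computed over both."""
    chain = []
    prev_hash = "0" * 64  # genesis value
    for verse in verses:
        block_hash = hashlib.sha256((prev_hash + verse).encode("utf-8")).hexdigest()
        chain.append({"text": verse, "prev": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return chain

def verify_chain(chain):
    """Recompute every hash from the genesis value onward; any edit to a
    stored verse (or to a stored hash) breaks the chain and is detected."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256((prev_hash + block["text"]).encode("utf-8")).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True
```

Because verification only recomputes hashes, it needs no comparison against a trusted central database, which is precisely the advantage the abstract claims over the centralized approach.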
Procedia PDF Downloads 164
1754 Language Inequalities in the Algerian Public Space: A Semiotic Landscape Analysis
Authors: Sarah Smail
Abstract:
Algeria has been subject to countless conquests and invasions that resulted in a diverse linguistic repertoire. The sociolinguistic situation of the country makes linguistic landscape analysis pertinent and has led to a growing body of linguistic landscape studies, which have mainly focused on identifying the sociolinguistic situation of the country through the analysis of shop names. The present research adds to the existing literature by offering another perspective on the analysis of signs, combining the physical and the digital semiotic landscape. The powerful oil, gas and agri-food industries in Algeria make it interesting to focus on the commodification of natural resources in order to identify the language and semiotic resources deployed in the Algerian public scene, as well as the visibility of linguistic inequalities and minorities in the business domain. The study discusses the semiotic landscape of three trade cities: Bejaia, Setif and Hassi-Messaoud, drawing in addition on interviews conducted with business owners and graphic designers and questionnaires administered to business employees. The study relies on Gorter’s multilingual inequalities in public space (MIPS) model (2021) and Irvine and Gal’s language ideology and linguistic differentiation (2000). The preliminary results demonstrate the sociolinguistic injustice existing in the business domain, e.g., the exclusion of the official languages, the dominance of foreign languages, and the excessive use of the Roman script.
Keywords: semiotic landscaping, digital scapes, language commodification, linguistic inequalities, business signage
Procedia PDF Downloads 108
1753 Research on Innovation Service based on Science and Technology Resources in Beijing-Tianjin-Hebei
Authors: Runlian Miao, Wei Xie, Hong Zhang
Abstract:
In China, Beijing-Tianjin-Hebei is regarded as a strategically important region because it enjoys the highest levels of economic development, openness, innovative capacity and population. The integrated development of the Beijing-Tianjin-Hebei region has been increasingly emphasized by the government in recent years. In 2014, it became one of the great national development strategies of the Chinese central government. In 2015, the Coordinated Development Planning Compendium for the Beijing-Tianjin-Hebei Region was approved. Such decisions signify that the Beijing-Tianjin-Hebei region is expected to lead innovation-driven economic development in China. As an essential factor in achieving national innovation-driven development and a significant part of the regional industry chain, the optimization of science and technology resource allocation will exert great influence on regional economic transformation, upgrading and innovation-driven development. However, unbalanced distribution, poor sharing of resources and the existence of isolated information islands have produced differing internal innovation capability, vitality and efficiency, which impede the innovation and growth of the whole region. Against this background, integrating and vitalizing regional science and technology resources and then establishing a high-end, fast-responding and precise innovation service system based on regional resources would be of great significance for the integrated development of the Beijing-Tianjin-Hebei region and even for addressing the problem of unbalanced and insufficient development in China. This research uses literature review and field investigation, applies related theories prevailing at home and abroad, and centres on the service path of science and technology resources for innovation.
Based on the status quo and problems of regional development in Beijing-Tianjin-Hebei, the author proposes, theoretically, to combine regional economics and new economic geography to explore solutions to the problem of low resource allocation efficiency. Further, the author puts forward the application of digital maps to resource management and the building of a platform for information co-construction and sharing. Finally, the author presents a specific service mode of ‘science and technology plus digital map plus intelligence research plus platform service’ and a suggestion for a co-building and sharing mechanism of 3 (Beijing, Tianjin and Hebei) plus 11 (important cities in Hebei Province).
Keywords: Beijing-Tianjin-Hebei, science and technology resources, innovation service, digital platform
Procedia PDF Downloads 161
1752 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon
Authors: Laura M. F. Bertens
Abstract:
There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. 
Ways of organizing data (for instance, ordering art according to artist) have come to feel natural and neutral, and implicit biases together with the historically uneven distribution of power have resulted in the underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view of the traditional canon. Although modelling will always imply a simplification, and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.
Keywords: canon, ontological modelling, Protege Ontology Editor, social network modelling, Visone
Procedia PDF Downloads 127
1751 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, with strong policy instruments utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. The data analytics expertise is not useful unless the manufacturing process information is utilized.
A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: analytics, digitization, industry 4.0, manufacturing
Procedia PDF Downloads 111
1750 Digital Image Correlation: Metrological Characterization in Mechanical Analysis
Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano
Abstract:
Digital Image Correlation (DIC) is a newly developed optical technique that is spreading across all engineering sectors because it allows the non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state must be known without using strain gages, which are the most widely used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, allowing high-definition mapping of displacements and deformations. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic as well as composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used, or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. A specific software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during the deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets, in both static and dynamic loading conditions, by comparison between DIC and strain gauge measurements.
In the static test, interesting results have been obtained thanks to an excellent agreement between the two measuring techniques. In addition, the deformation detected by the DIC is compliant with the result of a FEM simulation. In the dynamic test, the DIC was able to follow with good accuracy the periodic deformation of the specimen, giving results coherent with those given by FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters, such as the optical focusing, the parameters chosen to perform the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of industry, especially the aerospace sector.
Keywords: accuracy, deformation, image correlation, mechanical analysis
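The core of DIC, matching a subset of the reference image inside the deformed image by maximizing a correlation score, can be sketched in one dimension. This is a toy with integer-pixel shifts and assumed function names; real DIC software works on 2-D subsets with subpixel interpolation.

```python
def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-length
    intensity subsets; 1.0 means a perfect match. Assumes each
    subset has nonzero variance."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def best_shift(ref, deformed, subset_len):
    """Find the integer displacement of a subset of the reference
    signal inside the deformed signal by maximizing ZNCC over all
    candidate positions."""
    subset = ref[:subset_len]
    scores = [zncc(subset, deformed[s:s + subset_len])
              for s in range(len(deformed) - subset_len + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Because ZNCC normalizes out mean intensity and contrast, the match is robust to uniform illumination changes between the reference and deformed images, which is one reason correlation-based matching is preferred over raw intensity differences.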
Procedia PDF Downloads 311
1749 DEM Based Surface Deformation in Jhelum Valley: Insights from River Profile Analysis
Authors: Syed Amer Mahmood, Rao Mansor Ali Khan
Abstract:
This study deals with the remote sensing analysis of tectonic deformation and its implications for understanding the regional uplift conditions in the lower Jhelum and eastern Potwar. The identification and mapping of active structures is an important issue for assessing seismic hazards and understanding the Quaternary deformation of the region. Digital elevation models (DEMs) provide an opportunity to quantify land surface geometry in terms of elevation and its derivatives. Tectonic movement along faults is often reflected by characteristic geomorphological features such as elevation, stream offsets, slope breaks and the contributing drainage area. River profile analysis in this region using the SRTM digital elevation model gives information about the tectonic influence on the local drainage network. The steepness and concavity indices have been calculated from the power-law scaling relation between channel slope and drainage area under steady-state conditions. An uplift rate map was prepared after carefully analysing the local drainage network, showing uplift rates in mm/year. The active faults in the region control local drainages, and the deflection of stream channels is further evidence of recent fault activity. The results show variable relative uplift conditions along the MBT and Riasi faults and represent a striking example of the recency of uplift, as well as the influence of active tectonics on the evolution of young orogens.
Keywords: Quaternary deformation, SRTM DEM, geomorphometric indices, active tectonics, MBT
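The steepness and concavity indices mentioned here come from the standard stream-power scaling S = ks * A^(-theta) between local channel slope S and contributing drainage area A. A hedged sketch of the log-log regression used to recover ks (steepness) and theta (concavity) from slope-area data follows; it is illustrative only, not the authors' exact DEM workflow.

```python
import math

def fit_steepness_concavity(areas, slopes):
    """Fit the stream-power scaling S = ks * A**(-theta) by ordinary
    least squares in log-log space, where log S = log ks - theta * log A.
    Returns (ks, theta)."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(s) for s in slopes]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope and intercept of the log-log regression line
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return math.exp(a), -b  # ks = exp(intercept), theta = -slope
```

In practice slope-area pairs are extracted per channel reach from the DEM, and spatial variation of ks along a river (at fixed reference concavity) is what flags differential uplift across structures such as the MBT.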
Procedia PDF Downloads 348
1748 Problems and Challenges in Social Economic Research after COVID-19: The Case Study of Province Sindh
Authors: Waleed Baloch
Abstract:
This paper investigates the problems and challenges in social-economic research in the case study of the province of Sindh after the COVID-19 pandemic; the pandemic has significantly impacted various aspects of society and the economy, necessitating a thorough examination of the resulting implications. The study also investigates potential strategies and solutions to mitigate these challenges, ensuring the continuation of robust social and economic research in the region. Through an in-depth analysis of data and interviews with key stakeholders, the study reveals several significant findings. Firstly, researchers encountered difficulties in accessing primary data due to disruptions caused by the pandemic, leading to limitations in the scope and accuracy of their studies. Secondly, the study highlights the challenges faced in conducting fieldwork, such as restrictions on travel and face-to-face interactions, which impacted the ability to gather reliable data. Lastly, the research identifies the need for innovative research methodologies and digital tools to adapt to the new research landscape brought about by the pandemic. The study concludes by proposing recommendations to address these challenges, including utilizing remote data collection methods, leveraging digital technologies for data analysis, and establishing collaborations among researchers to overcome resource constraints. By addressing these issues, researchers in the social economic field can effectively navigate the post-COVID-19 research landscape, facilitating a deeper understanding of the socioeconomic impacts and enabling evidence-based policy interventions.
Keywords: social economic, sociology, developing economies, COVID-19
Procedia PDF Downloads 63
1747 The Model of Open Cooperativism: The Case of Open Food Network
Authors: Vangelis Papadimitropoulos
Abstract:
This paper is part of the research program “Techno-Social Innovation in the Collaborative Economy”, funded by the Hellenic Foundation for Research and Innovation (H.F.R.I.) for the years 2022-2024. The paper showcases the Open Food Network (OFN) as an open-sourced digital platform supporting short food supply chains in local agricultural production and consumption. The paper outlines the research hypothesis, the theoretical framework, and the methodology of research, as well as the findings and conclusions. Research hypothesis: the model of open cooperativism as a vehicle for systemic change in the agricultural sector. Theoretical framework: the research reviews the OFN as an illustrative case study of the three-zoned model of open cooperativism. The OFN is considered a paradigmatic case of the model of open cooperativism inasmuch as it produces commons, it consists of multiple stakeholders including ethical market entities, and it is variously supported by local authorities across the globe, the latter prefiguring, in miniature, the role of a partner state. Methodology: the research employs Ernesto Laclau and Chantal Mouffe’s discourse analysis (elements, floating signifiers, nodal points, discourses, logics of equivalence and difference) to analyse the breadth of empirical data gathered through literature review, digital ethnography, a survey, and in-depth interviews with core OFN members. Discourse analysis classifies OFN floating signifiers, nodal points, and discourses into four themes: value proposition, governance, economic policy, and legal policy. Findings: OFN floating signifiers align around the following nodal points and discourses: “digital commons”, “short food supply chains”, “sustainability”, “local”, “the elimination of intermediaries” and “systemic change”. The current research identifies a lack of common ground on what the discourse of “systemic change” signifies on the premises of the OFN’s value proposition.
The lack of a common mission may be detrimental to the formation of a common strategy that would perhaps be deemed necessary to bring about systemic change in agriculture. Conclusions: drawing on Laclau and Mouffe’s discourse theory of hegemony, the research introduces a chain of equivalence by aligning discourses such as “agro-ecology”, “commons-based peer production”, “partner state” and “ethical market entities” under the model of open cooperativism, juxtaposed against the current hegemony of neoliberalism, which articulates discourses such as “market fundamentalism”, “privatization”, “green growth” and “the capitalist state” to promote corporatism and entrepreneurship. The research makes the case that for the OFN to further agroecology and challenge the current hegemony of industrial agriculture, it is vital that it opens up its supply chains into equivalent sectors of the economy, civil society, and politics to form a chain of equivalence linking together ethical market entities, the commons and a partner state around the model of open cooperativism.
Keywords: sustainability, the digital commons, open cooperativism, innovation
Procedia PDF Downloads 72
1746 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax Ceramic Shell technique. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, obtaining an empty mold that is later filled with molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, sculptures created with this technique have a size limit. The problem is that when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP printer allows obtaining more detailed and smaller pieces than an FDM printer. Such small models are quite difficult and complex to melt using the lost-wax Ceramic Shell technique. As alternatives, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and fire the molds.
Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to properly distribute the metal. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, 3D printed micro-sculptures are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for 3D printing, both with PLA and with resin, and first tests are being carried out to validate the electroforming of micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
Procedia PDF Downloads 135
1745 The Effect of 12-Week Pilates Training on Flexibility and Level of Perceived Exertion of Back Muscles among Karate Players
Authors: Seyedeh Nahal Sadiri, Ardalan Shariat
Abstract:
Developing flexibility through Pilates can be useful for karate players by reducing the stiffness of muscles and tendons. This study aimed to determine the effects of 12 weeks of Pilates training on the flexibility and the level of perceived exertion of back muscles among karate players. In this experimental study, 29 male karate players (age: 16-18 years) were randomized to Pilates (n=15) and control (n=14) groups, and assessments were done at baseline and after the 12-week intervention. Both groups completed 12 weeks of intervention (2 hours of training, 3 times weekly). The experimental group performed 30 minutes of Pilates within their warm-up and preparation phase, whereas the control group only attended their usual karate training. A digital backward flexmeter was used to evaluate trunk extensor flexibility, and a digital forward flexmeter was used to measure trunk flexor flexibility. The Borg CR-10 scale was also used to determine the perceived exertion of back muscles. Independent-samples t-tests and paired-samples t-tests were used to analyze the data. There was a significant difference between the mean scores of the experimental and control groups in backward trunk flexibility (P < 0.05) and forward trunk flexibility (P < 0.05) after the 12-week intervention. The results of the Borg CR-10 scale showed a significant improvement in the Pilates group (P < 0.05). Karate instructors, coaches, and athletes can integrate Pilates exercises with karate training in order to improve the flexibility and the level of perceived exertion of back muscles.
Keywords: pilates training, karate players, flexibility, Borg CR-10
Procedia PDF Downloads 165
1744 Development of Star Image Simulator for Star Tracker Algorithm Validation
Authors: Zoubida Mahi
Abstract:
A successful satellite mission in space requires a reliable attitude and orbit control system to command, control and position the satellite in appropriate orbits. Several sensors are used for attitude control, such as magnetic sensors, earth sensors, horizon sensors, gyroscopes, and solar sensors. The star tracker is the most accurate sensor compared to the others, and it is able to offer high-accuracy attitude control without the need for prior attitude information. There are mainly three approaches in star sensor research: digital simulation, hardware-in-the-loop simulation, and field tests of star observation. In the digital simulation approach, all of the processing is done in software, including star image simulation. Hence, it is necessary to develop star image simulation software that can simulate real space environments and various star sensor configurations. In this paper, we present a new stellar image simulation tool that is used to test and validate star sensor algorithms; the developed tool allows simulating stellar images with several types of noise, such as background noise, Gaussian noise, Poisson noise and multiplicative noise, and several scenarios that occur in space, such as the presence of the moon, optical system problems, illumination and false objects. We also present a new star extraction algorithm based on a new centroid calculation method. We compared our algorithm with other star extraction algorithms from the literature, and the results obtained show the star extraction capability of the proposed algorithm.
Keywords: star tracker, star simulation, star detection, centroid, noise, scenario
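The abstract does not specify its new centroid calculation, so as context only, the standard baseline it is compared against is the intensity-weighted centroid over a detected star window; a minimal sketch (function name and background handling are assumptions):

```python
def centroid(window, threshold=0.0):
    """Intensity-weighted centroid of a star spot in a 2-D pixel window.
    Pixels at or below the background threshold are ignored; returns
    (x, y) in pixel coordinates, or None if no pixel exceeds it."""
    total = cx = cy = 0.0
    for row_idx, row in enumerate(window):
        for col_idx, value in enumerate(row):
            w = value - threshold  # background-subtracted weight
            if w > 0:
                total += w
                cx += w * col_idx
                cy += w * row_idx
    if total == 0:
        return None
    return cx / total, cy / total
```

Because the centroid is a weighted average over the star's point-spread blur, it localizes the star to subpixel accuracy, which is what ultimately sets the attitude accuracy of the tracker.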
Procedia PDF Downloads 96
1743 Simulation and Controller Tuning in a Photo-Bioreactor Applying the Taguchi Method
Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi
Abstract:
This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with the control and optimization of the parameters of a digital controller by the Taguchi method, carried out with MATLAB and Qualitek-4 software. Since, in addition to parameters such as temperature, dissolved carbon dioxide and biomass, new physical parameters such as light intensity and physiological conditions like photosynthetic efficiency and light inhibition are involved in the biological process, control faces many challenges. Photo-bioreactors not only facilitate the efficient commercial production of microalgae as feed for aquaculture and as food supplements, but also serve as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light of the bioreactor, and the microalgae growth rate and the carbon dioxide concentration inside the bioreactor are investigated. The optimal values of the controller parameters obtained from the S/N and ANOVA analyses in the Qualitek-4 software were compared with those given by the reaction curve, Cohen-Coon and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of the control methods mentioned, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor. Compared to the other control methods, this method showed higher stability and a shorter settling time.
Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method
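The Taguchi method's core quantity is the signal-to-noise ratio; for a response that should be minimized, such as control error, the standard "smaller is better" form is SN = -10 * log10(mean(y_i^2)), and candidate factor levels are ranked by their S/N. A hedged sketch of this calculation follows (illustrative only, not the authors' Qualitek-4 analysis; the function names are assumptions):

```python
import math

def sn_smaller_is_better(results):
    """Taguchi signal-to-noise ratio for a 'smaller is better' response
    (e.g. control error over repeated trials):
    SN = -10 * log10(mean(y_i^2)). Larger SN means a better, more
    robust setting."""
    mean_sq = sum(y * y for y in results) / len(results)
    return -10.0 * math.log10(mean_sq)

def best_level(trials):
    """Pick the factor level whose trials give the highest S/N ratio.
    `trials` maps a level name to its list of measured responses."""
    return max(trials, key=lambda lvl: sn_smaller_is_better(trials[lvl]))
```

In a full Taguchi study the levels come from an orthogonal array, and ANOVA on the S/N values then apportions each factor's contribution, which is the role Qualitek-4 plays in the abstract.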
Procedia PDF Downloads 394