Search results for: digital libraries
1808 DEM-Based Surface Deformation in Jhelum Valley: Insights from River Profile Analysis
Authors: Syed Amer Mahmood, Rao Mansor Ali Khan
Abstract:
This study deals with the remote sensing analysis of tectonic deformation and its implications for understanding regional uplift conditions in the lower Jhelum and eastern Potwar. Identifying and mapping active structures is important both for assessing seismic hazards and for understanding the Quaternary deformation of the region. Digital elevation models (DEMs) provide an opportunity to quantify land surface geometry in terms of elevation and its derivatives. Tectonic movement along faults is often reflected in characteristic geomorphological features such as elevation, stream offsets, slope breaks, and contributing drainage area. River profile analysis in this region using the SRTM digital elevation model gives information about the tectonic influence on the local drainage network. The steepness and concavity indices were calculated from the power-law scaling relation between channel slope and drainage area under steady-state conditions. An uplift rate map, in mm/year, was prepared after careful analysis of the local drainage network. The active faults in the region control local drainages, and the deflection of stream channels is further evidence of recent fault activity. The results show variable relative uplift conditions along the MBT and Riasi faults and represent a striking example of the recency of uplift, as well as of the influence of active tectonics on the evolution of young orogens.
Keywords: quaternary deformation, SRTM DEM, geomorphometric indices, active tectonics, MBT
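As a minimal illustration of the slope-area analysis the abstract describes, the steepness index k_s and concavity index theta can be recovered by a log-log least-squares fit of the power law S = k_s * A^(-theta). The data below are synthetic stand-ins, not values from the study:

```python
import numpy as np

# Hypothetical slope-area pairs for one channel (A in m^2, S dimensionless);
# real values would come from SRTM DEM drainage analysis.
area = np.array([1e4, 1e5, 1e6, 1e7, 1e8])   # contributing drainage area
slope = 0.5 * area ** -0.45                   # synthetic S = k_s * A^(-theta)

# Fit log10(S) = log10(k_s) - theta * log10(A) by least squares.
coeffs = np.polyfit(np.log10(area), np.log10(slope), 1)
theta = -coeffs[0]        # concavity index
k_s = 10 ** coeffs[1]     # steepness index

print(theta, k_s)  # recovers the synthetic values 0.45 and 0.5
```

With noisy real data the same regression yields the best-fit indices rather than exact recovery.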
Procedia PDF Downloads 348
1807 Problems and Challenges in Social Economic Research after COVID-19: The Case Study of Province Sindh
Authors: Waleed Baloch
Abstract:
This paper investigates the problems and challenges in socio-economic research in the province of Sindh after the COVID-19 pandemic; the pandemic has significantly impacted various aspects of society and the economy, necessitating a thorough examination of the resulting implications. The study also investigates potential strategies and solutions to mitigate these challenges, ensuring the continuation of robust social and economic research in the region. Through an in-depth analysis of data and interviews with key stakeholders, the study reveals several significant findings. Firstly, researchers encountered difficulties in accessing primary data due to disruptions caused by the pandemic, limiting the scope and accuracy of their studies. Secondly, the study highlights the challenges faced in conducting fieldwork, such as restrictions on travel and face-to-face interactions, which affected the ability to gather reliable data. Lastly, the research identifies the need for innovative research methodologies and digital tools to adapt to the new research landscape brought about by the pandemic. The study concludes by proposing recommendations to address these challenges, including utilizing remote data collection methods, leveraging digital technologies for data analysis, and establishing collaborations among researchers to overcome resource constraints. By addressing these issues, researchers in the socio-economic field can effectively navigate the post-COVID-19 research landscape, deepening understanding of the socioeconomic impacts and enabling evidence-based policy interventions.
Keywords: social economic, sociology, developing economies, COVID-19
Procedia PDF Downloads 64
1806 The Model of Open Cooperativism: The Case of Open Food Network
Authors: Vangelis Papadimitropoulos
Abstract:
This paper is part of the research program “Techno-Social Innovation in the Collaborative Economy”, funded by the Hellenic Foundation for Research and Innovation (H.F.R.I.) for the years 2022-2024. The paper showcases the Open Food Network (OFN), an open-source digital platform supporting short food supply chains in local agricultural production and consumption. The paper outlines the research hypothesis, the theoretical framework, and the methodology of research, as well as the findings and conclusions. Research hypothesis: the model of open cooperativism as a vehicle for systemic change in the agricultural sector. Theoretical framework: the research reviews the OFN as an illustrative case study of the three-zoned model of open cooperativism. The OFN is considered a paradigmatic case of the model of open cooperativism inasmuch as it produces commons, it consists of multiple stakeholders including ethical market entities, and it is variously supported by local authorities across the globe, the latter prefiguring, in miniature, the role of a partner state. Methodology: the research employs Ernesto Laclau and Chantal Mouffe’s discourse analysis (elements, floating signifiers, nodal points, discourses, logics of equivalence and difference) to analyse the breadth of empirical data gathered through literature review, digital ethnography, a survey, and in-depth interviews with core OFN members. Discourse analysis classifies OFN floating signifiers, nodal points, and discourses into four themes: value proposition, governance, economic policy, and legal policy. Findings: OFN floating signifiers align around the following nodal points and discourses: “digital commons”, “short food supply chains”, “sustainability”, “local”, “the elimination of intermediaries” and “systemic change”. The current research identifies a lack of common ground on what the discourse of “systemic change” signifies within the OFN’s value proposition.
The lack of a common mission may be detrimental to the formation of a common strategy, which would likely be necessary to bring about systemic change in agriculture. Conclusions: drawing on Laclau and Mouffe’s discourse theory of hegemony, the research introduces a chain of equivalence by aligning discourses such as “agro-ecology”, “commons-based peer production”, “partner state” and “ethical market entities” under the model of open cooperativism, juxtaposed against the current hegemony of neoliberalism, which articulates discourses such as “market fundamentalism”, “privatization”, “green growth” and “the capitalist state” to promote corporatism and entrepreneurship. The research makes the case that, for the OFN to further agroecology and challenge the current hegemony of industrial agriculture, it is vital that it open up its supply chains to equivalent sectors of the economy, civil society, and politics, forming a chain of equivalence that links ethical market entities, the commons, and a partner state around the model of open cooperativism.
Keywords: sustainability, the digital commons, open cooperativism, innovation
Procedia PDF Downloads 74
1805 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low-Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, in both cases from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, leaving an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, 3D printing can manufacture parts that are impossible to create with manual techniques. However, the sculptures created with this technique have a size limit: when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP printer can produce smaller, more detailed pieces than an FDM printer, but such small models are quite difficult and complex to melt out using the lost-wax Ceramic Shell technique. As alternatives, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of placing the model in a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and fire them.
Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to distribute the metal properly. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for use with 3D printers, both in PLA and in resin, and first tests are being carried out to validate the electroforming of micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
Procedia PDF Downloads 135
1804 The Effect of 12-Week Pilates Training on Flexibility and Level of Perceived Exertion of Back Muscles among Karate Players
Authors: Seyedeh Nahal Sadiri, Ardalan Shariat
Abstract:
Developing flexibility through pilates can benefit karate players by reducing the stiffness of muscles and tendons. This study aimed to determine the effects of 12 weeks of pilates training on flexibility and on the level of perceived exertion of back muscles among karate players. In this experimental study, 29 male karate players (age: 16-18 years) were randomized to pilates (n=15) and control (n=14) groups, and assessments were done at baseline and after the 12-week intervention. Both groups completed 12 weeks of intervention (2 hours of training, 3 times weekly). The experimental group performed 30 minutes of pilates within their warm-up and preparation phase, while the control group only attended their usual karate training. A digital backward flexmeter was used to evaluate trunk extensor flexibility, and a digital forward flexmeter to measure trunk flexor flexibility. The Borg CR-10 scale was used to determine the perceived exertion of back muscles. Independent samples t-tests and paired sample t-tests were used to analyze the data. There was a significant difference between the mean scores of the experimental and control groups in backward trunk flexibility (P < 0.05) and forward trunk flexibility (P < 0.05) after the 12-week intervention. The results of the Borg CR-10 scale showed a significant improvement in the pilates group (P < 0.05). Karate instructors, coaches, and athletes can integrate pilates exercises with karate training in order to improve flexibility and the level of perceived exertion of back muscles.
Keywords: pilates training, karate players, flexibility, Borg CR-10
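The independent samples t-test used above can be sketched in a few lines of code (pooled-variance form). The group values below are hypothetical flexibility gains, not the study's measurements:

```python
import math

def independent_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical flexibility gains (cm) for a pilates and a control group.
pilates = [4.1, 3.8, 4.5, 4.0, 4.3]
control = [1.2, 1.5, 0.9, 1.4, 1.1]
t = independent_t(pilates, control)
print(t)  # large positive t: pilates group gained more
```

The resulting t would then be compared against the critical value for na + nb - 2 degrees of freedom at the chosen significance level.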
Procedia PDF Downloads 165
1803 Development of Star Image Simulator for Star Tracker Algorithm Validation
Authors: Zoubida Mahi
Abstract:
A successful satellite mission in space requires a reliable attitude and orbit control system to command, control, and position the satellite in appropriate orbits. Several sensors are used for attitude control, such as magnetic sensors, earth sensors, horizon sensors, gyroscopes, and solar sensors. The star tracker is the most accurate of these sensors, and it is able to offer high-accuracy attitude control without the need for prior attitude information. There are mainly three approaches in star sensor research: digital simulation, hardware-in-the-loop simulation, and field tests of star observation. In the digital simulation approach, all of the processes are done in software, including star image simulation. Hence, it is necessary to develop star image simulation software that can simulate real space environments and various star sensor configurations. In this paper, we present a new stellar image simulation tool used to test and validate star sensor algorithms; the developed tool can simulate stellar images with several types of noise, such as background noise, Gaussian noise, Poisson noise, and multiplicative noise, and several scenarios that occur in space, such as the presence of the moon, optical system defects, stray illumination, and false objects. We also present a new star extraction algorithm based on a new centroid calculation method. We compared our algorithm with other star extraction algorithms from the literature, and the results obtained show the star extraction capability of the proposed algorithm.
Keywords: star tracker, star simulation, star detection, centroid, noise, scenario
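A minimal sketch of the two building blocks named above, a simulated star image with Gaussian noise and a centroid estimate, might look like the following. The intensity-weighted centroid shown is the conventional method, not the abstract's new one, and all numbers (image size, star position, noise level) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 32x32 star image: a Gaussian point-spread function centred at
# (12.3, 18.7) plus a constant background and Gaussian read-out noise
# (one of the noise types a star image simulator supports).
y, x = np.mgrid[0:32, 0:32]
true_y, true_x = 12.3, 18.7
star = 200.0 * np.exp(-((x - true_x) ** 2 + (y - true_y) ** 2) / (2 * 1.5 ** 2))
image = star + 10.0 + rng.normal(0.0, 2.0, star.shape)

# Background-subtracted, intensity-weighted centroid with a 5-sigma cut.
background = np.median(image)
signal = np.clip(image - background - 5 * 2.0, 0, None)
cy = (signal * y).sum() / signal.sum()
cx = (signal * x).sum() / signal.sum()
print(cy, cx)  # close to the true sub-pixel position (12.3, 18.7)
```

Sub-pixel accuracy of exactly this kind is what makes centroiding the standard first stage of star extraction.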
Procedia PDF Downloads 97
1802 Simulation and Controller Tuning in a Photo-Bioreactor Using the Taguchi Method
Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi
Abstract:
This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with control and optimization of the digital controller's parameters by the Taguchi method, carried out with MATLAB and Qualitek-4 software. Because biological processes involve parameters such as temperature, dissolved carbon dioxide, and biomass, together with physical parameters such as light intensity and physiological conditions such as photosynthetic efficiency and light inhibition, control faces many challenges. Photo-bioreactors are efficient systems that not only facilitate the commercial production of microalgae as feed for aquaculture and as food supplements, but also serve as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light in the bioreactor, and the microalgae growth rate and the carbon dioxide concentration inside the bioreactor are investigated. The optimal values of the controller parameters, obtained from S/N and ANOVA analyses in the Qualitek-4 software, were compared with those obtained by the reaction curve, Cohen-Coon, and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of the control methods mentioned, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor. Compared to the other control methods listed, it showed higher stability and a shorter response time.
Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method
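One of the baselines the study compares against, Ziegler-Nichols tuning, reduces to simple closed-loop rules once the ultimate gain Ku and oscillation period Tu are known. A sketch with hypothetical values (not the study's identified plant):

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols closed-loop PID rules:
    Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8."""
    kp = 0.6 * ku
    ti = tu / 2.0
    td = tu / 8.0
    # Return parallel-form gains: Kp, Ki = Kp/Ti, Kd = Kp*Td.
    return kp, kp / ti, kp * td

# Hypothetical ultimate gain and oscillation period for a light-intensity loop.
kp, ki, kd = ziegler_nichols_pid(ku=8.0, tu=20.0)
print(kp, ki, kd)  # 4.8, 0.48, 12.0
```

Taguchi tuning instead searches the gain space with an orthogonal array of experiments and picks the combination minimizing the chosen loss (here, the sum of squared errors).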
Procedia PDF Downloads 395
1801 A 3D Bioprinting System for Engineering Cell-Embedded Hydrogels by Digital Light Processing
Authors: Jimmy Jiun-Ming Su, Yuan-Min Lin
Abstract:
Bioprinting has been applied to produce 3D cellular constructs for tissue engineering. Microextrusion printing is the most commonly used method; however, printing low-viscosity bioink is a challenge for it. Herein, we developed a new 3D printing system to fabricate cell-laden hydrogels via a DLP-based projector. The bioprinter is assembled from affordable equipment, including a stepper motor, a screw, an LED-based DLP projector, and open-source computer hardware and software. The system can use low-viscosity, photo-polymerized bioink to fabricate 3D tissue mimics in a layer-by-layer manner. In this study, we used gelatin methacrylate (GelMA) as the bioink for stem cell encapsulation. In order to reinforce the printed construct, surface-modified hydroxyapatite was added to the bioink. We demonstrated that silanization of the hydroxyapatite could improve crosslinking at the interface between hydroxyapatite and GelMA. The results showed that incorporating silanized hydroxyapatite into the bioink enhanced the mechanical properties of the printed hydrogel; in addition, the hydrogel had low cytotoxicity and promoted the differentiation of embedded human bone marrow stem cells (hBMSCs) and retinal pigment epithelium (RPE) cells. Moreover, this bioprinting system can generate microchannels inside the engineered tissues to facilitate the diffusion of nutrients. We believe this 3D bioprinting system has the potential to fabricate various tissues for clinical applications and regenerative medicine in the future.
Keywords: bioprinting, cell encapsulation, digital light processing, GelMA hydrogel
Procedia PDF Downloads 182
1800 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR
Authors: Ivana Scidà, Francesco Alotto, Anna Osello
Abstract:
Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) can be a powerful teaching tool, offering many advantages in the field of training and education, as it allows learners to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is engaging and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a center specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed, an innovative training tool designed with the aim of educating the class, in total safety, on the techniques for using the machinery, thus reducing the dangers arising from the performance of potentially hazardous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering courses of the Polytechnic of Turin, has been designed to simulate in a realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at ambient conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model.
The process includes the creation of an independent application, which, with Oculus Rift technology, allows the user to explore the environment and interact with objects through the use of joypads. The application was tested as a prototype on volunteers; the acquisition of the educational notions presented in the experience was assessed through a multiple-choice virtual quiz, producing an overall evaluation report. The results show that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.
Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality
Procedia PDF Downloads 131
1799 Secure and Privacy-Enhanced Blockchain-Based Authentication System for University User Management
Authors: Ali El Ksimi
Abstract:
In today's digital academic environment, secure authentication methods are essential for managing sensitive user data, including that of students and faculty. The rise in cyber threats and data breaches has exposed the vulnerabilities of traditional authentication systems used in universities. Passwords, often the first line of defense, are particularly susceptible to hacking, phishing, and brute-force attacks. While multi-factor authentication (MFA) provides an additional layer of security, it can still be compromised and often adds complexity and inconvenience for users. As universities seek more robust security measures, blockchain technology emerges as a promising solution. Renowned for its decentralization, immutability, and transparency, blockchain has the potential to transform how user management is conducted in academic institutions. In this article, we explore a system that leverages blockchain technology specifically for managing user accounts within a university setting. The system enables the secure creation and management of accounts for different roles, such as administrators, teachers, and students. Each user is authenticated through a decentralized application (DApp) that ensures their data is securely stored and managed on the blockchain. By eliminating single points of failure and utilizing cryptographic techniques, the system enhances the security and integrity of user management processes. We will delve into the technical architecture, security benefits, and implementation considerations of this approach. By integrating blockchain into user management, we aim to address the limitations of traditional systems and pave the way for the future of digital security in education.
Keywords: blockchain, university, authentication, decentralization, cybersecurity, user management, privacy
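The tamper-evidence property the abstract relies on can be illustrated with a toy hash chain. This is only a conceptual sketch in a single Python process, not the described DApp or any real blockchain; class and field names are invented for illustration:

```python
import hashlib
import json

class UserChain:
    """Minimal append-only, hash-chained registry of university accounts.
    Each block stores the SHA-256 hash of its predecessor, so editing any
    earlier record breaks every later link."""

    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]

    def _hash(self, block):
        # Canonical JSON so the hash is independent of dict key order.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def register(self, username, role):
        block = {"index": len(self.blocks),
                 "prev": self._hash(self.blocks[-1]),
                 "data": {"user": username, "role": role}}
        self.blocks.append(block)

    def verify(self):
        """Recompute the chain; any tampered block breaks the prev links."""
        return all(b["prev"] == self._hash(self.blocks[i])
                   for i, b in enumerate(self.blocks[1:]))

chain = UserChain()
chain.register("alice", "student")
chain.register("bob", "teacher")
print(chain.verify())                        # True
chain.blocks[1]["data"]["role"] = "admin"    # tampering with a record
print(chain.verify())                        # False
```

A real deployment adds consensus, signatures, and access control on top of this basic integrity mechanism.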
Procedia PDF Downloads 29
1798 Empowering the Citizens: The Potential of Zimbabwean Library and Information Science Schools in Contributing towards Socio-Economic Transformation
Authors: Collence Takaingenhamo Chisita, Munyaradzi Shoko
Abstract:
Library and Information Science (LIS) schools play significant roles in socio-economic transformation, but in most cases they are downplayed or overshadowed by other institutions and professions. Currently, Zimbabwe boasts a high literacy rate in Africa, and this success would have been impossible without the contributions of library schools and related institutions. Libraries and librarians are at the epicentre of socio-economic development, and their role cannot be downplayed. It is out of this context that the writer explores the extent to which library schools are contributing towards socio-economic transformation, for example, human capital development and facilitating access to information. The writer seeks to explain and clarify how LIS schools are engaged in socio-economic transformation through supporting education and culture via community engagement. The paper examines LIS education models, for example, general education and Technical Vocational Education and Training (TVET) or Competency Based Education and Training (CBET). It also seeks to find out how LIS schools are contributing to the information/knowledge economy through education, training, and research. The writer also seeks to find out how LIS education is responding to socio-economic and political dynamics in Zimbabwe amidst the forces of globalisation and cultural identities. Furthermore, the writer explores the extent to which LIS education can help to reposition Zimbabwe in the global knowledge economy. The author examines how LIS schools integrate culture and technology.
Keywords: development, information/knowledge economy, culture, empowerment, collaboration, globalisation
Procedia PDF Downloads 324
1797 English as a Foreign Language Teachers' Perspectives on the Workable Approaches and Challenges They Encountered When Teaching Reading Using E-Learning
Authors: Sarah Alshehri, Messedah Alqahtani
Abstract:
Reading instruction in EFL classes is still challenging for teachers, and many students remain behind their expected level. Due to the COVID-19 pandemic, there was a shift in English teaching from face-to-face to online classes. This paper will examine how the digital shift during and after the pandemic has influenced English literacy instruction and which methods appear effective or challenging. Specifically, it will examine English language teachers' perspectives on the workable approaches and challenges they encountered when teaching reading using an e-learning platform in Saudi Arabian secondary and intermediate schools. The study explores public secondary school EFL teachers’ instructional practices and the challenges encountered when teaching reading online. Quantitative data will be collected through a 28-item Likert-type survey administered to Saudi English teachers who work in public secondary and intermediate schools. The quantitative data will be analyzed in SPSS using frequency distributions, descriptive statistics, reliability tests, and one-way ANOVA tests. The potential outcomes of this study will contribute to a better understanding of digital literacy and technology integration in language teaching. The findings can provide direction for professionals and policy makers to improve the quality of English teaching and learning. Limitations and results will be discussed, and suggestions for future directions will be offered.
Keywords: EFL reading, e-learning, EFL literacy, EFL workable approaches, EFL reading instruction
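The one-way ANOVA planned above boils down to a ratio of between-group to within-group variance. A minimal sketch, with hypothetical Likert-scale responses rather than survey data:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA, computed from between-group and
    within-group sums of squares (the same test SPSS runs)."""
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total observations
    grand = sum(sum(g) for g in groups) / n           # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical Likert responses from three teacher groups.
f = one_way_anova_f([[4, 5, 4, 4], [3, 3, 2, 3], [4, 4, 5, 5]])
print(round(f, 2))  # 12.9
```

The F value is then compared against the F distribution with (k - 1, n - k) degrees of freedom to decide significance.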
Procedia PDF Downloads 102
1796 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir movafeghi, Shahab Moradi
Abstract:
Noise is one of the most important challenging factors in medical images. Image denoising refers to the improvement of a digital medical image that has been corrupted by additive white Gaussian noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images suffer from low quality due to noise. The quality of CT images depends directly on the dose absorbed by the patient, in the sense that an increase in absorbed radiation, and consequently in the absorbed dose to the patient (ADP), enhances CT image quality. Accordingly, noise reduction techniques that enhance image quality without exposing the patient to excess radiation are among the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transforms, curvelet and contourlet, and discrete wavelet transform (DWT) thresholding methods, BayesShrink and AdaptShrink, which were compared with each other. We also propose a new threshold in the wavelet domain for both noise reduction and edge retention; the proposed method retains the modified coefficients significantly, resulting in good visual quality. Evaluations were carried out using two criteria: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)
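The core of wavelet-domain denoising, including BayesShrink, is soft thresholding of detail coefficients with a data-driven threshold. A simplified 1D sketch (synthetic signal, no full 2D wavelet decomposition, and not the paper's new threshold):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft shrinkage: pull coefficient magnitudes toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def bayes_shrink_threshold(detail):
    """BayesShrink threshold sigma_n^2 / sigma_x for one detail subband.
    Noise sigma_n is estimated with the standard median/0.6745 rule."""
    sigma_n = np.median(np.abs(detail)) / 0.6745
    sigma_x = np.sqrt(max(np.var(detail) - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma_x

# Hypothetical detail subband: sparse edge coefficients plus unit Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros(1000)
clean[::50] = 8.0
noisy = clean + rng.normal(0.0, 1.0, 1000)
denoised = soft_threshold(noisy, bayes_shrink_threshold(noisy))
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

In the full 2D pipeline the same shrinkage is applied per detail subband of the DWT, and the approximation band is left untouched.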
Procedia PDF Downloads 441
1795 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas
Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards
Abstract:
Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, and spatial analysis. The laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (representing the bare earth) and non-ground points (representing buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter uses a weight function to allocate a weight to each point in the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
Keywords: airborne laser scanning, digital terrain models, filtering, forested areas
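The weighted-fit idea described above can be sketched on a 1D profile. Here a polynomial fit stands in for the smoothing spline, and the asymmetric weight function down-weights points well above the fitted surface (likely canopy returns); the profile, threshold, and weights are illustrative assumptions, not the paper's actual filter:

```python
import numpy as np

def filter_ground(x, z, iterations=5, degree=5):
    """Iteratively fit a smooth curve and down-weight points far ABOVE it,
    a simplified stand-in for a weighted smoothing-spline ground filter."""
    w = np.ones_like(z)
    for _ in range(iterations):
        coeffs = np.polyfit(x, z, degree, w=w)
        residuals = z - np.polyval(coeffs, x)
        # Asymmetric weights: points well above the surface get ~zero weight,
        # since vegetation returns only ever sit above the ground.
        w = np.where(residuals > 0.5, 0.01, 1.0)
    return np.polyval(coeffs, x)

# Synthetic profile: gentle terrain plus canopy returns 15 m above ground.
x = np.linspace(0, 100, 400)
ground = 50 + 5 * np.sin(x / 15)
z = ground.copy()
z[100:200] += 15.0                 # forest canopy hits over part of the profile

dtm = filter_ground(x, z)
rmse = np.sqrt(np.mean((dtm - ground) ** 2))
print(rmse)  # far below the 15 m canopy offset
```

The same asymmetry (tolerate points below the surface, reject points above it) is what lets ground filters survive dense canopy.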
Procedia PDF Downloads 139
1794 Bioinformatic Screening of Metagenomic Fosmid Libraries for Identification of Biosynthetic Pathways Derived from the Colombian Soils
Authors: María Fernanda Quiceno Vallejo, Patricia del Portillo, María Mercedes Zambrano, Jeisson Alejandro Triana, Dayana Calderon, Juan Manuel Anzola
Abstract:
Microorganisms from tropical ecosystems can be novel in terms of adaptations and conservation. Given the macrodiversity of Colombian ecosystems, it is possible that this diversity is also present in Colombian soils. Tropical soil bacteria could therefore offer a potentially novel source of bioactive compounds. In this study, we analyzed a metagenomic fosmid library constructed with tropical bacterial DNA with the aim of understanding its underlying diversity and functional potential. 8640 clones from the fosmid library were sequenced with Nanopore MinION technology, then analyzed with bioinformatic tools such as Prokka, antiSMASH, and BAGEL4 in order to identify functional biosynthetic pathways in the sequences. The clones showed ample differences in their biosynthetic pathways. In total, we identified 4 pathways related to aryl polyene synthesis, 12 related to terpenes, 22 related to NRPSs (non-ribosomal peptide synthetases), 11 related to PKSs (polyketide synthases), and 7 related to RiPPs (bacteriocins). We designed primers for the metagenomic clones with the most biosynthetic gene clusters (BGCs) (sample 6 and sample 2). The results show the biotechnological and pharmacological potential of tropical ecosystems. Overall, this work provides an overview of the genomic and functional potential of Colombian soil and sets the groundwork for additional exploration of tropical metagenomic sequencing.
Keywords: bioactives, biosynthetic pathways, bioinformatics, bacterial gene clusters, secondary metabolites
Procedia PDF Downloads 166
1793 A Convolutional Neural Network-Based Model for Lassa fever Virus Prediction Using Patient Blood Smear Image
Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa
Abstract:
A Convolutional Neural Network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language, alongside the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, in order to reduce the currently high risk of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by major flaws in existing conventional laboratory equipment for diagnosing Lassa fever (RT-PCR), as well as flaws, reported in the literature, in AI-based techniques that have been used for the probing and prognosis of Lassa fever. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on the remaining 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was used in the proposed system to extract features from the microscopic images. The proposed CNN-based model achieved a recall of 96%, a precision of 93%, an F1 score of 95%, and an accuracy of 94% in predicting and classifying the images into clean or infected samples. Based on empirical evidence from the literature consulted, the proposed model outperformed the other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses of Lassa fever cases, allowing the mortality rate due to the Lassa fever virus to be reduced through sound decision-making.
Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever
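The four metrics the abstract reports all derive from the confusion matrix of the test split. A minimal sketch; the confusion-matrix counts below are hypothetical, chosen only to land near the reported percentages, not the study's actual results:

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts on a 30% test split (4,700 images in this sketch).
p, r, f1, acc = classification_metrics(tp=2256, fp=170, fn=94, tn=2180)
print(round(p, 2), round(r, 2), round(f1, 2), round(acc, 2))
```

Reporting precision and recall alongside accuracy matters here because a class-imbalanced screening task can show high accuracy while still missing infected samples.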
Procedia PDF Downloads 120
1792 How Digital Empowerment Affects Dissolution of Segmentation Effect and Construction of Opinion Leaders in Isolated Communities: Ethnographic Investigation of Leprosy Rehabilitation Groups
Authors: Lin Zhang
Abstract:
The fear of leprosy has been longstanding throughout human history. In an era when isolation was practiced as a means of epidemic prevention, the leprosy rehabilitation group itself became an isolated community with an entrenched metaphor. In the process of new mediatization of the leprosy isolation community, what are the relations among media literacy, leprosy internalized stigma and social support? To address this question, the “portrait” of the leprosy rehabilitation group is re-delineated through two field studies conducted in the “post-leprosy age” in 2012 and 2020, respectively. Taking an isolated community on Si’an Leprosy Island in Dongguan City, Guangdong Province, China as the study object, it is found that new media promote the dissolution of the segregation effect of the leprosy isolation community and the cultivation of opinion leaders by breaking spatial, psychological and social segregation and by building a community of village affairs and a public space, in the following way: cured patients with high new media literacy, especially those who use WeChat and other applications and largely rely on new media for information, have a low level of leprosy internalized stigma and a high level of social support, and they are often the opinion leaders inside their community; on the contrary, cured patients with low new media literacy, a high level of leprosy internalized stigma and a low level of social support are often the followers inside their community. Such effects of dissolution and construction are reflected not only in the vertical differentiation of the same individual at different times, but also in the horizontal differentiation between different individuals at the same time.Keywords: segregation, the leprosy rehabilitation group, new mediatization, digital empowerment, opinion leaders
Procedia PDF Downloads 178
1791 Characterization of the Intestinal Microbiota: A Signature in Fecal Samples from Patients with Irritable Bowel Syndrome
Authors: Mina Hojat Ansari, Kamran Bagheri Lankarani, Mohammad Reza Fattahi, Ali Reza Safarpour
Abstract:
Irritable bowel syndrome (IBS) is a common bowel disorder that is usually diagnosed through abdominal pain, fecal irregularities and bloating. Alterations in intestinal microbial composition have been implicated in inflammatory and functional bowel disorders and have recently also been noted as a feature of IBS. Owing to the potential importance of the microbiota for both the efficiency of treatment and the prevention of disease, we examined the association between the intestinal microbiota and different bowel patterns in a cohort of subjects with IBS and healthy controls. Fresh fecal samples were collected from a total of 50 subjects, 30 of whom met the Rome IV criteria for IBS and 20 of whom were healthy controls. Total DNA was extracted, and library preparation was conducted following the standard protocol for small whole genome sequencing. The pooled libraries were sequenced on an Illumina NextSeq platform with a 2 × 150 paired-end read length, and the obtained sequences were analyzed using several bioinformatics programs. The majority of the sequences obtained in the current study were assigned to bacteria. However, our findings highlighted significant variation in microbial taxa among the studied groups. The results therefore suggest a significant association of the microbiota with symptoms and bowel characteristics in patients with IBS. These alterations in the fecal microbiota could be exploited as a biomarker for IBS or its subtypes, and modification of the microbiota might be integrated into prevention and treatment strategies for IBS.Keywords: irritable bowel syndrome, intestinal microbiota, small whole genome sequencing, fecal samples, Illumina
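A first step in comparing such samples is normalising taxon read counts to relative abundances, so that samples with different sequencing depths become comparable. The genera and counts below are illustrative placeholders, not the study's data:

```python
from collections import Counter

# Toy genus-level read-classification counts for one IBS sample and one
# control sample (real counts would come from the shotgun-read classifier).
ibs_sample = Counter({"Bacteroides": 620, "Prevotella": 40, "Faecalibacterium": 140})
control_sample = Counter({"Bacteroides": 450, "Prevotella": 260, "Faecalibacterium": 290})

def relative_abundance(counts):
    """Normalise taxon read counts to fractions summing to 1."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

for name, sample in [("IBS", ibs_sample), ("control", control_sample)]:
    ra = relative_abundance(sample)
    print(name, {t: round(f, 2) for t, f in sorted(ra.items())})
```

Statistical comparison of the resulting abundance profiles between groups is then what surfaces the taxa variation the abstract reports.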
Procedia PDF Downloads 167
1790 The Agroclimatic Atlas of Croatia for the Periods 1981-2010 and 1991-2020
Authors: Višnjica Vučetić, Mislav Anić, Jelena Bašić, Petra Sviličić, Ivana Tomašević
Abstract:
The Agroclimatic Atlas of Croatia (Atlas) for the periods 1981–2010 and 1991–2020 is a monograph of six chapters in digital form. Detailed descriptions of particular agroclimatological data are given in separate chapters as follows: agroclimatic indices based on air temperature (degree days, the Huglin heliothermal index), soil temperature, water balance components (precipitation, potential evapotranspiration, actual evapotranspiration, soil moisture content, runoff, recharge and soil moisture loss) and fire weather indices. The last chapter describes the digital methods used for the spatial interpolations (R and GIS). The Atlas comprises a textual description of the relevant climate characteristics, maps of the spatial distribution of climatological elements at 109 stations (26 stations for soil temperature) and tables of the 30-year mean monthly, seasonal and annual values of climatological parameters at 24 stations. The Atlas was published in 2021, on the seventieth anniversary of agrometeorological work at the Meteorological and Hydrological Service of Croatia. It is intended to support the improvement of sustainable agricultural production and the protection of forests from fire, and to serve as a rich source of information for agronomy and forestry experts as well as for decision-making bodies developing strategic plans.Keywords: agrometeorology, agroclimatic indices, soil temperature, water balance components, fire weather index, meteorological and hydrological service of Croatia
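The simplest of the temperature-based indices mentioned, degree days, accumulates daily warmth above a base temperature. A minimal sketch; the base of 10 °C and the daily values are illustrative, since the Atlas's exact thresholds are not given in the abstract:

```python
def growing_degree_days(daily_mean_temps, base=10.0):
    """Sum daily mean temperature excess above a base temperature;
    days at or below the base contribute nothing."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

# Five illustrative daily means in deg C.
temps = [8.0, 12.5, 15.0, 9.5, 20.0]
print(growing_degree_days(temps))  # 2.5 + 5.0 + 10.0 = 17.5
```

Summed over a growing season and averaged over a 30-year period, this is the kind of value mapped per station in the Atlas.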
Procedia PDF Downloads 128
1789 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis
Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García
Abstract:
Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. 
As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods according to grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus of which the relevant actors and discourse positions are analysed in conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automated by programs that propose coding paradigms based on the calculated entities and their relationships. 
Simultaneously, these programs can be trained by manual coding in a close reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis
Procedia PDF Downloads 228
1788 Dental Pathologies and Diet in Pre-Hispanic Populations of the Equatorial Pacific Coast: Literature Review
Authors: Ricardo Andrés Márquez Ortiz
Abstract:
Objective. The objective of this literature review is to compile updated information from studies that have addressed the association between dental pathologies and diet in prehistoric populations of the equatorial Pacific coast. Materials and method. The research corresponds to a documentary study of retrospective ex post facto, historiographic and bibliometric design. A bibliographic search was carried out in the libraries of the Colombian Institute of Anthropology and History (ICANH) and the National University of Colombia for books and articles on the archeology of the region. In addition, databases and the Internet were searched for books and articles on dental anthropology, archeology and dentistry addressing the relationship between dental pathologies and diet in prehistoric and current populations from different parts of the world. Conclusions. The complex societies (500 BC - 300 AD) of the equatorial Pacific coast used an agricultural system of intensive monoculture of corn (Zea mays). This form of subsistence was reflected in an intensification of dental pathologies such as dental caries, dental abscesses generated by caries, and enamel hypoplasia associated with a lower frequency of wear. The Upper Formative period (800 AD - 16th century AD) is characterized by the development of polyculture and slash-and-burn agriculture as an adaptive strategy in response to the ecological damage generated by the intensive economic activity of the complex societies. This process led to a more varied diet, which generated better dental health.Keywords: dental pathologies, nutritional diet, equatorial pacific coast, dental anthropology
Procedia PDF Downloads 49
1787 The Genetic Architecture Underlying Dilated Cardiomyopathy in Singaporeans
Authors: Feng Ji Mervin Goh, Edmund Chee Jian Pua, Stuart Alexander Cook
Abstract:
Dilated cardiomyopathy (DCM) is a common cause of heart failure. Genetic mutations account for 50% of DCM cases, with TTN mutations being the most common, accounting for up to 25% of cases. However, the genetic architecture underlying DCM in Asian patients is unknown. We evaluated 68 patients (17 female) with DCM who underwent follow-up at the National Heart Centre, Singapore from 2013 through 2014. Clinical data were obtained and analyzed retrospectively. Genomic DNA was subjected to next-generation targeted sequencing. Nextera Rapid Capture Enrichment was used to capture the exons of a panel of 169 cardiac genes. DNA libraries were sequenced as paired-end 150-bp reads on an Illumina MiSeq. Raw sequence reads were processed and analysed using standard bioinformatics techniques. The average age of onset of DCM was 46.1 ± 10.21 years. The average left ventricular ejection fraction (LVEF), left ventricular diastolic internal diameter (LVIDd) and left ventricular systolic internal diameter (LVIDs) were 26.1 ± 11.2%, 6.20 ± 0.83 cm and 5.23 ± 0.92 cm, respectively. The frequencies of mutations in the major DCM-associated genes were as follows: TTN (5.88% vs the published frequency of 20%), LMNA (4.41% vs 6%), MYH7 (5.88% vs 4%), MYH6 (5.88% vs 4%) and SCN5a (4.41% vs 3%). The average callability at 10x coverage for each major gene was: TTN (99.7%), LMNA (87.1%), MYH7 (94.8%), MYH6 (95.5%) and SCN5a (94.3%). In conclusion, TTN mutations are not common in Singaporean DCM patients. The frequencies of mutations in the other major DCM-associated genes are comparable to those published in the current literature.Keywords: heart failure, dilated cardiomyopathy, genetics, next-generation sequencing
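The per-gene "10x callability" figures are simply the fraction of targeted positions covered by at least 10 reads. A minimal sketch with a toy per-base depth vector (illustrative, not the study's coverage data):

```python
def callability(depths, min_depth=10):
    """Fraction of targeted positions covered at >= min_depth reads."""
    if not depths:
        return 0.0
    covered = sum(1 for d in depths if d >= min_depth)
    return covered / len(depths)

# Toy per-base read depths across one short targeted region.
depths = [0, 5, 12, 30, 44, 9, 10, 10, 25, 3]
print(f"{callability(depths):.1%}")
```

Low callability for a gene (as reported for LMNA here) flags regions where mutations could be missed, which matters when comparing observed mutation frequencies against published ones.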
Procedia PDF Downloads 243
1786 Multi-Objective Optimization of Assembly Manufacturing Factory Setups
Authors: Andreas Lind, Aitor Iriondo Pascual, Dan Hogberg, Lars Hanson
Abstract:
Factory setup lifecycles are most often described and prepared in CAD environments; the preparation is based on experience and inputs from several cross-disciplinary processes. Early in the factory setup preparation, a so-called block layout is created. The intention is to describe a high-level view of the intended factory setup and to claim area reservations and allocations. Factory areas are then blocked, i.e., targeted to be used for specific intended resources and processes, later redefined with detailed factory setup layouts. Each detailed layout is based on the block layout and inputs from cross-disciplinary preparation processes, such as manufacturing sequence, productivity, workers’ workplace requirements, and resource setup preparation. However, this activity is often not carried out with all variables considered simultaneously, which might entail a risk of sub-optimizing the detailed layout based on manual decisions. Therefore, this work aims to realize a digital method for assembly manufacturing layout planning where productivity, area utilization, and ergonomics can be considered simultaneously in a cross-disciplinary manner. The purpose of the digital method is to support engineers in finding optimized designs of detailed layouts for assembly manufacturing factories, thereby facilitating better decisions regarding setups of future factories. Input datasets are company-specific descriptions of required dimensions for specific area reservations, such as defined dimensions of a worker’s workplace, material façades, aisles, and the sequence to realize the product assembly manufacturing process. To test and iteratively develop the digital method, a demonstrator has been developed with an adaptation of existing software that simulates and proposes optimized designs of detailed layouts. 
Since the method is to consider productivity, ergonomics, area utilization, and constraints from the automatically generated block layout, a multi-objective optimization approach is utilized. In the demonstrator, the input data are sent to the simulation software Industrial Path Solutions (IPS). Based on the input and Lua scripts, the IPS software generates a block layout in compliance with the company’s defined dimensions of area reservations. Communication is then established between IPS and the software EPP (Ergonomics in Productivity Platform), including intended resource descriptions, the assembly manufacturing process, and manikin (digital human) resources. Using multi-objective optimization approaches, the EPP software then calculates layout proposals that are iteratively simulated and rendered in IPS, following the rules and regulations defined in the block layout as well as productivity and ergonomics constraints and objectives. The software demonstrator is promising. The software can handle several parameters to optimize the detailed layout simultaneously and can put forward several proposals. It can optimize multiple parameters, or weight the parameters, to fine-tune the optimal result of the detailed layout. The intention of the demonstrator is to make the preparation between cross-disciplinary silos transparent and to achieve a common preparation of the assembly manufacturing factory setup, thereby facilitating better decisions.Keywords: factory setup, multi-objective, optimization, simulation
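A crude way to convey the weighted multi-objective idea is a weighted-sum score over candidate layouts. The layout names, metrics and weights below are illustrative only, not the IPS/EPP implementation:

```python
# Each candidate layout is scored on productivity (maximize), floor area
# (minimize) and an ergonomics penalty (minimize). Values are made up.
candidates = {
    "layout_A": {"productivity": 0.90, "area_m2": 480.0, "ergo_penalty": 0.20},
    "layout_B": {"productivity": 0.85, "area_m2": 430.0, "ergo_penalty": 0.10},
    "layout_C": {"productivity": 0.95, "area_m2": 510.0, "ergo_penalty": 0.35},
}

def score(layout, w_prod=1.0, w_area=0.001, w_ergo=1.0):
    """Higher is better: reward productivity, penalise area use and
    ergonomics risk. Adjusting the weights fine-tunes the trade-off,
    as the abstract describes for the demonstrator."""
    return (w_prod * layout["productivity"]
            - w_area * layout["area_m2"]
            - w_ergo * layout["ergo_penalty"])

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)
```

A true multi-objective optimizer would instead keep the whole Pareto front of non-dominated layouts rather than collapsing the objectives into one scalar; the weighted sum is only the simplest entry point.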
Procedia PDF Downloads 153
1785 The Influence of Absorptive Capacity on Process Innovation: An Exploratory Study in Seven Leading and Emerging Countries
Authors: Raphael M. Rettig, Tessa C. Flatten
Abstract:
This empirical study answers calls for research on Absorptive Capacity and Process Innovation. Due to the fourth industrial revolution, manufacturing companies face the biggest disruption of their production processes since the rise of advanced manufacturing technologies in the last century. Process innovation will therefore become a critical task for many manufacturing firms around the world to master in the future. The general ability of organizations to acquire, assimilate, transform, and exploit external knowledge, known as Absorptive Capacity, has been proven to positively influence product innovation and is already conceptually associated with process innovation. The presented research provides empirical evidence for this influence. The findings are based on an empirical analysis of 732 companies from seven leading and emerging countries: Brazil, China, France, Germany, India, Japan, and the United States of America. The answers to the survey were collected in February and March 2018 and addressed senior- and top-level management with a focus on operations departments. The statistical analysis reveals the positive influence of Potential and Realized Absorptive Capacity on successful process innovation, taking the implementation of new digital manufacturing processes as an example. Potential Absorptive Capacity, covering the acquisition and assimilation capabilities of an organization, showed a significant positive influence (β = .304, p < .05) on digital manufacturing implementation success and therefore on process innovation. Realized Absorptive Capacity proved to have a significant positive influence on process innovation as well (β = .461, p < .01). The presented study builds on prior conceptual work in the field of Absorptive Capacity and process innovation and contributes theoretically to ongoing research in two dimensions.
First, the already conceptually associated influence of Absorptive Capacity on process innovation is backed by empirical evidence in a broad international context. Second, whereas Absorptive Capacity has previously been measured with a focus on new product development, so that prior empirical research was tailored to the research and development departments of organizations, the results of this study highlight the importance of Absorptive Capacity as a capability in the mechanical engineering and operations departments of organizations. The findings give managers an indication of the importance of implementing new innovative processes into their production systems and of fostering the right mindset in employees to identify new external knowledge. Through the ability to transform and exploit external knowledge, a firm’s own production processes can be innovated successfully, with a positive influence on firm performance and the competitive position of the organization.Keywords: absorptive capacity, digital manufacturing, dynamic capabilities, process innovation
Procedia PDF Downloads 144
1784 Progress of Legislation in Post-Colonial, Post-Communist and Socialist Countries for the Intellectual Property Protection of the Autonomous Output of Artificial Intelligence
Authors: Ammar Younas
Abstract:
This paper is an attempt to explore the legal progression in procedural laws related to “intellectual property protection for the autonomous output of artificial intelligence” in Post-Colonial, Post-Communist and Socialist countries. An in-depth study of legal progression in Pakistan (common law), Uzbekistan (post-Soviet civil law) and China (socialist law) has been conducted. A holistic attempt has been made to explore how the ideological context of a legal system can impact not only the substantive components but also the procedural components of the formal laws related to IP protection of the autonomous output of artificial intelligence. Moreover, we have tried to shed light on the prospective IP laws and AI policy of the countries that are planning to incorporate the concept of “digital personality” into their legal systems. This paper will also address the question: “How far can the IP of the autonomous output of AI be protected with the introduction of ‘non-human legal personality’ in legislation?” Using the examples of China, Pakistan and Uzbekistan, a case has been built to highlight the legal progression in the general provisions of civil law, each country’s artificial intelligence policy and its intellectual property laws. We have used a range of multi-disciplinary concepts and examined them on the basis of three criteria: accuracy of the legal/philosophical presumption, applicability to real-time situations, and resilience under rational falsification tests. It has been observed that the procedural laws are designed in ways that correlate with the ideological contexts of these countries.Keywords: intellectual property, artificial intelligence, digital personality, legal progression
Procedia PDF Downloads 119
1783 Digital Health During a Pandemic: Critical Analysis of the COVID-19 Contact Tracing Apps
Authors: Mohanad Elemary, Imose Itua, Rajeswari B. Matam
Abstract:
Virologists and public health experts have been predicting potential pandemics from coronaviruses for decades. The viruses which caused the SARS and MERS pandemics and the Nipah virus led to many lost lives, but still, the COVID-19 pandemic caused by the SARS-CoV2 virus surprised many scientific communities, experts, and governments with its ease of transmission and its pathogenicity. Governments of various countries reacted by locking down entire populations to their homes to combat the devastation caused by the virus, which led to a loss of livelihood and economic hardship to many individuals and organizations. To revive national economies and support their citizens in resuming their lives, governments focused on the development and use of contact tracing apps as a digital way to track and trace exposure. Google and Apple introduced the Exposure Notification Systems (ENS) framework. Independent organizations and countries also developed different frameworks for contact tracing apps. The efficiency, popularity, and adoption rate of these various apps have been different across countries. In this paper, we present a critical analysis of the different contact tracing apps with respect to their efficiency, adoption rate and general perception, and the governmental strategies and policies, which led to the development of the applications. When it comes to the European countries, each of them followed an individualistic approach to the same problem resulting in different realizations of a similarly functioning application with differing results of use and acceptance. The study conducted an extensive review of existing literature, policies, and reports across multiple disciplines, from which a framework was developed and then validated through interviews with six key stakeholders in the field, including founders and executives in digital health startups and corporates as well as experts from international organizations like The World Health Organization. 
A framework of best practices and tactics is the result of this research. The framework addresses three main questions regarding contact tracing apps: how to develop them, how to deploy them, and how to regulate them. The findings are based on the best practices applied by governments across multiple countries, the mistakes they made, and the best practices applied in similar situations in the business world. For the development milestone, the findings include multiple strategies for establishing frameworks for cooperation with the private sector and for designing the features and user experience of a transparent, effective, and rapidly adaptable app. For the deployment section, several tactics are discussed regarding communication messages, marketing campaigns, persuasive psychology, and initial deployment scale strategies. The paper also discusses the data privacy dilemma and how to build a more sustainable system of health-related data processing and utilization. This is done through principles-based regulations specific to health data that allow their use for the public good. This framework offers insights into strategies and tactics that could be implemented as protocols for future public health crises and emergencies, whether global or regional.Keywords: contact tracing apps, COVID-19, digital health applications, exposure notification system
Procedia PDF Downloads 139
1782 The Technophobia among Older Adults in China
Authors: Erhong Sun, Xuchun Ye
Abstract:
Technophobia, namely the fear or dislike of modern advanced technologies, plays a central role in age-related digital divides and is considered a new risk factor for older adults, as it can affect people’s daily lives through low adherence to digital living. Indeed, there is considerable heterogeneity in the group of older adults who experience technophobia. Therefore, the aim of this study was to identify different technophobia typologies of older people and to examine their associations with subjective age. A sample of 704 retired adults over the age of 55 was recruited in China. Technophobia and subjective age were each assessed with questionnaires. Latent profile analysis was used to identify technophobia subgroups, using three dimensions, techno-anxiety, techno-paranoia, and privacy concerns, as indicators. The association between the identified technophobia subgroups and subjective age was then explored. In summary, four different technophobia typologies were identified among older adults in China. Combined with an investigation of personal background characteristics and subjective age, this draws a more nuanced image of the technophobia phenomenon among older adults in China. First, not all older adults suffer from technophobia, with about half of the elderly subjects belonging to the “Low-technophobia” and “Medium-technophobia” profiles. Second, privacy concern plays an important role in the classification of technophobia among older adults. Third, subjective age might be a protective factor against technophobia in older adults. Although the causal direction between the identified technophobia typologies and subjective age remains uncertain, our results suggest that future interventions should focus on subjective age by breaking the age stereotype of technology to reduce the negative effect of technophobia on older adults.
Future development of this research will involve extensive investigation of the detailed impact of technophobia in senior populations, measurement of the negative outcomes, as well as formulation of innovative educational and clinical pathways.Keywords: technophobia, older adults, latent profile analysis, subjective age
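Latent profile analysis is a model-based clustering method (typically estimated as a Gaussian mixture over the indicator scores). As a rough sketch of the grouping intuition only, not the LPA estimation itself, here is a bare-bones k-means over the three indicators, with illustrative 1-5 Likert-scale scores:

```python
import math

# Each respondent is scored on the three indicators used in the study:
# (techno-anxiety, techno-paranoia, privacy concern). Values are made up.
respondents = [
    (1.2, 1.5, 2.0), (1.0, 1.8, 2.2),   # low-technophobia pattern
    (4.5, 4.2, 4.8), (4.8, 4.0, 4.6),   # high-technophobia pattern
]

def nearest(point, centroids):
    """Index of the centroid closest to the point (Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))

def kmeans_labels(points, centroids, iters=10):
    """Assign points to profiles, recompute centroids, repeat."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            groups[nearest(p, centroids)].append(p)
        centroids = [tuple(sum(c) / len(g) for c in zip(*g)) if g else c0
                     for g, c0 in zip(groups, centroids)]
    return [nearest(p, centroids) for p in points]

labels = kmeans_labels(respondents, centroids=[(1.0, 1.0, 1.0), (5.0, 5.0, 5.0)])
print(labels)
```

LPA proper would additionally estimate per-profile means and variances and choose the number of profiles (four, in the study) by fit indices, which k-means does not do.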
Procedia PDF Downloads 75
1781 A Study on the Use Intention of Smart Phone
Authors: Zhi-Zhong Chen, Jun-Hao Lu, Jr., Shih-Ying Chueh
Abstract:
Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), this study investigates people's intention to use smartphones. The study additionally incorporates two new variables: 'self-efficacy' and 'attitude toward using'. Data were collected through a questionnaire survey, yielding 240 valid responses. After correlation analysis, reliability testing, ANOVA, t-tests and multiple regression analysis, the study finds that social influence and self-efficacy have a positive effect on use intention, and that use intention in turn has a positive effect on use behavior.
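The analysis chain in this abstract ends in a multiple regression over the 240 valid responses. As a minimal, hypothetical sketch only (synthetic data standing in for the study's survey, with made-up effect sizes), ordinary least squares can test whether two predictors have positive effects on use intention:

```python
import numpy as np

# Synthetic stand-in for the survey: 240 respondents, two predictors.
rng = np.random.default_rng(42)
n = 240
social = rng.normal(0.0, 1.0, n)      # social influence score
self_eff = rng.normal(0.0, 1.0, n)    # self-efficacy score
# Assumed "true" positive effects (0.5 and 0.4) plus noise.
intention = 0.5 * social + 0.4 * self_eff + rng.normal(0.0, 0.3, n)

# Design matrix with an intercept column; fit by least squares.
X = np.column_stack([np.ones(n), social, self_eff])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(beta)  # beta[1] and beta[2] estimate the two effects
```

Positive, significant coefficients on `beta[1]` and `beta[2]` would correspond to the "positive effect on use intention" finding reported above.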
Procedia PDF Downloads 411
1780 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authorship problems: they hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks while maintaining image quality. This paper discusses two approaches to image watermarking, the first based on Particle Swarm Optimization (PSO) and the second based on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with each approach separately to transform the cover image during the embedding process. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark bits. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. The PSO approach achieved the better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach achieved a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations and 3×3 blocks. The results indicate that a small block size can affect the quality of PSO/GA-based image watermarking because it increases the search area of the watermarked image. The best PSO results were obtained with a swarm size of 100.Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform
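The DWT embedding step and the PSNR/MSE quality metrics reported above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a one-level Haar DWT (the paper does not specify the wavelet), additively embeds watermark bits into the HH sub-band with a hypothetical strength `alpha`, and then computes PSNR and MSE between the original and watermarked images.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2D Haar transform: split into LL, LH, HL, HH sub-bands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img

def embed(img, bits, alpha=8.0):
    # Additive embedding: +alpha for a 1 bit, -alpha for a 0 bit,
    # applied to the first HH coefficients (a deliberately crude rule;
    # the paper instead selects coefficients via PSO/GA).
    ll, lh, hl, hh = haar_dwt2(img.astype(float))
    flat = hh.ravel()
    flat[:bits.size] += alpha * (2 * bits - 1)
    return haar_idwt2(ll, lh, hl, hh)

def psnr_mse(orig, marked):
    # Quality metrics for an 8-bit image (peak value 255).
    mse = np.mean((orig.astype(float) - marked) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return psnr, mse

# Demo on a tiny synthetic 8x8 "image".
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
bits = np.array([1, 0, 1, 1])
marked = embed(img, bits, alpha=8.0)
psnr, mse = psnr_mse(img, marked)
```

In the paper's setup, PSO or GA would search for the embedding positions that maximize a fitness based on the correlation coefficient; the sketch above only shows the transform and metric machinery around that search.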
Procedia PDF Downloads 228
1779 Use of Two-Dimensional Hydraulics Modeling for Design of Erosion Remedy
Authors: Ayoub El Bourtali, Abdessamed Najine, Amrou Moussa Benmoussa
Abstract:
One of the main goals of river engineering is river training, which is defined as controlling and predicting the behavior of a river: taking effective measures to eliminate the related risks and thus improve the river system. In some rivers, the riverbed continues to erode and degrade, so equilibrium is never reached. River geometric characteristics and riverbed erosion analysis are among the most complex but critical topics in river engineering and sediment hydraulics; riverbank erosion is a key response process in hydrodynamics and has a major impact on the ecological chain and on socio-economic processes. This study aims to integrate new computing technology that can analyze erosion and hydraulic problems through computer simulation and modeling. Choosing the right model remains a difficult and sensitive job for field engineers. This paper uses version 5.0.4 of the HEC-RAS model. The river section is selected according to the gauging station and the proximity of the adjustment. In this work, we demonstrate how 2D hydraulic modeling helped clarify the design and provide visuals of depths and velocities at the riverbanks and throughout the advanced structures. The Hydrologic Engineering Center's River Analysis System (HEC-RAS) 2D model was used to create a hydraulic study of the erosion model. The geometric data were generated from a 12.5 m × 12.5 m resolution digital elevation model. In addition to showing eroded or overturned river sections, the model output also shows patterns of riverbank change, which can help reduce problems caused by erosion.Keywords: 2D hydraulics model, erosion, floodplain, hydrodynamic, HEC-RAS, riverbed erosion, river morphology, resolution digital data, sediment
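HEC-RAS itself is a desktop application, but the depth and velocity quantities the abstract describes rest on standard open-channel relations. As a hedged illustration only (a 1D uniform-flow sketch using Manning's equation for a hypothetical rectangular channel, not HEC-RAS's 2D shallow-water solver), one can invert the discharge relation for normal depth and derive a mean velocity:

```python
import math

def manning_discharge(depth, width, slope, n):
    # Uniform-flow discharge Q = (1/n) * A * R^(2/3) * sqrt(S)
    # for a rectangular channel, SI units.
    area = width * depth
    wetted_perimeter = width + 2 * depth
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

def normal_depth(q_target, width, slope, n, lo=1e-6, hi=50.0, tol=1e-9):
    # Bisection works because discharge grows monotonically with depth.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if manning_discharge(mid, width, slope, n) < q_target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical design values: 20 m^3/s in a 10 m wide channel,
# bed slope 0.001, Manning's n = 0.03.
depth = normal_depth(q_target=20.0, width=10.0, slope=0.001, n=0.03)
velocity = 20.0 / (10.0 * depth)  # mean velocity = Q / A
```

Comparing such a velocity against a permissible (non-erosive) velocity for the bank material is the classic hand check behind the kind of erosion-remedy design the 2D model refines spatially.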
Procedia PDF Downloads 191