Search results for: Google model viewer
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16731

16671 Enhanced Iceberg Information Dissemination for Public and Autonomous Maritime Use

Authors: Ronald Mraz, Gary C. Kessler, Ethan Gold, John G. Cline

Abstract:

The International Ice Patrol (IIP) continually monitors iceberg activity in the North Atlantic by direct observation using ships, aircraft, and satellite imagery. Daily reports detailing the navigational boundaries of icebergs have significantly reduced the risk of iceberg contact. What is currently lacking is a means of formatting this data for automatic transmission and display of iceberg navigational boundaries in commercial navigation equipment. This paper describes the methodology and implementation of a system to format iceberg limit information for dissemination through existing radio network communications. This information then displays automatically on commercial navigation equipment. Additionally, the information is reformatted for Google Earth rendering of iceberg track line limits. Having iceberg limit information automatically available in standard navigation equipment will help support fully autonomous operation of sailing vessels.
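
As an illustration of the Google Earth rendering step, the sketch below writes an iceberg limit line as a KML LineString that Google Earth can open directly; the vertex coordinates and file name are placeholders, not IIP data.

```python
# Hypothetical sketch: render an iceberg limit line for Google Earth by
# writing a KML LineString from (lat, lon) vertices of a daily limit.
iceberg_limit = [(48.0, -52.0), (46.5, -48.0), (44.0, -46.5), (42.0, -48.5)]

def to_kml(vertices, name="Iceberg limit"):
    # KML expects lon,lat[,alt] triplets separated by whitespace
    coords = " ".join(f"{lon},{lat},0" for lat, lon in vertices)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <LineString><coordinates>{coords}</coordinates></LineString>
  </Placemark>
</kml>"""

with open("iceberg_limit.kml", "w") as f:
    f.write(to_kml(iceberg_limit))
```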

Keywords: iceberg, iceberg risk, iceberg track lines, AIS messaging, International Ice Patrol, North American Ice Service, Google Earth, autonomous surface vessels

Procedia PDF Downloads 110
16670 Conducting Computational Physics Laboratory Course Using Cloud Storage Space

Authors: Ajay Wadhwa

Abstract:

A laboratory course on computational physics differs from conventional lab courses on other topics in physics, such as mechanics, heat, and optics, because it involves active participation of the teacher as well as one-to-one interaction between teacher and student. The course content requires the teacher to teach a programming language as well as numerical methods, along with their applications in physics. The task becomes more daunting when about 90% of the students in the class have no previous experience with any programming language. In the presented work, we describe a methodology for conducting the computational physics course using the Google Drive and Dropitto.me cloud storage services. We evaluated the performance in a class of sixty students by dividing them equally into four groups. One of the groups was designated the peer group on which the presented methodology was tested. The other groups were taught using the conventional method of classroom lectures. In order to assess our methodology, we analyzed the performance of students in four class tests. A study of statistical parameters such as the mean, standard deviation, and Z-test hypothesis revealed that the cloud-storage-based methodology is more efficient than the conventional method of teaching.
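
A minimal sketch of the Z-test comparison described above, on invented scores for a cloud-storage group and a lecture group (the numbers are illustrative, not the study's data):

```python
# Two-sample Z-test on class-test scores; reject equal means at the 5%
# level when |z| > 1.96. All scores below are invented placeholders.
import math
import statistics

cloud_group = [72, 68, 75, 80, 71, 69, 77, 74, 70, 73, 76, 72, 78, 74, 71]
lecture_group = [65, 70, 62, 68, 66, 71, 64, 63, 69, 67, 66, 65, 70, 64, 68]

def z_statistic(a, b):
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

z = z_statistic(cloud_group, lecture_group)
print(f"z = {z:.2f}, significant at 5% = {abs(z) > 1.96}")
```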

Keywords: computational physics, Z-test hypothesis, cloud storage, Google Drive

Procedia PDF Downloads 277
16669 Installing Cloud Computing Model for E-Businesses in Small Organizations

Authors: Khader Titi

Abstract:

Developments in information technology have changed the way businesses work. Organizations are required to become visible online and stay connected to take advantage of cost reductions and improved use of existing resources. Acceptance and application areas of cloud computing have increased significantly since it was presented by Google in 2007. Cloud computing has attracted the attention of IT enterprises, especially e-business enterprises. Environmental costs are currently a great issue when enterprises adopt e-business, but with the coming of cloud computing, most of this problem can be solved. Organizations around the world are facing continued budget challenges and increases in the size of their computational data, so they need to find a way to deliver their services to clients as economically as possible without compromising the achievement of anticipated outcomes. E-business companies need to provide better services to satisfy their clients. In this research, the researcher proposes a paradigm that uses and deploys a cloud computing environment for e-business in small enterprises. Cloud computing might be a suitable model for implementing e-business and e-commerce architecture to improve efficiency and user satisfaction.

Keywords: E-commerce, cloud computing, B2C, SaaS

Procedia PDF Downloads 292
16668 Risk of Fatal and Non-Fatal Coronary Heart Disease and Stroke Events among Adult Patients with Hypertension: Basic Markov Model Inputs for Evaluating Cost-Effectiveness of Hypertension Treatment: Systematic Review of Cohort Studies

Authors: Mende Mensa Sorato, Majid Davari, Abbas Kebriaeezadeh, Nizal Sarrafzadegan, Tamiru Shibru, Behzad Fatemi

Abstract:

Markov models, such as cardiovascular disease (CVD) policy model based simulations, are used for evaluating the cost-effectiveness of hypertension treatment. Stroke, angina, myocardial infarction (MI), cardiac arrest, and all-cause mortality were included in this model. Hypertension is a risk factor for a number of vascular and cardiac complications and CVD outcomes. Objective: This systematic review was conducted to evaluate the comprehensiveness of this model across different regions globally. Methods: We searched articles written in the English language in PubMed/Medline, Ovid/Medline, Embase, Scopus, Web of Science, and Google Scholar with a systematic search query. Results: Thirteen cohort studies involving a total of 2,165,770 participants (1,666,554 hypertensive adults and 499,226 adults with treatment-resistant hypertension) were included in this review. Hypertension is clearly associated with coronary heart disease (CHD) and stroke mortality, unstable angina, stable angina, MI, heart failure (HF), sudden cardiac death, transient ischemic attack, ischemic stroke, subarachnoid hemorrhage, intracranial hemorrhage, peripheral arterial disease (PAD), and abdominal aortic aneurysm (AAA). The association between HF and hypertension varies across regions. Treatment-resistant hypertension is associated with a higher relative risk of major cardiovascular events and all-cause mortality when compared with non-resistant hypertension; however, it is not included in the previous CVD policy model. Conclusion: The CVD policy model can be used in most regions for evaluating the cost-effectiveness of hypertension treatment. However, hypertension is highly associated with HF in Latin America, the Caribbean, Eastern Europe, and Sub-Saharan Africa; it is therefore important to include HF in the CVD policy model when evaluating the cost-effectiveness of hypertension treatment in these regions. We do not suggest including PAD and AAA in the CVD policy model due to a lack of sufficient evidence. Researchers should consider the effect of treatment-resistant hypertension either by including it in the basic model or when setting the model assumptions.
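
As a sketch of how such a Markov cohort model evolves, consider the toy simulation below; the states follow the abstract, but every transition probability is invented for illustration and would be replaced by region-specific estimates.

```python
# Toy Markov cohort model of the kind used in CVD policy simulations.
# States and annual transition probabilities are illustrative only.
import numpy as np

states = ["well", "post-stroke", "post-MI", "heart failure", "dead"]
# Row = from-state, column = to-state; each row sums to 1.
P = np.array([
    [0.92, 0.02, 0.02, 0.02, 0.02],  # well
    [0.00, 0.88, 0.02, 0.04, 0.06],  # post-stroke
    [0.00, 0.02, 0.88, 0.05, 0.05],  # post-MI
    [0.00, 0.00, 0.00, 0.88, 0.12],  # heart failure
    [0.00, 0.00, 0.00, 0.00, 1.00],  # dead (absorbing)
])

cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # everyone starts well
for year in range(10):                         # ten annual cycles
    cohort = cohort @ P

for state, share in zip(states, cohort):
    print(f"{state:>13}: {share:.3f}")
```

Treatment-resistant hypertension, as the abstract suggests, could be represented by a second cohort run with elevated transition risks.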

Keywords: cardiovascular disease policy model, cost-effectiveness analysis, hypertension, systematic review, twelve major cardiovascular events

Procedia PDF Downloads 52
16667 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

In recent decades, medical imaging has been dominated by the use of costly film media for the review and archiving of medical investigations. However, due to developments in network technologies and the widespread acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows querying and retrieval of images, which can be viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic site pages for medical image visualization and processing were created using JavaScript and HTML5. The XAMPP Apache server was used to create a local web server for testing and deployment of the dynamic site. The web-based viewer was connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique was applied, in which the 2-D discrete wavelet transform decomposes the image and, after thresholding, the wavelet coefficients are transmitted with entropy encoding to decrease transmission time and storage cost. The performance of the compression was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
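
A minimal sketch of the compression step, assuming the PyWavelets package, with a synthetic array standing in for a DICOM pixel array:

```python
# 2-D DWT compression sketch: decompose with 'coif3', zero small
# coefficients, reconstruct, and report MSE / PSNR / zeroed fraction.
import numpy as np
import pywt

image = np.random.rand(256, 256)          # placeholder for a DICOM frame

coeffs = pywt.wavedec2(image, "coif3", level=3)   # 2-D DWT decomposition
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = 0.05 * np.max(np.abs(arr))
arr_t = pywt.threshold(arr, threshold, mode="hard")  # drop small coefficients

reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"), "coif3")

mse = np.mean((image - reconstructed[:256, :256]) ** 2)
psnr = 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")
zeroed = 1 - np.count_nonzero(arr_t) / arr_t.size
print(f"MSE={mse:.5f}  PSNR={psnr:.1f} dB  zeroed={zeroed:.2%}")
```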

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 139
16666 Spatio-Temporal Assessment of Urban Growth and Land Use Change in Islamabad Using Object-Based Classification Method

Authors: Rabia Shabbir, Sheikh Saeed Ahmad, Amna Butt

Abstract:

Rapid land use changes have taken place in Islamabad, the capital city of Pakistan, over the past decades due to accelerated urbanization and industrialization. In this study, land use changes in the metropolitan area of Islamabad were observed through the combined use of GIS and satellite remote sensing over a period of 15 years. High-resolution Google Earth images from 2000-2015 were downloaded, and an object-based classification method was applied using eCognition software. Information regarding urban settlements, industrial areas, barren land, agricultural areas, vegetation, water, and transportation infrastructure was extracted. The results showed that the city experienced spatial expansion, rapid urban growth, land use change, and expanding transportation infrastructure. The study concluded that the integration of GIS and remote sensing is an effective approach for analyzing the spatial pattern of urban growth and land use change.
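
eCognition itself is proprietary, but the object-based idea can be sketched with open-source tools: segment the scene into image objects, then classify per object rather than per pixel. The snippet below is a rough analogue only, with a random array standing in for a Google Earth tile.

```python
# Object-based classification sketch: SLIC segmentation produces image
# objects; per-object mean colour becomes a feature vector for any
# supervised classifier. The RGB array is a synthetic placeholder.
import numpy as np
from skimage.segmentation import slic

rgb = np.random.rand(200, 200, 3)                     # placeholder tile
objects = slic(rgb, n_segments=300, compactness=10)   # image objects

features = np.array([
    rgb[objects == label].mean(axis=0)                # mean colour per object
    for label in np.unique(objects)
])
print(features.shape)   # (n_objects, 3) -> feed to a supervised classifier
```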

Keywords: land use change, urban growth, Islamabad, object-based classification, Google Earth, remote sensing, GIS

Procedia PDF Downloads 130
16665 A Comparative and Doctrinal Analysis towards the Investigation of a Right to Be Forgotten in Hong Kong

Authors: Jojo Y. C. Mo

Abstract:

Memories are good. They remind us of people, places and experiences that we cherish. But memories cannot be changed, and there may well be memories that we do not want to remember. This is particularly true in relation to information which causes us embarrassment and humiliation, or simply because it is private – we all want to erase or delete such information. This desire to delete was recently recognised by the Court of Justice of the European Union in the 2014 case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, in which the court ordered Google to remove links to some information about the complainant which he wished to be removed. This so-called 'right to be forgotten' received serious attention, and significantly, the European Council and the European Parliament enacted the General Data Protection Regulation (GDPR) to provide a more structured and normative framework for the implementation of the right to be forgotten across the EU. This development in data protection laws will undoubtedly have a significant impact on companies and corporations not just within the EU but outside as well. Hong Kong, being one of the world's leading financial and commercial centers as well as one of the first jurisdictions in Asia to implement a comprehensive piece of data protection legislation, is therefore a jurisdiction worth looking into. This article aims to investigate the following: a) whether there is a right to be forgotten under the existing Hong Kong data protection legislation; and b) if not, whether such a provision is necessary and why. This article utilises a comparative methodology based on a study of primary and secondary resources, including scholarly articles, government and law commission reports and working papers, and relevant international treaties, constitutional documents, case law and legislation. The author primarily engages in literature and case-law review as well as comparative and doctrinal analyses. The completion of this article will provide privacy researchers with more concrete principles and data to conduct further research on privacy and data protection in Hong Kong and internationally, and will provide a basis for policy makers in assessing the rationale and need for a right to be forgotten in Hong Kong.

Keywords: privacy, right to be forgotten, data protection, Hong Kong

Procedia PDF Downloads 161
16664 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era

Authors: Cagri Baris Kasap

Abstract:

In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an 'improved apparatus', and must work toward turning consumers into producers and collaborators. In today's world, it has become a leading business model of the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon's CreateSpace Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. Under these global-market monopolies, it has become increasingly difficult to get insight into how one's writing and collaboration are used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism could very well be taken up by digital producers, or even by the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered; are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, iPhone and Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.

Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking

Procedia PDF Downloads 115
16663 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, Using Remote Sensing and Geographical Information System Techniques

Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar

Abstract:

The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. Thus, the study developed a predictive erosion model (RUSLE_Sok), deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the factors of erosion were characterized, while soil losses were quantified. Factor impacts were also measured, and the morphometry of gullies was described. Data on the five factors of RUSLE and distances to settlements, roads and rivers (K, R, LS, P, C, DS, DRd and DRv) were combined and processed following standard RS and GIS algorithms. The Harmonized World Soil Database (HWSD), a Shuttle Radar Topography Mission (SRTM) image, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), a Sentinel-2 image accessed and processed within the Google Earth Engine, the road network, and settlements were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at purposively selected sites. Factors of soil erosion showed low, moderate, to high patterns. Soil losses ranged from 0 to 32.81 tons/ha/year, classified into low (97.6%), moderate (0.2%), severe (1.1%) and very severe (1.05%) forms. The multiple regression analysis shows that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C-factor, which had a negative coefficient, all factors were positive, with contributions in the order LS>C>R>P>DRv>K>DS>DRd. Gullies range from less than 100 m to about 3 km in length. Average minimum and maximum depths at gully heads are 0.6 and 1.2 m, while those at mid-stream are 1 and 1.9 m, respectively. The minimum downstream depth is 1.3 m, while the maximum is 4.7 m. Deeper gullies exist in proximity to rivers. With minimum and maximum gully elevation values ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands is recommended.
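
The RUSLE overlay at the heart of such a model multiplies the factor rasters cell by cell, A = R x K x LS x C x P. The sketch below illustrates that step with placeholder grids; the factor values and severity thresholds are assumptions, not the study's calibration.

```python
# RUSLE-style soil-loss raster, A = R * K * LS * C * P. The arrays stand
# in for the CHIRPS-, HWSD-, SRTM- and Sentinel-2-derived factor grids.
import numpy as np

shape = (100, 100)
R  = np.full(shape, 350.0)                  # rainfall erosivity
K  = np.full(shape, 0.2)                    # soil erodibility
LS = np.random.uniform(0.1, 4.0, shape)     # slope length/steepness
C  = np.random.uniform(0.05, 0.6, shape)    # cover management
P  = np.full(shape, 0.8)                    # support practice

A = R * K * LS * C * P                      # tons/ha/year

# Illustrative severity bands: low / moderate / severe / very severe
bands = np.digitize(A, bins=[10, 20, 30])
print(A.max(), np.bincount(bands.ravel(), minlength=4) / A.size)
```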

Keywords: RUSLE_Sok, Sokoto, Google Earth Engine, Sentinel-2, erosion

Procedia PDF Downloads 40
16662 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important; many companies have whole divisions dedicated to analysis of both their own stock and that of rival companies. Linking the world of finance with artificial intelligence (AI), especially in the stock market, has been a relatively recent development. Predicting how stocks will perform considering all external factors and previous data has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the testing-set accuracy of four different models - linear regression, neural network, decision tree, and naïve Bayes - on the stocks of Apple, Google, Tesla, Amazon, UnitedHealthcare, Exxon Mobil, JPMorgan Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. For the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set had similar results, except that the decision tree model was perfect, with complete accuracy in its predictions; this suggests that the decision tree model overfitted the training set, which hurt its performance on the testing set.
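
A rough sketch of this four-model comparison with scikit-learn is shown below; the CSV file name, feature choice, and chronological split are illustrative assumptions rather than the paper's exact setup.

```python
# Compare linear regression, decision tree, neural network (regressors)
# and naive Bayes (as an up/down classifier) on one ticker's daily prices.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("AAPL_daily.csv")  # assumed columns: open, high, low, close
X = df[["open", "high", "low", "close"]].shift(1).dropna()  # yesterday
y = df["close"].iloc[1:]                                    # today's close

split = int(len(X) * 0.8)           # chronological train/test split
X_train, X_test = X.iloc[:split], X.iloc[split:]
y_train, y_test = y.iloc[:split], y.iloc[split:]

for name, model in [("linear regression", LinearRegression()),
                    ("decision tree", DecisionTreeRegressor()),
                    ("neural network", MLPRegressor(max_iter=2000))]:
    model.fit(X_train, y_train)
    print(name, "R^2:", round(model.score(X_test, y_test), 3))

# Naive Bayes is a classifier, so recast the target as price direction
direction = (y.diff() > 0).astype(int).iloc[1:]   # 1 = close rose
Xd = X.iloc[1:]
nb = GaussianNB().fit(Xd.iloc[:split - 1], direction.iloc[:split - 1])
print("naive Bayes accuracy:",
      round(nb.score(Xd.iloc[split - 1:], direction.iloc[split - 1:]), 3))
```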

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 64
16661 The Publication Impact of London’s Air Ambulance on the Field of Pre-Hospital Medicine and Its Application to Air Ambulances Internationally: A Bibliometric Analysis

Authors: Maria Ahmad, Alexandra Valetopoulou, Michael D. Christian

Abstract:

Background: London's Air Ambulance (LAA) provides advanced pre-hospital trauma care across London, bringing specialist resources and expert trauma teams to patients. Since its inception 32 years ago, LAA has treated over 40,000 pre-hospital patients and significantly contributed to pre-hospital patient care in London. To the authors' best knowledge, this is the first analysis to quantify the magnitude of the publication impact of LAA on the international field of pre-hospital medicine. Methods: We searched the Scopus, Web of Science, Google Scholar and PubMed databases to identify LAA-focused articles. These were defined as articles on the topic of pre-hospital medicine which either utilised data from LAA, focused on LAA patients, or were authored by LAA clinicians. A bibliometric analysis was conducted, and the impact of each eligible article was classified as either high (the article directly influenced the change or creation of clinical guidelines), medium (the article was referenced in clinical guidelines or had >20 Google Scholar citations or >10 PubMed citations), or low impact (the article had <20 Google Scholar citations or <10 PubMed citations). Results: The literature search yielded 1,120 articles in total. 198 articles met our inclusion criteria, and their full text was analysed to determine the level of impact. 19 articles were classified as high-impact, 76 as medium-impact, and 103 as low-impact. 20 of the 76 medium-impact articles were referenced in clinical guidelines but had not prompted changes to the guidelines. Conclusion: To our knowledge, this review is the first to quantify the significant publication impact of LAA within the field of pre-hospital medicine over the last 32 years. LAA publications have focused on and driven clinical innovations in trauma care, particularly in pre-hospital anaesthesia, haemorrhage control, and major incidents, with many impacting national and international guidelines. We recommend a greater emphasis on multidisciplinary pre-hospital collaboration in publications from future research and quality improvement projects across all pre-hospital services.
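
The tiering rule from the Methods section can be expressed as a small function (the field names are illustrative):

```python
# Impact classification exactly as described: guideline change -> high;
# guideline citation or >20 Google Scholar / >10 PubMed citations ->
# medium; otherwise low.
def classify_impact(changed_guidelines: bool, cited_in_guidelines: bool,
                    scholar_cites: int, pubmed_cites: int) -> str:
    if changed_guidelines:
        return "high"
    if cited_in_guidelines or scholar_cites > 20 or pubmed_cites > 10:
        return "medium"
    return "low"

print(classify_impact(False, True, 12, 4))   # -> medium
```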

Keywords: air ambulance, pre-hospital medicine, London’s Air Ambulance, London HEMS

Procedia PDF Downloads 56
16660 Ubiquitous Life People Informatics Engine (U-Life PIE): Wearable Health Promotion System

Authors: Yi-Ping Lo, Shi-Yao Wei, Chih-Chun Ma

Abstract:

Since Google launched Google Glass in 2012, a number of commercial wearable devices have been released, such as smart belts, smart bands, smart shoes, and smart clothes. However, most of these devices act as sensors that display measurement readings, and few provide interactive feedback to the user. Furthermore, these devices are single-task devices that are not able to communicate with each other. In this paper, a new health promotion system, the Ubiquitous Life People Informatics Engine (U-Life PIE), is presented. This engine consists of the People Informatics Engine (PIE) and an interactive user interface. PIE collects all the data from compatible devices, analyzes the data comprehensively, and communicates between devices via various application programming interfaces. All the data and information are stored on the PIE unit; therefore, the user is able to view instant and historical data on their mobile devices at any time. The system also provides real-time, hands-free feedback and instructions through the user interface visually, acoustically, and tactilely. These feedback and instructions prompt the user to adjust their posture or habits in order to avoid physical injuries and prevent illness.

Keywords: machine learning, wearable devices, user interface, user experience, internet of things

Procedia PDF Downloads 262
16659 Logistic Regression Model versus Additive Model for Recurrent Event Data

Authors: Entisar A. Elgmati

Abstract:

Recurrent infant diarrhea is studied using daily data collected in Salvador, Brazil over one year and three months. A logistic regression model is fitted instead of Aalen's additive model, using the same covariates that were used in the analysis with the additive model. The model gives reasonably similar results to those of the additive regression model. In addition, the problem of the estimated conditional probabilities not being constrained between zero and one in the additive model is solved here. Martingale residuals, which have been used to judge the goodness of fit of the additive model, are also shown to be useful for judging the goodness of fit of the logistic model.
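
A minimal sketch of the logistic alternative, fitting daily 0/1 recurrence indicators with statsmodels on simulated covariates that stand in for the study's, illustrates the bounded-probability point:

```python
# Logistic regression on daily recurrence indicators; unlike the additive
# model, predicted probabilities stay within [0, 1]. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days = 450
age_months = rng.uniform(1, 36, n_days)        # placeholder covariate
prior_episode = rng.integers(0, 2, n_days)     # placeholder covariate
X = sm.add_constant(np.column_stack([age_months, prior_episode]))

# Daily outcome: did a new diarrhea episode start today?
p_true = 1 / (1 + np.exp(-(-3.0 + 0.02 * age_months + 0.8 * prior_episode)))
y = rng.binomial(1, p_true)

fit = sm.Logit(y, X).fit(disp=False)
print(fit.params, fit.predict(X).min(), fit.predict(X).max())
```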

Keywords: additive model, cumulative probabilities, infant diarrhoea, recurrent event

Procedia PDF Downloads 608
16658 Pedagogy of Possibility: Exploring the TVET of Southern African Workers on Foreign Vessels Mediated by Ubiquitous Google and Microsoft Apps

Authors: Robin Ferguson

Abstract:

The context which this paper explores is the provision of Technical and Vocational Education and Training (TVET) to southern African workers at sea on local and foreign vessels using a blended learning approach. The pedagogical challenge of providing quality education in this context is that multiple African and foreign languages and cultural norms are found amongst the all-male crew, and there are widely differing levels of education, low levels of digital literacy, and limited connectivity. The methodology used is a nested case study. The study describes the mechanisms used to provide ongoing, real-time workplace TVET on two foreign vessels. Some training was done in person when the vessels came into port; however, the majority of the TVET was achieved from shore to ship using a combination of commonly available Google and Microsoft apps and WhatsApp. Voice, video and text in multiple languages were used to accommodate different learning styles. The learning was supported by the development of learning networks using social media. This paper also reflects on the shore-based organisational change processes required to support learning at sea. The conceptual framework used is the Theory of Practice Architectures (TPA), as it provides a site-ontological perspective on the sayings/thinkings, doings and relatings of this workplace training, which is multiplanar as it plays out at sea and ashore, in person and online. Using TPA, the overarching practice architectures and supporting structures which confound or enable these learning practices are revealed. The contribution which this paper makes is an insight into an innovative vocational pedagogy which promotes ICT-mediated learning amongst workers who have low levels of literacy and limited ICT access and who work and live in remote places. It is a pedagogy of possibility which crosses the digital divide.

Keywords: theory of practice architecture, Microsoft, Google, WhatsApp, vocational pedagogy, mariners, distributed workplaces

Procedia PDF Downloads 43
16657 Integrative Biology Teaching and Learning Model Based on STEM Education

Authors: Narupot Putwattana

Abstract:

Changes in the global situation, such as environmental and economic crises, have brought a new perspective to science education called integrative biology. STEM has been increasingly mentioned in educational research as an approach which combines concepts in Science (S), Technology (T), Engineering (E) and Mathematics (M) and applies them in the teaching and learning process so as to strengthen 21st-century skills such as creativity and critical thinking. Recent studies have demonstrated STEM as a pedagogy which incorporates the engineering process into science classroom activities. So far, pedagogical content for STEM explaining content in biology has been scarce. A qualitative literature review was conducted to gather articles from electronic databases (Google Scholar). STEM education, engineering design, and teaching and learning of biology were used as the main keywords to find research involving the application of STEM in biology teaching and learning. All articles were analyzed to obtain an appropriate teaching and learning model that unifies the core concepts of biology. The synthesized model comprised engineering design, inquiry-based learning, biological prototyping and biologically-inspired design (BID). STEM content and context integration were used as the theoretical framework to create the integrative biology instructional model for STEM education. Content from several disciplines, such as biology, engineering, and technology, was incorporated into inquiry-based learning to build a biological prototype. Direct and indirect integration were used to bring this knowledge into the biology-related STEM strategy. Meanwhile, engineering design and BID provided the occupational context for engineers and biologists. Technological and mathematical aspects should be examined in terms of co-teaching methods. Lastly, other variables, such as critical thinking and problem-solving skills, should be given more consideration in further research.

Keywords: biomimicry, engineering approach, STEM education, teaching and learning model

Procedia PDF Downloads 219
16656 Strategic Evaluation of Existing Drainage System in Apalit, Pampanga

Authors: Jennifer de Jesus, Ares Baron Talusan, Steven Valerio

Abstract:

This paper conducts an evaluation of the drainage system in a specific village in Apalit, Pampanga, using a geographic information system to identify inadequate drainage lines that need rehabilitation to aid with the flooding problem in the area. The researchers utilized two methods and one software package to strategically assess each drainage line in the village: the two methods were the rational method and Manning's formula for open channel flow, whose results were compared against each other, and the software used was Google Earth Pro by 2020 Google LLC. A line section is adequate if it satisfies QManning > QRational; otherwise, it is inadequate, and its dimensions must be recomputed until it becomes adequate. The software was used to visualize the data collected from the computations and clearly show in which areas the drainage lines were adequate or not. The researchers concluded that the drainage system should be considered inadequate, seeing as most of the lines are unable to accommodate certain intensities of rainfall, and that line rehabilitation is a must to proceed.
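
The adequacy check can be sketched directly from the two formulas; every numeric input below is illustrative, not an as-built value from Apalit.

```python
# Compare Manning full-flow capacity with the rational-method peak runoff
# for one drainage section; adequate when QManning > QRational.
import math

def q_manning(n, area_m2, wetted_perimeter_m, slope):
    """Capacity (m^3/s): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    r = area_m2 / wetted_perimeter_m            # hydraulic radius
    return (1.0 / n) * area_m2 * r ** (2 / 3) * math.sqrt(slope)

def q_rational(c, intensity_mm_per_hr, catchment_ha):
    """Peak runoff (m^3/s): Q = C*i*A/360, i in mm/h, A in hectares."""
    return c * intensity_mm_per_hr * catchment_ha / 360.0

q_cap = q_manning(n=0.013, area_m2=0.9 * 0.6,
                  wetted_perimeter_m=0.9 + 2 * 0.6, slope=0.002)
q_peak = q_rational(c=0.8, intensity_mm_per_hr=120.0, catchment_ha=1.5)
print(f"QManning={q_cap:.3f}  QRational={q_peak:.3f}  adequate={q_cap > q_peak}")
```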

Keywords: strategic evaluation, drainage system, as-built plans, inadequacy, rainfall intensity-duration-frequency data, rational method, Manning's equation for open channel flow

Procedia PDF Downloads 101
16655 Artificial Intelligence Tax Simulator to Minimize Tax Liability for Multinational Corporations

Authors: Sean Goltz, Michael Mayo

Abstract:

The purpose of this research is to use the Global-Regulation.com database of the world's laws, focusing on tax treaties between countries, to create an AI-driven tax simulator that will run an AI agent through potential tax scenarios across countries. The AI agent's goal is to identify the scenario that will result in the minimum tax liability based on tax treaties between countries. The results will be visualized as a three-dimensional matrix in an online web application. Multinational corporations run their business through multiple countries. These countries, in turn, have tax treaties with many other countries to regulate the payment of taxes on income that is transferred between them. As a result, planning the best tax scenario across multiple countries and numerous tax treaties is almost impossible. This research proposes to use the Global-Regulation.com database of world laws in English (machine translated by the Google and Microsoft APIs) to create a simulator that includes the information in the tax treaties. Once ready, an AI agent will be sent through the simulator to identify the scenario that results in the minimum tax liability. Identifying the best tax scenario across countries may save multinational corporations, like Google, billions of dollars annually. Given the nature of the raw data and the domain of taxes (i.e., numbers), this is promising ground for employing artificial intelligence towards a practical and beneficial purpose.
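
One hedged way to frame the route search: treat countries as nodes and treaty withholding rates as edge costs, and maximize the after-tax fraction retained. Maximizing a product of (1 - rate) terms equals minimizing the sum of -log(1 - rate), so an ordinary shortest-path search applies. All country names and rates below are invented, and this is only a toy framing of the simulator's task.

```python
# Toy treaty-route search: best path from A to B maximizes the retained
# fraction, found via shortest path on -log(1 - rate) edge weights.
import math
import networkx as nx

withholding = {   # (from, to): invented treaty withholding rate
    ("A", "B"): 0.15, ("A", "C"): 0.05, ("C", "B"): 0.05,
    ("A", "D"): 0.10, ("D", "B"): 0.02,
}

g = nx.DiGraph()
for (src, dst), rate in withholding.items():
    g.add_edge(src, dst, weight=-math.log(1 - rate))

path = nx.shortest_path(g, "A", "B", weight="weight")
retained = math.exp(-nx.shortest_path_length(g, "A", "B", weight="weight"))
print(path, f"retained {retained:.1%} of income")   # ['A','C','B'], 90.2%
```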

Keywords: taxation, law, multinational, corporation

Procedia PDF Downloads 167
16654 A Script for Presentation to the Management of a Teaching Hospital on DXplain Clinical Decision Support System

Authors: Jacob Nortey

Abstract:

Introduction: In recent years, there has been enormous success in discoveries of scientific knowledge in medicine, coupled with the advancement of technology. Despite all these successes, the diagnosis and treatment of diseases have become complex. According to the Ibero-American Study of Adverse Effects (IBEAS), about 10% of hospital patients suffer from secondary damage during the care process, and approximately 2% die from this process. Many clinical decision support systems have been developed to help mitigate some healthcare medical errors. Method: Relevant databases were searched, including ones specific to clinical decision support systems (that is, using Google Scholar, PubMed and general Google searches). The articles were then screened for a comprehensive overview of the functionality, consultative style and statistical usage of the DXplain clinical decision support system. Results: Inferences drawn from the articles showed high usage of the DXplain clinical decision support system for problem-based learning among students in developed countries, as against little or no usage among students in low- and middle-income countries. The results also indicated high usage among general practitioners. Conclusion: Despite the challenges DXplain presents, the benefits of its usage to clinicians and students are enormous.

Keywords: DXplain, clinical decision support system, diagnosis, support systems

Procedia PDF Downloads 55
16653 An Exploratory Study of the Student’s Learning Experience by Applying Different Tools for e-Learning and e-Teaching

Authors: Angel Daniel Muñoz Guzmán

Abstract:

E-learning is becoming more common every day. For online, hybrid or traditional face-to-face programs, there are e-teaching platforms like Google Classroom, Blackboard, Moodle and Canvas, and there are platforms for full e-learning like Coursera, edX or Udemy. These tools are changing the way students acquire knowledge at schools; however, in today's changing world, that is not enough. As students' needs and skills change and become more complex, new tools will need to be added to keep them engaged and realize their learning potential. This is especially important in the current global situation that is changing everything: the COVID-19 pandemic. Due to COVID-19, education had to make an unexpected switch from face-to-face courses to digital courses. In this study, the students' learning experience is analyzed by applying different e-tools and following the Tec21 Model and a flexible and digital model, both developed by Tecnologico de Monterrey University. The evaluation of the students' learning experience was made with the quantitative PrEmo method of measuring emotions. Findings suggest that the number of e-tools used during a course does not affect the students' learning experience as much as how a teacher links every available tool and makes them work as one in order to keep the student engaged and motivated.

Keywords: student, experience, e-learning, e-teaching, e-tools, technology, education

Procedia PDF Downloads 82
16652 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event or product. However, this use raises an important question: what percentage of information available through these services is trustworthy? In particular, might some of this information be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google BERT) for sentiment classification of tweets to identify topic-independent features. Our use of a Natural Language Processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy classifying the contents of data as generated by a bot or a human, where the most accurate prior work achieved an accuracy of 92%.
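
The feature pipeline can be sketched as follows: a pretrained BERT-family sentiment model scores each post, and the signed scores feed a downstream bot/human classifier. The default checkpoint and the tiny labeled set are illustrative stand-ins, not the paper's data or model.

```python
# Topic-independent sentiment features for bot detection: BERT-family
# sentiment scores per tweet -> logistic-regression bot classifier.
from transformers import pipeline
from sklearn.linear_model import LogisticRegression

sentiment = pipeline("sentiment-analysis")  # default English BERT-family model

tweets = ["Best product ever, buy now!!!",
          "had a quiet walk by the river today",
          "AMAZING deal!!! click here",
          "my cat knocked over the plant again"]
labels = [1, 0, 1, 0]                       # 1 = bot, 0 = human (toy labels)

# Signed sentiment confidence, independent of the tweet's topic
feats = [[r["score"] if r["label"] == "POSITIVE" else -r["score"]]
         for r in sentiment(tweets)]

clf = LogisticRegression().fit(feats, labels)
print(clf.predict(feats))
```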

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 91
16651 Real-Time Fitness Monitoring with MediaPipe

Authors: Chandra Prayaga, Lakshmi Prayaga, Aaron Wade, Kyle Rank, Gopi Shankar Mallu, Sri Satya, Harsha Pola

Abstract:

In today's tech-driven world, where connectivity shapes our daily lives, maintaining physical and emotional health is crucial. Athletic trainers play a vital role in optimizing athletes' performance and preventing injuries. However, a shortage of trainers impacts the quality of care. This study introduces a vision-based exercise monitoring system leveraging Google's MediaPipe library for precise tracking of bicep curl exercises and simultaneous posture monitoring. We propose a three-stage methodology: landmark detection, side detection, and angle computation. Our system calculates angles at the elbow, wrist, neck, and torso to assess exercise form. Experimental results demonstrate the system's effectiveness in distinguishing between good and partial repetitions and evaluating body posture during exercises, providing real-time feedback for precise fitness monitoring.
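
The landmark-detection and angle-computation stages can be sketched with the MediaPipe Pose API on a single frame; the image path is a placeholder.

```python
# Stage 1 (landmark detection) and stage 3 (angle computation) with
# MediaPipe Pose; "curl_frame.jpg" is a placeholder, not the study's data.
import math
import cv2
import mediapipe as mp

def angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each (x, y)."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0]) -
                       math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang

mp_pose = mp.solutions.pose
with mp_pose.Pose(static_image_mode=True) as pose:
    frame = cv2.imread("curl_frame.jpg")
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        pts = {name: (lm[idx].x, lm[idx].y) for name, idx in [
            ("shoulder", mp_pose.PoseLandmark.LEFT_SHOULDER),
            ("elbow", mp_pose.PoseLandmark.LEFT_ELBOW),
            ("wrist", mp_pose.PoseLandmark.LEFT_WRIST)]}
        print(f"elbow angle: "
              f"{angle(pts['shoulder'], pts['elbow'], pts['wrist']):.1f} deg")
```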

Keywords: physical health, athletic trainers, fitness monitoring, technology-driven solutions, Google's MediaPipe, landmark detection, angle computation, real-time feedback

Procedia PDF Downloads 35
16650 Investigating the Effectiveness of Multilingual NLP Models for Sentiment Analysis

Authors: Othmane Touri, Sanaa El Filali, El Habib Benlahmar

Abstract:

Natural Language Processing (NLP) has gained significant attention lately. It has proved its ability to analyze and extract insights from unstructured text data in various languages. One of the most popular NLP applications is sentiment analysis, which aims to identify the sentiment expressed in a piece of text, such as positive, negative, or neutral, in multiple languages. While there are several multilingual NLP models available for sentiment analysis, there is a need to investigate their effectiveness in different contexts and applications. In this study, we investigate the effectiveness of different multilingual NLP models for sentiment analysis on a dataset of online product reviews in multiple languages. The performance of several NLP models, including the Google Cloud Natural Language API, Microsoft Azure Cognitive Services, Amazon Comprehend, Stanford CoreNLP, spaCy, and Hugging Face Transformers, is compared. The models are evaluated on several metrics, including accuracy, precision, recall, and F1 score, and their performance is compared across different categories of product reviews. To run the study, the dataset was preprocessed by cleaning and tokenizing the text data in multiple languages. Each model was then trained and tested using a cross-validation approach, randomly dividing the dataset into training and testing sets and repeating the process multiple times. A grid search approach was applied to optimize the hyperparameters of each model and select the best-performing model for each category of product reviews and language. The findings of this study provide insights into the effectiveness of different multilingual NLP models for multilingual sentiment analysis and their suitability for different languages and applications. The strengths and limitations of each model were identified, and recommendations were provided for selecting the most performant model based on the specific requirements of a project. This study contributes to the advancement of research methods in multilingual NLP and provides a practical guide for researchers and practitioners in the field.
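
As one concrete instance of such a comparison, the sketch below scores multilingual reviews with one open multilingual checkpoint from the Hugging Face hub; the reviews, gold labels, and star-to-polarity mapping are illustrative.

```python
# Multilingual sentiment scoring with an open Hugging Face checkpoint
# that outputs "1 star" .. "5 stars"; stars are mapped to polarity.
from transformers import pipeline
from sklearn.metrics import accuracy_score

clf = pipeline("sentiment-analysis",
               model="nlptown/bert-base-multilingual-uncased-sentiment")

reviews = ["Produit excellent, je recommande",        # French
           "Totalmente decepcionante, no funciona",   # Spanish
           "Das Produkt ist in Ordnung"]              # German
gold = ["positive", "negative", "neutral"]            # toy gold labels

def to_polarity(stars_label):
    stars = int(stars_label[0])
    return "negative" if stars <= 2 else "neutral" if stars == 3 else "positive"

pred = [to_polarity(r["label"]) for r in clf(reviews)]
print(pred, "accuracy:", accuracy_score(gold, pred))
```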

Keywords: NLP, multilingual, sentiment analysis, texts

Procedia PDF Downloads 59
16649 Reactive vs. Proactive Searches on the Internet After Institutional Leprosy Campaigns in Brazil: A Google Trends Analysis

Authors: Paulo Roberto Vasconcellos-Silva

Abstract:

The "Janeiro Roxo" (Purple January) campaign in Brazil aims to promote awareness of leprosy and its early symptoms. The COVID-19 pandemic has adversely affected institutional campaigns, mostly considering leprosy a neglected disease by the media. Google Trends (GT) is a tool that tracks user searches on Google, providing insights into the popularity of specific search terms. Our prior research has categorized online searches into two types: "Reactive searches," driven by transient campaign-related stimuli, and "Proactive searches," driven by personal interest in early symptoms and self-diagnosis. Using GT we studied: (i) the impact of "Janeiro Roxo" on public interest in leprosy (assessed through reactive searches) and its early symptoms (evaluated through proactive searches) over the past five years; (ii) changes in public interest during and after the COVID-19 pandemic; (iii) patterns in the dynamics of reactive and proactive searches Methods: We used GT's "Relative Search Volume" (RSV) to gauge public interest on a scale from 0 to 100. "HANSENÍASE" (HAN) was a proxy for reactive searches, and "HANSENÍASE SINTOMAS" (leprosy symptoms) (H.SIN) for proactive searches (interest in leprosy or in self-diagnosis). We analyzed 261 weeks of data from 2018 to 2023, using polynomial trend lines to model trends over this period. Analysis of Variance (ANOVA) was used to compare weekly RSV, monthly (MM) and annual means (AM). Results: Over a span of 261 weeks, there was consistently higher Relative Search Volume (RSV) for HAN compared to H.SIN. Both search terms exhibited their highest (MM) in January months during all periods. COVID-19 pandemic: a decline was observed during the pandemic years (2020-2021). There was a 24% decrease in RSV for HAN and a 32.5% decrease for H.SIN. Both HAN and H.SIN regained their pre-pandemic search levels in January 2022-2023. Breakpoints indicated abrupt changes - in the 26th week (February 2019), 55th and 213th weeks (September 2019 and 2022) related to September regional campaigns (interrupted in 2020-2021). Trend lines for HAN exhibited an upward curve between 33rd-45th week (April to June 2019), a pandemic-related downward trend between 120th-136th week (December 2020 to March 2021), and an upward trend between 220th-240th week (November 2022 to March 2023). Conclusion: The "Janeiro Roxo" campaign, along with other media-driven activities, exerts a notable influence on both reactive and proactive searches related to leprosy topics. Reactive searches, driven by campaign stimuli, significantly outnumber proactive searches. Despite the interruption of the campaign due to the pandemic, there was a subsequent resurgence in both types of searches. The recovery observed in reactive and proactive searches post-campaign interruption underscores the effectiveness of such initiatives, particularly at the national level. This suggests that regional campaigns aimed at leprosy awareness can be considered highly successful in stimulating proactive public engagement. The evaluation of internet-based campaign programs proves valuable not only for assessing their impact but also for identifying the needs of vulnerable regions. These programs can play a crucial role in integrating regions and highlighting their needs for assistance services in the context of leprosy awareness.

Keywords: health communication, leprosy, health campaigns, information seeking behavior, Google Trends, reactive searches, proactive searches, leprosy early identification

Procedia PDF Downloads 33
16648 Meta-analysis of Technology Acceptance for Mobile and Digital Libraries in Academic Settings

Authors: Nosheen Fatima Warraich

Abstract:

One of the most often used models in information systems (IS) research is the technology acceptance model (TAM). This meta-analysis measures the relationships of the TAM variables, Perceived Ease of Use (PEOU) and Perceived Usefulness (PU), with users' attitudes and behavioral intention (BI) in the context of mobile and digital libraries. It also examines the relationships of external variables (information quality and system quality) with the TAM variables (PEOU and PU) in digital library settings. The meta-analysis was performed following the PRISMA-P guidelines. Four databases (Google Scholar, Web of Science, Scopus, and LISTA) were searched according to defined criteria. The findings of this study revealed a large effect size of PU and PEOU with BI, and a large effect size of PU and PEOU with attitude. A medium effect size was found for SysQ -> PU, InfoQ -> PU, and SysQ -> PEOU, while a small effect size was found between InfoQ and PEOU. The study fills a literature gap and confirms that TAM is a valid model for the acceptance and use of technology in the mobile and digital libraries context. Thus, its findings will be helpful for developers and designers in designing and developing mobile library apps, and beneficial for library authorities and system librarians in designing and developing digital libraries in academic settings.

Keywords: technology acceptance model (TAM), perceived ease of use, perceived usefulness, information quality, system quality, meta-analysis, systematic review, digital libraries, mobile library apps

Procedia PDF Downloads 47
16647 Hope in the Ruins of 'Ozymandias': Reimagining Temporal Horizons in Felicia Hemans' 'The Image in Lava'

Authors: Lauren Schuldt Wilson

Abstract:

Felicia Hemans’ memorializing of the unwritten lives of women and the consequent allowance for marginalized voices to remember and be remembered has been considered by many critics in terms of ekphrasis and elegy, terms which privilege the question of whether Hemans’ poeticizing can represent lost voices of history or only her poetic expression. Amy Gates, Brian Elliott, and others point out Hemans’ acknowledgement of the self-projection necessary for imaginatively filling the absences of unrecorded histories. Yet, few have examined the complex temporal positioning Hemans inscribes in these moments of self-projection and imaginative historicizing. In poems like ‘The Image in Lava,’ Hemans maps not only a lost past, but also a lost potential future onto the image of a dead infant in its mother’s arms, the discovery and consideration of which moves the imagined viewer to recover and incorporate the ‘hope’ encapsulated in the figure of the infant into a reevaluation of national time embodied by the ‘relics / Left by the pomps of old.’ By examining Hemans’ acknowledgement and response to Percy Bysshe Shelley’s ‘Ozymandias,’ this essay explores how Hemans’ depictions of imaginative historicizing open new horizons of possibility and reevaluate temporal value structures by imagining previously undiscovered or unexplored potentialities of the past. Where Shelley’s poem mocks the futility of national power and time, this essay outlines Hemans’ suggestion of alternative threads of identity and temporal meaning-making which, regardless of historical veracity, exist outside of and against the structures Shelley challenges. Counter to previous readings of Hemans’ poem as celebration of either recovered or poetically constructed maternal love, this essay argues that Hemans offers a meditation on sites of reproduction—both of personal reproductive futurity and of national reproduction of power. This meditation culminates in Hemans’ gesturing towards a method of historicism by which the imagined viewer reinvigorates the sterile, ‘shattered visage’ of national time by forming temporal identity through the imagining of trans-historical hope inscribed on the infant body of the universal, individual subject rather than the broken monument of the king.

Keywords: futurity, national temporalities, reproduction, revisionary histories

Procedia PDF Downloads 140
16646 An Assessment of the Digital Transformation of Radio

Authors: Fatih Sogut

Abstract:

Developments in information technologies have caused significant changes in radio and television broadcasting. With these changes in production format, transmission techniques and service delivery, the distinction between traditional media and New Media has emerged. The viewer/listener, who was previously in a passive position, is now in an active position and has a say in many matters, including content production. Visual and auditory data transfer has diversified and become easier thanks to the phenomenon of convergence. These transformations and developments have also affected one of the oldest electronic communication tools, radio. This study attempts to explain the change in radio broadcasting in the digital age and the factors that led to this change.

Keywords: Internet, radio broadcasting, digital transformation, Internet broadcasting

Procedia PDF Downloads 147
16645 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems

Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos

Abstract:

As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications, such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, the road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imagery, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facades, and building floor plans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different hazard scenarios are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation was conducted through client surveys requiring scores for the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score at 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users from the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry. This work may be extended to automated mapping and the creation of online spatial databases for better storage of and access to real property listings, and to an interactive platform using web-based GIS.

Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model

Procedia PDF Downloads 131
16644 Mathematical Model to Quantify the Phenomenon of Democracy

Authors: Mechlouch Ridha Fethi

Abstract:

This paper presents a recent mathematical model in political science concerning democracy. The model is represented by a logarithmic equation linking the Relative Index of Democracy (RID) to the Participation Ratio (PR). First, the meanings of the different parameters of the model are presented, and the variation curve of the RID according to PR, with its different critical areas, is discussed. Second, the model is applied to a virtual group, where we show that it can be applied separately by gender. Third, it is observed that the model can be extended to different models of democracy and may be used to assess the state of democracy for international organizations such as the UNO.

Keywords: democracy, mathematics, modeling, quantification

Procedia PDF Downloads 332
16643 Assessing the Social Comfort of the Russian Population with Big Data

Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro

Abstract:

The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provides enormous opportunities for understanding human interactions at the scale of society, with rich spatial and temporal dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, whose loadings provided the weighting coefficients of the block components. According to the study, social comfort is described by 12 blocks: 'health', 'education', 'social support', 'financial situation', 'employment', 'housing', 'ethical norms', 'security', 'political stability', 'leisure', 'environment', 'infrastructure'. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which is higher than the rate of economic growth by 2.7 p.p. The value of the indicator describing social comfort in Russia is determined 26% by 'social support', 24% by 'education', 12% by 'infrastructure', 10% by 'leisure', and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and mainly related to the blocks 'security', 'political stability', and 'health', for example, 'crime rate' and 'vulnerability'. Among the 25% most unpopular queries, 99% were positive and mostly related to the blocks 'ethical norms', 'education', and 'employment', for example, 'social package' and 'recycling'. In conclusion, the introduction of the latent category 'social comfort' into the scientific vocabulary deepens the theory of the quality of life of the population by studying the involvement of the individual in society and expanding the subjective aspect of the measurement of various indicators. An integral assessment of social comfort demonstrates the overall picture of the development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.
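
Steps 2 and 3 of the methodology can be sketched as follows: rescale each series to a 10-point scale, strip trend and seasonality, then weight components by the first principal component. Random walks stand in for the cleaned Google Trends series, and the rolling-mean detrending choice is illustrative.

```python
# Composite "social comfort" index sketch: 10-point rescaling, crude
# detrending/deseasoning, then first-principal-component weights.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
weeks = pd.date_range("2010-01-01", "2021-12-31", freq="W")
blocks = pd.DataFrame(rng.normal(size=(len(weeks), 12)).cumsum(axis=0),
                      index=weeks, columns=[f"block_{i}" for i in range(12)])

scaled = 10 * (blocks - blocks.min()) / (blocks.max() - blocks.min())
detrended = scaled - scaled.rolling(52, min_periods=1).mean()  # 1-year window

pca = PCA(n_components=1).fit(detrended)
weights = np.abs(pca.components_[0]) / np.abs(pca.components_[0]).sum()
social_comfort = detrended @ weights          # summary integral indicator
print(dict(zip(blocks.columns, weights.round(3))))
```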

Keywords: big data, Google Trends, integral indicator, social comfort

Procedia PDF Downloads 173
16642 The Achievement Model of University Social Responsibility

Authors: Le Kang

Abstract:

On the research question of 'how to achieve USR', this contribution reflects on the concept of university social responsibility, identifies three achievement models of USR (the society-diversified model, the university-cooperation model, and the government-compound model), and conducts a case study to explore the characteristics of the Chinese achievement model of USR. The contribution concludes with a discussion of how the university, government and society balance demands and roles, and make the necessary strategic adjustments and innovative approaches to repair the shortcomings of each achievement model.

Keywords: modern university, USR, achievement model, compound model

Procedia PDF Downloads 725