Search results for: user
221 Evaluating the Value of Users’ Personal Information Based on Cost-Benefit Analysis
Authors: Jae Hyun Park, Sangmi Chai, Minkyun Kim
Abstract:
As users spend more time on the Internet, the probability of their personal information being exposed has been growing. The main purpose of this research is to investigate the factors and examine the relationships at work when Internet users assess the value of their private information as an economic asset. The study targets Internet users, and the value of their private information is converted into economic figures. Moreover, how that economic value changes in relation to individual attributes, dealers’ traits, and circumstantial properties is studied. The changes in the factors bearing on private information value in response to different situations are analyzed from an economic perspective. Additionally, this study examines the associations between users’ perceived risk and the value of their personal information. Using the cost-benefit analysis framework, the hypothesis that the user’s sense of private information value can be influenced by individual attributes and situational properties is tested. This research therefore attempts to answer three research objectives. First, it identifies factors that affect value recognition of users’ personal information. Second, it provides evidence that information system users’ economic valuations of information differ in response to personal, trade opponent, and situational attributes. Third, it investigates the impact of those attributes on individuals’ perceived risk. Based on the assumption that personal, trade opponent and situational attributes make an impact on users’ value recognition of private information, this research presents an understanding of the different impacts of those attributes on recognizing the value of information from the economic perspective and demonstrates the associative relationships between perceived risk and decisions on the value of users’ personal information. In order to validate our research model, this research used the regression methodology. Our results support that information breach experience and information security systems are associated with users’ perceived risk. Information control and uncertainty are also related to users’ perceived risk. Therefore, users’ perceived risk is considered a significant factor in evaluating the value of personal information. It can be differentiated by trade opponent and situational attributes. This research presents a new perspective on evaluating the value of users’ personal information in the context of perceived risk and personal, trade opponent and situational attributes. It fills a gap in the literature by showing how users’ perceived risk is associated with personal, trade opponent and situational attributes when conducting business transactions that involve providing personal information. It adds to the previous literature the finding that a relationship exists between perceived risk and the value of users’ private information in the economic perspective. It also provides meaningful insights to managers: in order to minimize the cost of information breaches, managers need to recognize the value of individuals’ personal information and decide on the proper amount of investment in protecting users’ online information privacy.
Keywords: private information, value, users, perceived risk, online information privacy, attributes
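A minimal sketch of the kind of regression test reported here (all data simulated and all variable names assumptions; statsmodels assumed as the tooling):

```python
# Sketch only: tests whether breach experience, security systems,
# information control and uncertainty are associated with perceived risk.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "breach_experience": rng.integers(0, 2, n),   # 0/1 indicator
    "security_system": rng.integers(0, 2, n),
    "info_control": rng.normal(3, 1, n),          # Likert-like scores
    "uncertainty": rng.normal(3, 1, n),
})
# Simulated outcome with the hypothesized signs, plus noise.
df["perceived_risk"] = (0.8 * df.breach_experience - 0.5 * df.security_system
                        - 0.3 * df.info_control + 0.4 * df.uncertainty
                        + rng.normal(0, 1, n))

model = smf.ols("perceived_risk ~ breach_experience + security_system"
                " + info_control + uncertainty", data=df).fit()
print(model.summary())
```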
220 DigiMesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal
Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader
Abstract:
DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an E-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices, labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (Sink Node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are transmitted directly from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSN). In the multi-hop approach, two sensor nodes communicate with the server (Sink Node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both the single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.
Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform
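As a rough illustration, a sensor node in this kind of setup can stream samples through the XBee module's serial interface; a minimal sketch, where the port name, baud rate, sampling rate, and the simulated ECG read are all assumptions (transparent serial mode assumed):

```python
# Sketch: stream timestamped ECG samples over an XBee DigiMesh module
# attached as a serial device. Values are simulated here; on the real
# platform they would come from the E-Health shield's ADC.
import time
import random
import serial  # pyserial

def read_ecg_sample() -> int:
    return random.randint(300, 700)  # stand-in for the ADC read

xbee = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)  # assumed port
try:
    while True:
        sample = read_ecg_sample()
        # One timestamped sample per line; the sink node parses these.
        xbee.write(f"{time.time():.3f},{sample}\n".encode("ascii"))
        time.sleep(0.004)  # ~250 Hz sampling (assumed)
finally:
    xbee.close()
```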
219 A Comprehensive Analysis of Factors Leading to Fatal Road Accidents in France and Its Overseas Territories
Authors: Bouthayna Hayou, Mohamed Mouloud Haddak
Abstract:
Road accidents in French overseas territories have been understudied, with relevant data often collected late and incompletely. Although these territories account for only 3% to 4% of road traffic injuries in France, their unique characteristics merit closer attention. Despite lower mobility and, consequently, lower exposure to road risks, the actual road risk in Overseas France is as high as or even higher than in Metropolitan France. Significant disparities exist not only between Metropolitan France and the overseas territories but also among the overseas territories themselves. The varying population densities in these regions do not fully explain these differences, as each territory has its own distinct vulnerabilities and road safety challenges. This analysis, based on BAAC data files from 2005 to 2018 for both Metropolitan France and the overseas departments and regions, examines key variables such as gender, age, type of road user, type of obstacle hit, type of trip, road category, traffic conditions, weather, and location of accidents. Logistic regression models were built for each region to investigate the risk factors associated with fatal road accidents, focusing on the probability of being killed versus injured. Due to insufficient data, Mayotte and the Overseas Communities (French Polynesia and New Caledonia) were not included in the models. The findings reveal that road safety is worse in the overseas territories than in Metropolitan France, particularly for vulnerable road users such as pedestrians and motorized two-wheelers. These territories present an accident profile that sits between that of Metropolitan France and that of middle-income countries. A pressing need exists to standardize accident data collection between Metropolitan and Overseas France to allow for more detailed comparative analyses. Further epidemiological studies could help identify the specific road safety issues unique to each territory, particularly with regard to socio-economic factors such as social cohesion, which may influence road safety outcomes. Moreover, the lack of data on new modes of travel, such as electric scooters, and the absence of socio-economic details of accident victims complicate the evaluation of emerging risk factors. Additional research, including sociological and psychosocial studies, is essential for understanding road users' behavior and perceptions of road risk, which could also provide valuable insights into accident trends in peri-urban areas in France.
Keywords: multivariate logistic regression, overseas France, road safety, road traffic accident, territorial inequalities
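A minimal sketch of the per-region logistic regression described here (the file name and column names are assumptions, not the BAAC schema; statsmodels assumed):

```python
# Sketch: fit one logit model per region for P(killed vs. injured).
import pandas as pd
import statsmodels.formula.api as smf

baac = pd.read_csv("baac_2005_2018.csv")  # assumed file layout

for region, sub in baac.groupby("region"):
    model = smf.logit(
        "killed ~ C(gender) + age + C(user_type) + C(obstacle) "
        "+ C(trip_type) + C(road_category) + C(weather)",
        data=sub,
    ).fit(disp=False)
    # Positive coefficients raise the odds of a fatal outcome.
    print(region, model.params.round(3).to_dict())
```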
218 Optimization Based Design of Decelerating Duct for Pumpjets
Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan
Abstract:
Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles nowadays. The reasons for the frequent use of pumpjets as a propulsion system are that they have higher relative efficiency at high speeds and better cavitation and acoustic performance than their rivals. Pumpjets are composed of a rotor, a stator, and a duct, and there are two different pumpjet configurations depending on the desired hydrodynamic characteristics: with an accelerating duct and with a decelerating duct. Pumpjets with an accelerating channel are used on cargo ships, where they work at low speeds and high loading conditions. The working principle of this type of pumpjet is to maximize thrust by reducing the pressure of the fluid through the channel and expelling the fluid from the channel with high momentum. For decelerating ducted pumpjets, on the other hand, the main consideration is to prevent the occurrence of cavitation by increasing the pressure of the fluid around the rotor region. By postponing cavitation, acoustic noise naturally falls, so decelerating ducted systems are used in noise-sensitive vehicle systems where acoustic performance is vital. Duct design therefore becomes a crucial step in pumpjet design. This study aims to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle by using proper optimization tools. The target output of this optimization process is a duct design that maximizes fluid pressure around the rotor region, to prevent cavitation, and minimizes drag force. There are two main optimization techniques that could be utilized for this process: parameter-based optimization and gradient-based optimization. While a parameter-based algorithm offers more major changes in the geometry of interest, which helps the user get close to the desired geometry, a gradient-based algorithm deals with minor local changes in geometry. In parameter-based optimization, the geometry should first be parameterized. Then, by defining upper and lower limits for these parameters, a design space is created. Finally, with a proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercial parameter-based optimization code is used. To parameterize the geometry, the duct is represented with B-spline curves and control points. These control points have limits on their x and y coordinates. Within these limits, the design space is generated.
Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization
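A minimal sketch of this kind of parameterization, with all control-point values and bounds invented for illustration (SciPy assumed):

```python
# Sketch: represent a duct profile as a cubic B-spline over five control
# points, and bound each y-coordinate to span a design space.
import numpy as np
from scipy.interpolate import splev

# (x, y) control points of the duct profile -- illustrative only.
ctrl = np.array([[0.00, 0.30], [0.25, 0.34], [0.50, 0.33],
                 [0.75, 0.31], [1.00, 0.29]])
# Upper/lower limits on each control point's y define the design space.
y_lower, y_upper = ctrl[:, 1] - 0.02, ctrl[:, 1] + 0.02

k = 3  # cubic B-spline
# Clamped knot vector: len(t) = n_ctrl + k + 1 = 9, so the curve passes
# through the end control points.
t = np.concatenate(([0.0] * (k + 1), [0.5], [1.0] * (k + 1)))

u = np.linspace(0.0, 1.0, 200)
x = splev(u, (t, ctrl[:, 0], k))
y = splev(u, (t, ctrl[:, 1], k))
print(f"profile spans x=[{x[0]:.2f}, {x[-1]:.2f}], "
      f"y range [{y.min():.3f}, {y.max():.3f}]")
```

An optimizer would then move each y-coordinate within its bounds and re-evaluate pressure and drag for the resulting profile.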
217 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which means integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use. As a result, the number of applications based on Open Data is increasing. However, each government has its own procedures for publishing its data, which leads to a variety of data set formats, because there are no international standards specifying the formats of data sets from Open Data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. There are some software tools developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For two years, the government of Madrid has been publishing its Open Data bases on environmental indicators in real time. In the same way, other governments have published Open Data sets on the environment (such as Andalucia or Bilbao). All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to run and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format and the analysis process can start in a computationally better way. So the tool presented in this work has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open source technology. R is a really powerful open source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries for building a graphic interface, like Shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides any developer with an official real-time integrated data set of environmental data in Spain, so that they can build their own applications.
Keywords: open data, R language, data integration, environmental data
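The authors implemented the pipeline in Java/Oracle and R; purely as an illustration of the per-source adapter idea, here is a minimal Python sketch in which every field and file name is a hypothetical placeholder:

```python
# Sketch: each government source gets an adapter that maps its columns
# onto one shared schema, so the analysis layer sees a single format.
import pandas as pd

SHARED_COLUMNS = ["timestamp", "station", "indicator", "value", "unit"]

def adapt_madrid(raw: pd.DataFrame) -> pd.DataFrame:
    # Column names on the Madrid portal are an assumption here.
    out = raw.rename(columns={"fecha": "timestamp", "estacion": "station",
                              "magnitud": "indicator", "valor": "value"})
    out["unit"] = "ug/m3"
    return out[SHARED_COLUMNS]

frames = [adapt_madrid(pd.read_csv("madrid_air_quality.csv"))]
integrated = pd.concat(frames, ignore_index=True)
print(integrated.head())
```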
216 Technological Affordances of a Mobile Fitness Application - A Role of Escapism and Social Outcome Expectation
Authors: Inje Cho
Abstract:
The leading health risks threatening the world today are associated with a modern lifestyle characterized by sedentary behavior, stress, anxiety, and an obesogenic food environment. To counter this alarming trend, the Centers for Disease Control and Prevention have proffered physical activity guidelines to bolster physical engagement. Concurrently, the spread of smartphones and mobile applications has been accompanied by a proliferation of fitness applications aimed at invigorating exercise adherence and real-time activity monitoring. Grounded in uses and gratifications theory, this study delves into the technological affordances of mobile fitness applications, discerning the mediating influences of escapism and social outcome expectations on attitudes and exercise intention. The theory explains how individuals employ distinct communication mediums to satiate their exigencies and desires. Technological affordances manifest as attributes of emerging technologies that galvanize personal engagement in physical activities. Features of mobile fitness applications include affordances for goal setting, virtual rewards, peer support, and exercise information. Escapism, denoting the inclination to disengage from normal routines, has emerged as a salient motivator for the consumption of new media. This study postulates that individuals' perceptions of technological affordances within mobile fitness applications can affect escapism and social outcome expectations, potentially influencing attitude and behavior formation. An integrated model has thus been developed to empirically examine the interrelationships between technological affordances, escapism, social outcome expectations, and exercise intention. Structural equation modelling serves as the methodological tool, and a cohort of 400 Fitbit users shall be enlisted from Prolific, a data collection platform. A sequence of multivariate data analyses will scrutinize both the measurement and the hypothesized structural models. By delving into the effects of mobile fitness applications, this study contributes to the growing body of new media studies in sport management. Moreover, the novel integration of uses and gratifications theory and technological affordances, via the prism of escapism, illustrates the dynamics that underlie mobile fitness users' attitudes and behavioral intentions. The findings from this study therefore contribute to theoretical understanding and provide pragmatic insights to developers and practitioners in optimizing the impact of mobile fitness applications.
Keywords: technological affordances, uses and gratification, mobile fitness apps, escapism, physical activity
215 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material
Authors: H. M. Alfrihidi, H.A. Albarakaty
Abstract:
Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam-hardening effect normally provided by the flattening filter (FF). The possibility of improving FFF beam quality using filters made from low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials for low-energy photons is higher than that for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space (phsp) file provided by Varian Medical Systems was used as the radiation source in the simulation. This phsp file was scored just above the jaws, at 27.88 cm from the target. The linac from the jaws downward was constructed, and the passing radiation was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phsp file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectra in the phsp files. Then, the dose distributions resulting from these beams were simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated according to the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam. The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV with a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% when using a steel and an Al filter, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is much smaller than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters made from low-Z materials decrease the surface dose and increase the D10 dose, allowing high-dose delivery to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than the effect of the FF.
Keywords: flattening filter free, monte carlo, radiotherapy, surface dose
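A minimal sketch of how surface dose and D10 are read off a percentage-depth-dose curve; the dose curve below is a toy function, not simulation output:

```python
# Sketch: normalize a depth-dose curve to its maximum (giving the PDD),
# then read the surface dose (depth 0) and D10 (depth 10 cm) from it.
import numpy as np

depth_cm = np.linspace(0, 30, 301)  # scoring depths in the water phantom
# Toy curve: build-up region plus exponential attenuation (illustrative).
dose = np.exp(-((depth_cm - 1.5) ** 2) / 2.0) + 0.6 * np.exp(-depth_cm / 14)

pdd = 100 * dose / dose.max()                 # percentage depth dose
surface_dose = pdd[0]                          # value at depth 0
d10 = pdd[np.searchsorted(depth_cm, 10.0)]     # value at 10 cm depth
print(f"surface dose {surface_dose:.1f}%, D10 {d10:.1f}%")
```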
214 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgement, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability, rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
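A minimal sketch of the Monte-Carlo route to PF, with every parameter value and the simplified planar-slide FS model invented for illustration; a real analysis would call a limit equilibrium solver instead:

```python
# Sketch: sample shear-strength inputs, compute FS per sample, and take
# PF as the fraction of samples with FS < 1.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Mohr-Coulomb inputs treated as random variables (assumed distributions).
cohesion_kpa = rng.normal(25.0, 5.0, n)   # cohesion, kPa
phi_deg = rng.normal(35.0, 3.0, n)        # friction angle, degrees

def factor_of_safety(c_kpa, phi_deg):
    """Planar sliding block, dry conditions (all geometry assumed)."""
    psi = np.radians(30.0)   # sliding-plane dip
    weight = 2000.0          # block weight per metre of slope, kN
    area = 40.0              # sliding-plane area per metre, m^2
    resisting = c_kpa * area + weight * np.cos(psi) * np.tan(np.radians(phi_deg))
    driving = weight * np.sin(psi)
    return resisting / driving

fs = factor_of_safety(cohesion_kpa, phi_deg)
pf = np.mean(fs < 1.0)
print(f"PF = {pf:.3%}, mean FS = {fs.mean():.2f}")
```

Swapping the sampled distributions, or treating some inputs deterministically, is exactly the kind of user discretion the paper argues can drive markedly different PF results.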
213 Dynamic Simulation of Disintegration of Wood Chips Caused by Impact and Collisions during the Steam Explosion Pre-Treatment
Authors: Muhammad Muzamal, Anders Rasmuson
Abstract:
Wood material is extensively considered as a raw material for the production of bio-polymers, bio-fuels and value-added chemicals. However, the shortcoming of using wood as a raw material is that enzymatic hydrolysis of wood is difficult, because the accessibility of enzymes to hemicelluloses and cellulose is hindered by the complex chemical and physical structure of the wood. The steam explosion (SE) pre-treatment improves the digestion of wood material by creating both chemical and physical modifications in the wood. In this process, wood chips are first treated with steam at high pressure and temperature for a certain time in a steam treatment vessel. During this time, the chemical linkages between lignin and polysaccharides are cleaved and the stiffness of the material decreases. Then the steam discharge valve is rapidly opened, and the steam and wood chips exit the vessel at very high speed. These fast-moving wood chips collide with each other and with the walls of the equipment and disintegrate into small pieces. More damaged and disintegrated wood has a larger surface area and increased accessibility to hemicelluloses and cellulose. The energy required for the same increase in specific surface area is 70% higher with a conventional mechanical technique, i.e. an attrition mill, than with the steam explosion process. The mechanism of wood disintegration during the SE pre-treatment has been studied very little. In this study, we have simulated the collision and impact of wood chips (dimensions 20 mm x 20 mm x 4 mm) with each other and with the walls of the vessel. The wood chips are simulated as a 3D orthotropic material. Damage and fracture in the wood material have been modelled using Hashin's 3D damage model. This has been accomplished by developing a user-defined subroutine and implementing it in the FE software ABAQUS. The elastic and strength properties used for the simulations are those of spruce wood at 12% and 30% moisture content and at 20 and 160 °C, because the impacted wood chips are pre-treated with steam at high temperature and pressure. We have simulated several cases to study the effects of the elastic and strength properties of the wood, the velocity of the moving chip, and the orientation of the wood chip at the time of impact on the damage in the wood chips. The disintegration patterns captured by the simulations are very similar to those observed in experimentally obtained steam-exploded wood. Simulation results show that wood chips moving with higher velocity disintegrate more. Moisture content and temperature decrease the elastic properties and increase damage. Impacts and collisions in specific directions cause easier disintegration. This model can be used to design steam explosion equipment efficiently.
Keywords: dynamic simulation, disintegration of wood, impact, steam explosion pretreatment
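A minimal sketch of the kind of Hashin-type failure check such a user subroutine evaluates at each integration point; the strength values and the trial stress state below are invented, and only two of the criterion's modes are shown:

```python
# Sketch: Hashin-type failure indices for an orthotropic material.
# An index >= 1 means that failure mode's criterion is met.

# Strengths along/across the grain, MPa (assumed illustrative values).
XT, XC = 90.0, 40.0   # tension / compression parallel to grain
YT, YC = 4.0, 5.5     # tension / compression perpendicular to grain
S12 = 7.0             # shear strength

def fiber_tension_index(s11: float, s12: float) -> float:
    """Tensile failure parallel to the grain (s11 > 0)."""
    return (s11 / XT) ** 2 + (s12 / S12) ** 2

def matrix_tension_index(s22: float, s12: float) -> float:
    """Tensile failure perpendicular to the grain (s22 > 0)."""
    return (s22 / YT) ** 2 + (s12 / S12) ** 2

s11, s22, s12 = 60.0, 2.5, 3.0   # trial stress state, MPa
print("fiber index:", round(fiber_tension_index(s11, s12), 3))
print("matrix index:", round(matrix_tension_index(s22, s12), 3))
```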
212 Emotions Evoked by Robots - Comparison of Older Adults and Students
Authors: Stephanie Lehmann, Esther Ruf, Sabina Misoch
Abstract:
Background: Due to demographic change and the shortage of skilled nursing staff, assistive robots are being built to support older adults at home and nursing staff in care institutions. When assistive robots facilitate tasks that are usually performed by humans, user acceptance is essential. Even though they are an important aspect of acceptance, emotions towards different assistive robots and different situations of robot use have so far not been examined in detail. The appearance of assistive robots can trigger emotions that affect their acceptance. Acceptance of robots is assumed to be greater when they look more human-like; however, too much human similarity can be counterproductive. Regarding different groups, it is assumed that older adults have a more negative attitude towards robots than younger adults. Within the framework of a simulated robot study, the aim was to investigate the emotions of older adults, compared to students, towards robots of different appearance and in different situations, and so contribute to a deeper view of the emotions influencing acceptance. Methods: In a questionnaire study, vignettes were used to assess emotions toward robots in different situations and of different appearance. The vignettes were composed of two situations (service and care) shown by video and four pictures of robots varying in human similarity (machine-like to android). The combinations of vignettes were randomly distributed to the participants. One hundred forty-two older adults and 35 bachelor students of nursing participated. They filled out a questionnaire that surveyed 30 positive and 30 negative emotions. For each group, older adults and students, a sum score of “positive emotions” and a sum score of “negative emotions” were calculated. Mean value, standard deviation, or n for sample size and % for frequencies, according to the scale level, were calculated. For differences in the scores of positive and negative emotions across the different situations, t-tests were calculated. Results: Overall, older adults reported significantly more positive emotions than students towards robots in general. Students reported significantly more negative emotions than older adults. Regarding the two different situations, the results were similar for the care situation, with older adults reporting more positive emotions and fewer negative emotions than students. In the service situation, older adults reported significantly more positive emotions; negative emotions did not differ significantly from the students'. Regarding the appearance of the robot, there were no significant differences in the emotions reported towards the machine-like, mechanical-human-like and human-like appearances. Regarding the android robot, students reported significantly more negative emotions than older adults. Conclusion: There were differences in the emotions reported by older adults compared to students. Older adults reported more positive emotions, and students reported more negative emotions, towards robots in different situations and with different appearances. It can be assumed that older adults have a different attitude towards the use of robots than younger people, especially young adults in the health sector. Therefore, the use of robots in the service or care sector should not be rejected rashly based on the attitudes of younger persons, without equally considering the attitudes of older adults.
Keywords: emotions, robots, seniors, young adults
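A minimal sketch of the group comparison reported here, using simulated sum scores (the means and spreads are invented; only the group sizes match the study):

```python
# Sketch: independent-samples t-test on sum scores of positive emotions,
# older adults (n=142) vs. students (n=35).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 30 positive-emotion items summed per participant (simulated scores).
older_adults = rng.normal(95, 15, 142)
students = rng.normal(80, 15, 35)

# Welch's t-test avoids assuming equal variances across unequal groups.
t, p = stats.ttest_ind(older_adults, students, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```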
211 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns optimizing UI test automation for large-scale web applications. The test target application is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for this web application. The quality assurance team must execute many manual user interface test cases in the development process to confirm there are no regression bugs. The team automated 346 test cases; the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient flow when introducing scalability into a traditional test automation environment; in order to introduce scalability into test automation efficiently, a scripting language was adopted. The scalability mechanism is mainly implemented with AWS serverless technology, the Elastic Container Service. The definition of scalability here is the ability to automatically set up computers for test automation and to increase or decrease the number of computers running those tests. This means the scalable mechanism can help test cases run in parallel, so test execution time is dramatically decreased. Introducing scalable test automation does more than just reduce test execution time: there is a possibility that some challenging bugs, such as race conditions, are detected, since test cases can be executed at the same time. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing, since they do not reach the front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system. It confirmed the optimization of test case execution time and the detection of a challenging bug. This study first describes the detailed architecture of the scalable test automation environment, then describes the actual reduction in execution time and an example of a challenging issue that was detected.
Keywords: aws, elastic container service, scalability, serverless, ui automation test
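A minimal sketch of the scaling step such an environment performs; the cluster name, task definition, container name, and subnet below are all assumptions, not the study's actual configuration:

```python
# Sketch: launch N parallel Fargate tasks on ECS, each running one shard
# of the Selenium test suite, selected via an environment variable.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def launch_test_shards(num_shards: int) -> None:
    for shard in range(num_shards):
        ecs.run_task(
            cluster="ui-test-cluster",               # assumed name
            taskDefinition="selenium-ui-tests:1",    # assumed name
            launchType="FARGATE",
            count=1,
            overrides={"containerOverrides": [{
                "name": "ui-tests",                  # assumed container
                "environment": [{"name": "TEST_SHARD",
                                 "value": str(shard)}],
            }]},
            networkConfiguration={"awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # assumed
                "assignPublicIp": "ENABLED",
            }},
        )

launch_test_shards(8)  # 8 containers run the suite concurrently
```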
210 Automated Building Internal Layout Design Incorporating Post-Earthquake Evacuation Considerations
Authors: Sajjad Hassanpour, Vicente A. González, Yang Zou, Jiamou Liu
Abstract:
Earthquakes pose a significant threat to both structural and non-structural elements in buildings, putting human lives at risk. Effective post-earthquake evacuation is critical for ensuring the safety of building occupants. However, current design practices often neglect the integration of post-earthquake evacuation considerations into the early-stage architectural design process. To address this gap, this paper presents a novel automated internal architectural layout generation tool that optimizes post-earthquake evacuation performance. The tool takes an initial plain floor plan as input, along with specific requirements from the user/architect, such as minimum room dimensions, corridor width, and exit lengths. Based on these inputs, the tool first randomly generates different architectural layouts. Second, human post-earthquake evacuation behaviour is thoroughly assessed for each generated layout using the advanced Agent-Based Building Earthquake Evacuation Simulation (AB2E2S) model. The AB2E2S prototype is a post-earthquake evacuation simulation tool that incorporates variables related to earthquake intensity, architectural layout, and human factors. It leverages a hierarchical agent-based simulation approach, incorporating reinforcement learning to mimic human behaviour during evacuation. The model evaluates different layout options and provides feedback on evacuation flow, evacuation time, and possible casualties due to earthquake non-structural damage. By integrating the AB2E2S model into the automated layout generation tool, architects and designers can obtain optimized architectural layouts that prioritize post-earthquake evacuation performance. Using the tool, architects and designers can explore various design alternatives, considering different minimum room requirements, corridor widths, and exit lengths. This approach ensures that evacuation considerations are embedded in the early stages of the design process. In conclusion, this research presents an innovative automated internal architectural layout generation tool that integrates post-earthquake evacuation simulation. By incorporating evacuation considerations into the early-stage design process, architects and designers can optimize building layouts for improved post-earthquake evacuation performance. This tool empowers professionals to create resilient designs that prioritize the safety of building occupants in the face of seismic events.
Keywords: agent-based simulation, automation in design, architectural layout, post-earthquake evacuation behavior
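A minimal sketch of the first stage, random layout generation honoring a minimum room dimension; the floor dimensions and minimum size are invented, the splitting scheme is one simple possibility, and a real generator would also carve corridors and exits before handing the layout to the evacuation simulation:

```python
# Sketch: recursively partition a rectangular floor plan into rooms,
# never cutting a room below MIN_DIM on either side.
import random

MIN_DIM = 3.0  # minimum room width/depth in metres (assumed)

def split(rect, rng):
    """Recursively split rect = (x, y, w, h) into rooms."""
    x, y, w, h = rect
    can_w, can_h = w >= 2 * MIN_DIM, h >= 2 * MIN_DIM
    if not (can_w or can_h):              # too small to split further
        return [rect]
    if can_w and (w >= h or not can_h):   # cut the splittable long side
        cut = rng.uniform(MIN_DIM, w - MIN_DIM)
        return (split((x, y, cut, h), rng)
                + split((x + cut, y, w - cut, h), rng))
    cut = rng.uniform(MIN_DIM, h - MIN_DIM)
    return (split((x, y, w, cut), rng)
            + split((x, y + cut, w, h - cut), rng))

rng = random.Random(7)
rooms = split((0.0, 0.0, 20.0, 12.0), rng)  # 20 m x 12 m floor (assumed)
print(len(rooms), "rooms generated")
```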
209 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK
Authors: Esther Ekundayo
Abstract:
The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in their creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances the AI, to the user of the AI, who most likely ends up creating the work in question. Others have suggested that this issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright Designs and Patents Act 1988. However, under UK and Nigerian copyright law, only human-created works are recognised, which is usually assessed based on their originality. This simply means that the work must have been created as a result of its author's creative and intellectual abilities and not copied. Such works are literary, dramatic, musical and artistic works, and they have recently been a topic of discussion with regard to generative artificial intelligence (Generative AI). Unlike Nigeria, the UK CDPA recognises computer-generated works and vests their authorship in the human who made the necessary arrangements for their creation. However, making the necessary arrangements was, in the case of Nova Productions Ltd v Mazooma Games Ltd, interpreted similarly to the traditional authorship principle, which requires the skills of the creator to prove originality. Some recommend that, since computer-generated works complicate this issue, AI-generated works should enter the public domain, as authorship cannot be allocated to the AI itself. Additionally, the UKIPO, recognising these issues in line with the growing AI trend, launched a public consultation in 2022 that considered whether computer-generated works should be protected at all and why, and, if not, whether a new right with a different scope and term of protection should be introduced. However, it concluded that the issue of computer-generated works would be revisited, as AI was still in its early stages. Given the recent developments in this area with regard to Generative AI systems such as ChatGPT, Midjourney, DALL-E and AIVA, amongst others, which can produce human-like copyright creations, it is important to examine the relevant issues that have the potential to alter traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions but with slightly differing approaches to this area, this research seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK's CGW provision worthy of emulation by Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors in both jurisdictions? This research further examines possible barriers to the implementation of the new law in Nigeria, such as limited technical expertise and a lack of awareness among policymakers, amongst others.
Keywords: authorship, artificial intelligence (AI), generative ai, computer-generated works, copyright, technology
208 Hand Motion Tracking as a Human-Computer Interaction for People with Cerebral Palsy
Authors: Ana Teixeira, Joao Orvalho
Abstract:
This paper describes experiments using Scratch games, carried out by students of the Master in Human-Computer Interaction (HCI) at IPC Coimbra, to check the feasibility of employing the gestures of users with cerebral palsy as an alternative means of interaction with a computer. The main focus of this work is to study the usability of a web camera as a motion tracking device to achieve virtual human-computer interaction by individuals with CP. An approach to human-computer interaction (HCI) is presented in which individuals with cerebral palsy react to and interact with a Scratch game through the use of a webcam as an external interaction device. Motion tracking interaction is an emerging technology that is becoming more useful, effective and affordable. However, it raises new questions from the HCI viewpoint, for example, which environments are most suitable for interaction by users with disabilities. In our case, we put emphasis on the accessibility and usability aspects of such interaction devices to meet the special needs of people with disabilities, and specifically people with CP. Despite the fact that our work has just started, preliminary results show that, in general, computer vision interaction systems are very useful; in some cases, these systems are the only way by which some people can interact with a computer. The purpose of the experiments was to verify two hypotheses: 1) people with cerebral palsy can interact with a computer using their natural gestures, 2) Scratch games can be a research tool in experiments with disabled young people. A game in Scratch with three levels was created to be played through the use of a webcam. This device permits the detection of certain key points of the user's body, which allows the head, arms and especially the hands to be treated as the most important aspects of recognition. Tests with 5 individuals of different ages and genders were made over 3 days, in periods of 30 minutes with each participant. For a more extensive and reliable statistical analysis, the numbers of both participants and repetitions should be increased in further investigations. However, already at this stage of research, it is possible to draw some conclusions. The first, and most important, is that simple Scratch games on the computer can be a research tool for investigating the interaction with a computer performed by young persons with CP using intentional gestures. Measurements performed with the assistance of games are attractive for young disabled users. The second important conclusion is that they are able to play Scratch games using their gestures. Therefore, the proposed interaction method is promising for them as a human-computer interface. In the future, we plan the development of multimodal interfaces that combine various computer vision devices with other input devices, improvements in the existing systems to better accommodate the special needs of individuals, and experiments with a larger number of participants.
Keywords: motion tracking, cerebral palsy, rehabilitation, HCI
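A minimal sketch of webcam motion tracking by frame differencing; the threshold and frame count are assumptions, and the centroid of the largest moving region stands in for the hand position that would drive the game (OpenCV assumed):

```python
# Sketch: detect the largest moving region between consecutive webcam
# frames and report its centre, a crude proxy for a hand gesture.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(300):  # process ~10 s of video (assumed ~30 fps)
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        print("moving region centre:", x + w // 2, y + h // 2)
    prev = gray

cap.release()
```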
207 Bringing German History to Tourists
Authors: Gudrun Görlitz, Christian Schölzel, Alexander Vollmar
Abstract:
“Sites of Jewish Life in Berlin 1933-1945. Between Persecution and Self-assertion” was realized in a project funded by the European Regional Development Fund. A smartphone app and an associated website enable tourists and other participants of this educational offer to learn, in a serious way, more about the life of Jews in the German capital during the Nazi era. Texts, photos, and video and audio recordings communicate the historical content. Interactive maps (both current and historical) make it possible to use predefined or self-combined routes. One of the manifold challenges was to create a broadly ranging guide in which all detailed pieces of information are well linked with each other. This enables heterogeneous groups of potential users to find a wide range of specific information corresponding to their particular wishes and interests. The multitude of potential ways to navigate through the diversified information will (hopefully) lead users to return to the app and website a second or third time with continued interest. For this purpose, 90 locations, many of them situated in Berlin's city centre, have been chosen. For all of them, text, picture and/or audio/video material gives extensive information. Suggested combinations of several of these “site stories” lead to detailed excursion routes. Events and biographies are also presented. A few of the implemented biographies are especially enriched with source material concerning the (forced) migration of these persons during the Nazi era. All this was done in close and fruitful interdisciplinary cooperation between computer scientists and historians. The suggested conference paper aims to show the challenges of shaping complex source material for practical use by different user groups in a proper technical and didactic way. Based on the historical research in archives, museums, libraries and digital resources, the quantitative dimension of the project can be sized accordingly. The paper focuses on the following historiographical and technical aspects:
- Shaping the text material didactically for use in new media, especially a smartphone app running on differing platforms;
- Geo-referencing of the sites on historical and current map material;
- Overlaying old and new maps to present and find the sites;
- Using Augmented Reality technologies to re-visualize destroyed buildings;
- Visualization of black-and-white picture material;
- Presentation of historical footage and the resulting problem of needing too much storage space;
- Financial and juridical aspects of gaining copyrights to present archival material.
Keywords: smartphone app, history, tourists, German
206 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method
Authors: Berker Bayazit, Gulgun Kayakutlu
Abstract:
The World Energy Outlook shows that energy markets will change substantially within the next few decades. First, the action plans determined in line with COP21 and the aim of reducing CO₂ emissions already have an impact on countries' policies. Secondly, swiftly changing technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behaviors of countries. Furthermore, the share of electricity in global energy consumption is expected to be as high as 40 percent in 2040. Electric vehicles, heat pumps, new electronic devices and digital improvements will be the outstanding technologies, and such innovations will be the testimony of the market modifications. In order to meet the sharply increasing electricity demand caused by these technologies, countries have to make new investments in electricity production, transmission and distribution. The electricity generation mix, specifically, becomes vital both for the prevention of CO₂ emissions and for the reduction of power prices. The majority of research and development investments are made in the field of electricity generation. Hence, primary source diversity and source planning of electricity generation are crucial for improving citizens' well-being. Approaches considering CO₂ emissions and the total cost of generation are necessary but not sufficient to evaluate and construct the generation mix. On the other hand, employment and positive contributions to macroeconomic values are important factors that have to be taken into consideration. This study aims to constitute new investments in renewable energies (solar, wind, geothermal, biogas and hydropower) between 2018-2023 under four different goals. Therefore, a multi-objective programming model is proposed to optimize the goals of minimizing CO₂ emissions, the investment amount and the electricity sales price, while maximizing total employment and the positive contribution to the current deficit. In order to avoid user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with current policies. Our study shows that new policies, like huge capacity allotments, might be debatable, although the obligation for local production is positive. Improvements in grid infrastructure and redesigned support for biogas and geothermal can be recommended.
Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy
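A minimal sketch of a multi-objective LP over the five technologies; all coefficients, bounds and weights below are invented, and a simple weighted sum replaces the paper's Dinkelbach/Guzel combination, which this sketch does not reproduce (SciPy assumed):

```python
# Sketch: weighted-sum scalarization of three of the objectives
# (CO2, cost, employment) over a fixed total capacity.
import numpy as np
from scipy.optimize import linprog

techs = ["solar", "wind", "geothermal", "biogas", "hydro"]
co2 = np.array([45.0, 11.0, 38.0, 50.0, 24.0])   # gCO2/kWh (assumed)
cost = np.array([0.9, 1.1, 2.8, 3.0, 1.5])        # $/W (assumed)
jobs = np.array([1.2, 0.8, 0.9, 1.5, 0.6])        # jobs/MW (assumed)

def norm(v):  # scale each objective to [0, 1] before weighting
    return (v - v.min()) / (v.max() - v.min())

w_co2, w_cost, w_jobs = 0.4, 0.4, 0.2
# Employment enters with a minus sign because linprog minimizes.
c = w_co2 * norm(co2) + w_cost * norm(cost) - w_jobs * norm(jobs)

# Total new capacity fixed at 10 GW, each technology capped at 4 GW
# (both assumed).
res = linprog(c, A_eq=[[1.0] * 5], b_eq=[10.0],
              bounds=[(0.0, 4.0)] * 5, method="highs")
print(dict(zip(techs, res.x.round(2))))
```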
205 A Pilot Study of Umbilical Cord Mini-Clamp
Authors: Seng Sing Tan
Abstract:
Clamping of the umbilical cord after birth is widely practiced as a part of labor management. Further improvements were proposed to produce a smaller, lighter and more comfortable clamp while still maintaining current standards of clamping. A detachable holder was also developed to facilitate the clamping process. This pilot study was conducted to evaluate the efficacy of the mini-clamp in terms of the tightness of its seal and the firmness of its grip on the umbilical cord. The study was carried out at National University Hospital, using 5 sets of placental cord. 18 samples of approximately 10 cm each were harvested. The test results showed that the mini-clamp was able to stop the flow through the cord after clamping without rupturing the cord. All slip tests passed with a load of 0.2 kg. In the pressure testing, saline at 30 kPa was injected into the umbilical veins. Although there was no physical sign of fluid leaking through the end secured by the mini-clamp, the results showed the samples were not able to sustain the pressure set during the tests. 12 out of the 18 test samples had a pressure drop of more than 7% in 30 seconds. During the pressure leak test, it was observed in several samples that, when pressurized, small droplets of saline were growing on the outer surface of the cord lining membrane. It was thus hypothesized that the pressure drop was likely caused by the perfusion of the injected saline through the Wharton's jelly and the cord lining membrane. The average pressure in the umbilical vein is roughly 2.67 kPa (20 mmHg), less than 10% of the 30 kPa (~225 mmHg) set for the pressure testing. As such, the set pressure could be over-specified, leading to undesirable outcomes. The development of the mini-clamp was an attempt to increase the comfort of newborn babies while maintaining the usability and efficacy of a hospital-grade umbilical cord clamp. It would be unfair to attribute the pressure leak in this study fully to the design and efficacy of the mini-clamp. Considering the unexpected leakage of saline through the umbilical membrane due to the over-specified pressure exerted on the umbilical veins, improvements can definitely be made to the existing experimental setup to obtain a more accurate and conclusive outcome. If proven conclusive and effective, the mini-clamp with a detachable holder could be a smaller and potentially cheaper alternative to existing umbilical cord clamps. In addition, future clinical trials could be conducted to determine the user-friendliness of the mini-clamp and to evaluate its practicality in the clinical setting by labor ward clinicians. A further potential improvement concerns the sustainability of the mini-clamp: a biodegradable clamp would revolutionise the industry in this increasingly sustainability-minded world.
Keywords: leak test, mini-clamp, slip test, umbilical cord
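A quick arithmetic check of the pressures quoted above, converting between kPa and mmHg (1 mmHg = 0.133322 kPa):

```python
# Sketch: verify that 30 kPa is ~225 mmHg and that 20 mmHg is ~2.67 kPa,
# i.e. the test pressure exceeds the typical vein pressure by >10x.
KPA_PER_MMHG = 0.133322

test_kpa = 30.0
vein_mmhg = 20.0

print(f"test pressure: {test_kpa / KPA_PER_MMHG:.0f} mmHg")   # ~225
print(f"vein pressure: {vein_mmhg * KPA_PER_MMHG:.2f} kPa")   # ~2.67
print(f"ratio: {test_kpa / (vein_mmhg * KPA_PER_MMHG):.1f}x")
```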
204 Internet Protocol Television: A Research Study of Undergraduate Students Analyzing the Effects
Authors: Sabri Serkan Gulluoglu
Abstract:
The study is aimed at examining the effects of Internet marketing with IPTV on human beings. Internet marketing with IPTV is emerging as an integral part of business strategies in today's technologically advanced world, and business activities all over the world are influenced by the emergence of this modern marketing tool. As the population of Internet and online users increases, new research issues have arisen concerning the demographics and psychographics of the online user and the opportunities for a product or service. In recent years, we have seen a tendency of various services converging to ubiquitous Internet Protocol based networks. Besides traditional Internet applications such as web browsing, email, file transfer, and so forth, new applications have been developed to replace old communication networks. IPTV is one of these solutions. In the future, we expect a single network, the IP network, to provide the services that are carried by different networks today. To find some important effects of a video-based technology market website on the Internet, we decided to administer a questionnaire to university students. Recent research shows that in Turkey, people aged 20 to 24 use the Internet when they buy electronic devices such as cell phones, computers, etc. The questionnaire contains ten categorized questions to evaluate the effects of IPTV when shopping. Thirty students were selected to fill out the questionnaire after watching an IPTV channel video for 10 minutes. The sample IPTV channel is “buy.com”, which looks like an e-commerce site with an integrated IPTV channel. The questionnaire for the survey is constructed using the Likert scale, a bipolar scaling method used to measure either a positive or negative response to a statement (Likert, R.); it is a system commonly used in surveys. Following the Likert scale, “the respondents are asked to indicate their degree of agreement with the statement or any kind of subjective or objective evaluation of the statement. Traditionally a five-point scale is used under this methodology”. For this study, the five-point scale system is also used, and the respondents were asked to express their opinions about the given statement by picking an answer from the given 5 options: “Strongly disagree, Disagree, Neither agree nor disagree, Agree and Strongly agree”. These options were also rated from 1-5 (Strongly disagree, Disagree, Neither agree nor disagree, Agree, Strongly agree). On the basis of the data gathered from the questionnaire, results are drawn in order to obtain figures and graphical representations of the study results that can demonstrate the outcomes of the research clearly.
Keywords: IPTV, internet marketing, online, e-commerce, video based technology
203 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor under Liquefaction and Scour
Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg, Christian Windt
Abstract:
When a structure is installed on a seabed, its presence influences the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, wave or current flow breaking, and pressure differentials around the seabed sediment. These changes allow the local seabed sediment to be carried off and result in scour (erosion), which is a threat to the structure's stability. In recent decades, research and knowledge on scour around fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, but very limited research has addressed scour and liquefaction for gravity anchors, particularly for floating Tension Leg Platform (TLP) substructures. Due to the importance of, and the need to enhance, knowledge of scour and liquefaction around marine structures, MarTERA funded a three-year (2020-2023) research program called NuLIMAS (Numerical Modeling of Liquefaction Around Marine Structures), with a group consisting of European institutions (universities, laboratories, and consulting companies). The objective of this study is to build a numerical model that replicates reality, which helps to simulate (predict) underwater flow conditions and to study different marine scour and liquefaction situations. It helps to design a heavyweight anchor for the TLP substructure and to minimize the time and expenditure on experiments. The achieved results and the numerical model will also be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM. A conceptual design of a heavyweight anchor for the TLP substructure is developed, taking into consideration the available state-of-the-art knowledge on scour and liquefaction and referring to previous existing designs. These conceptual designs are validated against available, similar experimental benchmark data and also against CFD numerical benchmark standards (a CFD quality assurance study). A CFD optimization model/tool is designed to minimize the effects of fluid flow, scour, and liquefaction. A parameterized model is also developed to automate the calculation process and reduce user interaction. Parameters such as the anchor lowering process, flow-optimized outer contours, the seabed interaction study, and FSSI (fluid-structure-seabed interaction) are investigated and used to shape the model so as to build an optimized anchor.
Keywords: gravity anchor, liquefaction, scour, computational fluid dynamics
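A minimal sketch of the kind of case automation described above; the directory layout follows OpenFOAM conventions, but the template name, variant names, and the INLET_VELOCITY placeholder are all assumptions:

```python
# Sketch: generate one OpenFOAM case directory per anchor design
# variant by copying a template case and patching a boundary value.
import shutil
from pathlib import Path

TEMPLATE = Path("baseCase")  # assumed pre-built OpenFOAM case template
variants = [
    {"name": "contourA", "inlet_velocity": 0.8},  # m/s, assumed
    {"name": "contourB", "inlet_velocity": 1.2},
]

for v in variants:
    case = Path(f"run_{v['name']}")
    shutil.copytree(TEMPLATE, case)
    # Patch the inlet velocity in 0/U via simple string substitution;
    # the template is assumed to contain the INLET_VELOCITY marker.
    u_file = case / "0" / "U"
    text = u_file.read_text().replace(
        "INLET_VELOCITY", f"({v['inlet_velocity']} 0 0)")
    u_file.write_text(text)
    print("prepared", case)
```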
202 Development of Extruded Prawn Snack Using Prawn Flavor Powder from Prawn Head Waste
Authors: S. K. Sharma, P. Kumar, Pratibha Singh
Abstract:
Snack consumption is growing in popularity every day in India, and a broad range of these items is available in the market. End-user interest in ready-to-eat snack foods is constantly growing, mainly due to their convenience, ample accessibility, appearance, taste and texture. Food extrusion has been practiced for over fifty years. Its role was initially limited to mixing and forming cereal products. Although thermoplastic extrusion has been successful for starch products, extrusion of proteins has achieved only limited success. In this study, a value-added extruded prawn product was prepared with prawn flavor powder and corn flour using a twin-screw extruder. The prawn flavor concentrate was prepared from fresh prawn heads (Solenocera indica). To prepare the flavor concentrate, prawn heads were washed with potable water and blended with 200 ml of 3% salt solution per 250 g head weight to make a slurry, which was then put in muslin cloth and boiled with salt and starch solution for 10 minutes, cooled to room temperature and filtered; starch was added to the filtrate, which was made into powder in an electric drier at 43-45°C. The mixture was passed through the twin-screw extruder (a co-rotating twin-screw extruder, Basic Technology Pvt. Ltd., Kolkata), which was operated at a particular speed of rotation, die diameter, temperature, moisture, and fish powder concentration. Many trial runs were conducted to set the process variables. The different extrudates produced after each trial were examined for quality and characteristics. The effects of temperature, moisture, screw speed, protein, fat, ash, thiobarbituric acid (TBA) number and expansion ratio were studied. In all four trials, the moisture, temperature, speed and die diameter used were 20%, 100°C, 350 rpm and 4 mm, respectively. The ratio of prawn powder to cornstarch used in the different trials ranged between 2:98 and 10:90. The storage characteristics of the final product were studied using three different types of packaging under nitrogen flushing, i.e. (a) 12-µm polyester, 12-µm metalized polyester, 60-µm polyethylene (metalized polyester A); (b) 12-µm metalized polyester, 37.5-µm polyethylene (metalized polyester B); (c) 12-µm polyethylene, 9-µm aluminium foil, 37.5-µm polyethylene (aluminium foil). The organoleptic analysis was carried out on a 9-point hedonic scale. The study revealed that the fried product packed in aluminium foil under nitrogen flushing would remain acceptable for more than three months.
Keywords: extruded product, prawn flavor, twin-screw extruder, storage characteristics
Procedia PDF Downloads 140201 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings
Authors: Stefan Peters, Phillip Roetman
Abstract:
The global loss of biodiversity is threatening the benefits nature provides to human populations; it has become an even more pressing issue than climate change and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them alone. Thus, it is crucial to take national and local actions to support biodiversity. Australia is one of the 17 megadiverse countries in the world, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed, and implemented a framework for wildlife habitat hotspot and habitat corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality, and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and the presence of remnant indigenous vegetation species. We developed a GIS model, using Python scripting and ArcGIS Pro ModelBuilder, to establish an automated, reproducible, and adjustable geoprocessing workflow adaptable to any study area of interest. Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. Our research was applied to the case study of Burnside, a local council in Adelaide, Australia, which encompasses an area of 30 km². We applied end-user expertise-based category weightings to refine our models and optimize the use of our habitat map outputs towards informing local strategic decision-making.Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor
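To illustrate the weighted-overlay logic behind such a raster-based hotspot model, a minimal sketch follows, using NumPy as a stand-in for the authors' ArcGIS Pro workflow. The factor names, weights, synthetic rasters, and threshold are illustrative assumptions, not the study's actual data or category weightings.

```python
# A minimal sketch of a weighted raster overlay producing a habitat hotspot
# score per cell. Factor rasters are assumed pre-normalized to the 0-1 range.
import numpy as np

rows, cols = 100, 100
rng = np.random.default_rng(42)

# Hypothetical factor rasters on a common grid (normally derived from GIS data)
factors = {
    "parcel_size":     rng.random((rows, cols)),
    "canopy_cover":    rng.random((rows, cols)),
    "habitat_quality": rng.random((rows, cols)),
    "water_proximity": rng.random((rows, cols)),
}

# Preliminary category weightings; they sum to 1 so the score stays in 0-1
weights = {
    "parcel_size": 0.20,
    "canopy_cover": 0.30,
    "habitat_quality": 0.35,
    "water_proximity": 0.15,
}

# Weighted sum over all factor rasters yields the hotspot score per cell
hotspot = sum(weights[name] * raster for name, raster in factors.items())

# Cells above a chosen threshold become candidate habitat hotspots
threshold = 0.6
print(f"{(hotspot >= threshold).sum()} candidate hotspot cells of {hotspot.size}")
```

The corridor model can reuse the same pattern, feeding the hotspot raster back in as one of its weighted inputs.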
Procedia PDF Downloads 115200 GenAI Agents in Product Management: A Case Study from the Manufacturing Sector
Authors: Aron Witkowski, Andrzej Wodecki
Abstract:
Purpose: This study aims to explore the feasibility and effectiveness of utilizing Generative Artificial Intelligence (GenAI) agents as product managers within the manufacturing sector. It seeks to evaluate whether current GenAI capabilities can fulfill the complex requirements of product management and deliver comparable outcomes to human counterparts. Study Design/Methodology/Approach: This research involved the creation of a support application for product managers, utilizing high-quality sources on product management and generative AI technologies. The application was designed to assist in various aspects of product management tasks. To evaluate its effectiveness, a study was conducted involving 10 experienced product managers from the manufacturing sector. These professionals were tasked with using the application and providing feedback on the tool's responses to common questions and challenges they encounter in their daily work. The study employed a mixed-methods approach, combining quantitative assessments of the tool's performance with qualitative interviews to gather detailed insights into the user experience and perceived value of the application. Findings: The findings reveal that GenAI-based product management agents exhibit significant potential in handling routine tasks, data analysis, and predictive modeling. However, there are notable limitations in areas requiring nuanced decision-making, creativity, and complex stakeholder interactions. The case study demonstrates that while GenAI can augment human capabilities, it is not yet fully equipped to independently manage the holistic responsibilities of a product manager in the manufacturing sector. Originality/Value: This research provides an analysis of GenAI's role in product management within the manufacturing industry, contributing to the limited body of literature on the application of GenAI agents in this domain. It offers practical insights into the current capabilities and limitations of GenAI, helping organizations make informed decisions about integrating AI into their product management strategies. Implications for Academic and Practical Fields: For academia, the study suggests new avenues for research in AI-human collaboration and the development of advanced AI systems capable of higher-level managerial functions. Practically, it provides industry professionals with a nuanced understanding of how GenAI can be leveraged to enhance product management, guiding investments in AI technologies and training programs to bridge identified gaps.Keywords: generative artificial intelligence, GenAI, NPD, new product development, product management, manufacturing
Procedia PDF Downloads 49199 Research on the Optimization of Satellite Mission Scheduling
Authors: Pin-Ling Yin, Dung-Ying Lin
Abstract:
Satellites play an important role in our daily lives, from monitoring the Earth's environment and providing real-time disaster imagery to predicting extreme weather events. As technology advances and demands increase, the tasks undertaken by satellites have become increasingly complex, with more stringent resource management requirements. A common challenge in satellite mission scheduling is the limited availability of resources, including onboard memory, ground station accessibility, and satellite power. In this context, efficiently scheduling and managing increasingly complex satellite missions under constrained resources has become a critical issue that needs to be addressed. The core of Satellite Onboard Activity Planning (SOAP) lies in optimizing the scheduling of the received tasks, arranging them on a timeline to form an executable onboard mission plan. This study aims to develop an optimization model that considers the various constraints involved in satellite mission scheduling, such as the non-overlapping execution periods for certain types of tasks, the requirement that tasks must fall within the contact range of specified types of ground stations during their execution, onboard memory capacity limits, and the collaborative constraints between different types of tasks. Specifically, this research constructs a mixed-integer programming mathematical model and solves it with a commercial optimization package. However, as the problem size increases, the problem becomes more difficult to solve. Therefore, in this study, a heuristic algorithm was developed to address the challenges of using a commercial optimization package as the scale increases. The goal is to effectively plan satellite missions, maximizing the total number of executable tasks while considering task priorities and ensuring that tasks can be completed as early as possible without violating feasibility constraints. To verify the feasibility and effectiveness of the algorithm, test instances of various sizes were generated, and the results were validated through feedback from on-site users and compared against solutions obtained from a commercial optimization package. Numerical results show that the algorithm performs well under various scenarios, consistently meeting user requirements. The satellite mission scheduling algorithm proposed in this study can be flexibly extended to different types of satellite mission demands, achieving optimal resource allocation and enhancing the efficiency and effectiveness of satellite mission execution.Keywords: mixed-integer programming, meta-heuristics, optimization, resource management, satellite mission scheduling
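To make the flavor of such a formulation concrete, a minimal sketch of a task-selection MIP follows, written in Python with the open-source PuLP package rather than a commercial solver. The time discretization, task data, memory model, and constraint set are illustrative assumptions, not the authors' actual SOAP formulation.

```python
# A minimal sketch of a satellite task-scheduling MIP: pick start slots for
# tasks to maximize priority-weighted throughput with an early-start bonus,
# under simple non-overlap and onboard-memory constraints.
import pulp

# Hypothetical tasks: priority weight, memory delta, allowed start slots, duration
tasks = {
    "imaging_A":  {"w": 5, "mem": 30,  "starts": [0, 1, 2], "dur": 2},
    "downlink_B": {"w": 3, "mem": -40, "starts": [2, 3],    "dur": 1},
    "calib_C":    {"w": 1, "mem": 10,  "starts": [0, 3, 4], "dur": 1},
}
horizon = 6        # number of discrete time slots
mem_capacity = 60  # onboard memory limit

prob = pulp.LpProblem("satellite_scheduling", pulp.LpMaximize)

# x[t, s] = 1 if task t starts at slot s
x = {(t, s): pulp.LpVariable(f"x_{t}_{s}", cat="Binary")
     for t, d in tasks.items() for s in d["starts"]}

# Objective: priority-weighted task count, with a small bonus for early starts
prob += pulp.lpSum((d["w"] - 0.01 * s) * x[t, s]
                   for t, d in tasks.items() for s in d["starts"])

# Each task is scheduled at most once
for t, d in tasks.items():
    prob += pulp.lpSum(x[t, s] for s in d["starts"]) <= 1

# At most one task runs per slot (a stand-in for the per-type
# non-overlap constraints in the real model)
for slot in range(horizon):
    prob += pulp.lpSum(x[t, s]
                       for t, d in tasks.items() for s in d["starts"]
                       if s <= slot < s + d["dur"]) <= 1

# Cumulative memory use must stay within capacity at every slot
for slot in range(horizon):
    prob += pulp.lpSum(d["mem"] * x[t, s]
                       for t, d in tasks.items() for s in d["starts"]
                       if s <= slot) <= mem_capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (t, s), var in x.items():
    if var.value() == 1:
        print(f"{t} starts at slot {s}")
```

At realistic scales the binary variable count grows with tasks × slots, which is exactly where a heuristic that constructs and repairs schedules becomes preferable to solving the full MIP.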
Procedia PDF Downloads 25198 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India
Authors: Sayantan Khanra, Rojers P. Joseph
Abstract:
Background and Significance: This study is aimed at analyzing the role of demographic and service quality variables in the adoption and diffusion of e-government services among users in India. The study proposes to examine users' perceptions of e-government services and investigate the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology to be adopted in this study is Hierarchical Regression Analysis, which will help in exploring the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of demographic variables on the willingness to use e-government services is to be examined. In the second step, quality dimensions would be used as inputs to the model for explaining variance in excess of the prior contribution by the demographic variables. Present Status: Our study is in the data collection stage in collaboration with a highly reliable, authentic, and adequate source of user data. Assuming that the population of the study comprises all the Internet users in India, a massive sample size of more than 10,000 random respondents is being approached. Data is being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire with inputs from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then an aggregate of 15 indicators is used to measure the quality dimensions under consideration and the willingness of the respondent to use e-government services, on a five-point Likert scale. If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s) behind this. The last four questions in the survey are dedicated to collecting data on the demographic variables. Indication of the Major Findings: Based on the extensive literature review carried out to develop several propositions, a research model is proposed as a starting point. A major outcome expected at the completion of the study is the development of a research model that would help to understand the relationship involving the demographic variables, the service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings from the study in designing, updating, and promoting e-government services to enhance public participation, which, in turn, would help to improve efficiency, convenience, engagement, and transparency in implementing these services.Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions
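As an illustration of the two-step hierarchical procedure described above, a minimal sketch follows using Python's statsmodels: step one enters demographics only, step two adds quality dimensions, and the gain in R² measures the variance explained beyond the demographic baseline. The variable names and synthetic data are illustrative assumptions, not the study's dataset.

```python
# A minimal sketch of two-step hierarchical regression with statsmodels,
# on synthetic data standing in for the survey responses.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),              # demographic variables
    "education": rng.integers(1, 5, n),
    "reliability": rng.normal(3.5, 0.8, n),      # service quality dimensions
    "responsiveness": rng.normal(3.2, 0.9, n),
})
# Synthetic outcome: willingness to use e-government services
df["willingness"] = (0.02 * df["age"] + 0.3 * df["education"]
                     + 0.5 * df["reliability"] + 0.4 * df["responsiveness"]
                     + rng.normal(0, 1, n))

# Step 1: demographics only
X1 = sm.add_constant(df[["age", "education"]])
m1 = sm.OLS(df["willingness"], X1).fit()

# Step 2: demographics plus quality dimensions
X2 = sm.add_constant(df[["age", "education", "reliability", "responsiveness"]])
m2 = sm.OLS(df["willingness"], X2).fit()

# Incremental variance explained by the quality dimensions
print(f"R2 step 1: {m1.rsquared:.3f}")
print(f"R2 step 2: {m2.rsquared:.3f}  (delta R2 = {m2.rsquared - m1.rsquared:.3f})")
```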
Procedia PDF Downloads 267197 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth
Authors: Elham Soltani Dehnavi
Abstract:
In today’s compact cities, bringing daylighting and fresh air to buildings is a significant challenge, but it also presents opportunities to reduce energy consumption in buildings by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to their efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between the predicted and measured performance in buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates the spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations; it then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, in each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Some parameters are fixed according to building codes and standards, and the simulations are done in Seattle, USA. The simulation results are presented as evaluation grids using the thresholds for different metrics, such as Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus. According to the results, some metrics do not change much along the room depth, while others change significantly. We can therefore overlap these grids in order to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics’ thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other areas can be used for tasks that are not performed permanently or that need lower or higher amounts of daylight, where thermal comfort is less critical to user experience. The results can be compiled into a table to be used as a guideline by designers in the early stages of the design process.Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort
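The grid-overlapping step lends itself to a simple boolean-mask formulation: each metric raster passes or fails its threshold per pixel, and the comfort zone is the intersection of all passing masks. A minimal sketch follows with four of the metrics, using NumPy on synthetic grids; the thresholds and values are illustrative assumptions, not the paper's simulation outputs.

```python
# A minimal sketch of overlapping per-metric evaluation grids to find the
# comfort zone, assuming each metric was already simulated onto a common
# room-floor grid (e.g. exported from Grasshopper).
import numpy as np

rows, cols = 10, 20  # hypothetical analysis grid over the room floor plan
rng = np.random.default_rng(1)

# Hypothetical simulated results per grid cell
metrics = {
    "DA":  rng.uniform(0, 100, (rows, cols)),  # Daylight Autonomy [%]
    "ASE": rng.uniform(0, 500, (rows, cols)),  # Annual Sunlight Exposure [h]
    "DGP": rng.uniform(0, 1, (rows, cols)),    # Daylight Glare Probability
    "PPD": rng.uniform(0, 100, (rows, cols)),  # Predicted Percentage Dissatisfied [%]
}

# Pass/fail mask per metric against an illustrative threshold
passes = [
    metrics["DA"] >= 50,    # enough daylit occupied hours
    metrics["ASE"] <= 250,  # limit overlit hours
    metrics["DGP"] <= 0.40, # avoid disturbing glare
    metrics["PPD"] <= 10,   # thermal dissatisfaction under 10%
]

# Comfort zone = cells satisfying every metric threshold simultaneously
comfort_zone = np.logical_and.reduce(passes)
print(f"{comfort_zone.sum()} of {comfort_zone.size} cells qualify as comfort zone")
```

Extending the list to all 8 metrics reproduces the overlapped maps described in the abstract.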
Procedia PDF Downloads 183196 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami
Authors: Alejandra G. Barbón, José Vila, Dania Vazquez
Abstract:
This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographical description through the incorporation of visual interactive designs aimed to enrich the library users’ experience. In an era of heightened awareness about the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an exciting initiative to revitalize the study of Artists’ Books collections by employing digital visual representations to decolonize bibliographic records of some of the most unique materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, trending bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks to not only enhance bibliographic practices but also confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description. By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience
Procedia PDF Downloads 73195 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks
Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba
Abstract:
Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete’s movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete’s movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the Martin Luther King labs at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN
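To show how dilated convolutions slot into an encoder-decoder for heatmap-based pose estimation, a minimal PyTorch sketch follows. The layer widths, dilation rates, and 17-keypoint output are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch of a dilated-convolution encoder-decoder that maps an RGB
# frame to one heatmap per body joint.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        # Encoder: downsample twice while widening channels
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Dilated block: grows the receptive field to capture long-range
        # dependencies between joints without further downsampling
        self.dilated = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution, one heatmap per joint
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_keypoints, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.dilated(self.encoder(x)))

# One 256x256 RGB frame in, 17 joint heatmaps out
frames = torch.randn(1, 3, 256, 256)
heatmaps = PoseNet()(frames)
print(heatmaps.shape)  # torch.Size([1, 17, 256, 256])
```

Each joint's 2D position is then read off as the argmax of its heatmap, which is what makes the per-frame analysis of movement patterns possible.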
Procedia PDF Downloads 55194 The Debureaucratization Strategy for the Portuguese Health Service through Effective Communication
Authors: Fernando Araujo, Sandra Cardoso, Fátima Fonseca, Sandra Cavaca
Abstract:
A debureaucratization strategy for the Portuguese Health Service was adopted by the Executive Board of the SNS, in close articulation with the Shared Services of the Ministry of Health. Two of its main dimensions focused on sick leaves (SL), which turn primary health care (PHC) units into administrative institutions and limit patients' access. The self-declaration of illness (SDI) project, through the National Health Service Contact Centre (SNS24), began on May 1, 2023, and has already resulted in the issuance of more than 300,000 SDIs without the need to allocate resources from the National Health Service (NHS). This political decision allows each citizen, at most twice a year and for up to three days each time, to report their health condition in a dematerialized way, on their own responsibility, thereby justifying their absence from work, although under Portuguese law no salary is paid for these first three days. Using this digital approach, it is now feasible to justify an absence without going to a PHC unit and occupying its time only to obtain an SL. Through this measure, bureaucracy has been reduced, and the system has been focused on users, improving the lives of citizens and reducing the administrative burden on PHC, which now has more consultation time for users who need it. The second initiative, which began on March 1, 2024, allows SLs to be issued in emergency departments (ED) of public hospitals and in health institutions of the social and private sectors. This project allows a user who has suffered an acute urgent illness and has been observed in an ED of a public hospital or in a private or social entity to no longer need to go to PHC only to apply for the respective SL. Since March 1, 54,453 SLs have been issued: 242 in private or social sector institutions, 6,918 in public hospitals (of which 134 were in EDs), and 47,292 in PHC. This approach has proven to be technically robust, allows immediate resolution of problems, and differentiates the performance of doctors. However, it is important to continue to ensure the proper functioning of EDs, preventing non-urgent users from going there only to obtain an SL. Thus, in order to make better use of existing resources, this extension of issuance was operationalized in a balanced way, allowing SLs to be issued in hospital EDs only to critically ill patients or patients referred by INEM, SNS24, or PHC. In both cases, an intense public campaign was implemented to explain how the measures work and their benefits for patients. In satisfaction surveys, more than 95% of patients and doctors were satisfied with the solutions and asked for extensions to other areas. The administrative simplification agenda of the NHS continues its effective development. For the success of this debureaucratization agenda, the key factors are effective communication and the ability to reach patients and health professionals in order to increase health literacy and the correct use of the NHS.Keywords: debureaucratization strategy, self-declaration of illness, sick leaves, SNS24
Procedia PDF Downloads 71193 Online Guidance and Counselling Needs and Preferences of University Undergraduates in a Nigerian University
Authors: Olusegun F. Adebowale
Abstract:
Research has confirmed that the emergence of information technology is significantly reflected in the field of psychology and its related disciplines, due to its widespread use at a reasonable price and its user-friendliness. It is consequently affecting ordinary life in many areas, such as shopping, advertising, corresponding, and educating. Specifically, the innovations of computer technology have led to several new forms of communication, all with implications for, and applicability to, counselling and psychotherapy practices. This is the premise on which online counselling is based. Most institutions of higher learning in Nigeria have established their presence on the Internet and have deployed a variety of applications through ICT. Some are currently attempting to include counselling services in such applications, in the belief that many counselling needs of students are likely to be met. This study therefore explored the challenges and preferences students present in online counselling interactions in a given Nigerian university, with a view to guiding new universities that may want to invest in this area as to the necessary preparations and referral requirements. The study is a mixed-methods research project incorporating qualitative and quantitative methodologies to sample the preferences and concerns students express in online interaction. The sample comprised all 876 students who visited the university's online counselling platform voluntarily, by invitation, or by referral. The instrument for data collection was the online counselling platform of the university, 'OAU Online counsellors'. The period of data collection spanned January 2011 to October 2012. Data were analysed quantitatively (using percentages and the Mann-Whitney U test) and qualitatively (using Interpretative Phenomenological Analysis (IPA)). The results showed that students seem to prefer real-time chatting as their online medium of communicating with the online counsellor. The majority of students resorted to e-mail when their efforts to use real-time chatting were thwarted. Also, students preferred to enter into online counselling relationships voluntarily rather than through other modes of entry. The results further showed that the prevalent counselling needs presented by students during online counselling sessions were mainly in the areas of social interaction and academic/educational concerns. Academic concerns were prevalent, in the form of course offerings, studentship matters, and academic finance matters. The personal/social concerns were in the form of students' welfare, career-related concerns, and relationship matters. The study concludes that students' preferences include voluntary entry into online counselling, communication by real-time chatting, and a specific focus on their academic concerns. It also recommends that all efforts should be made to encourage students' voluntary entry into online counselling through reliable and stable Internet infrastructure able to support real-time chatting.Keywords: online, counselling, needs, preferences
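For the quantitative strand, the Mann-Whitney U test compares two independent groups without assuming normality, which suits ordinal survey ratings. A minimal sketch follows using SciPy; the two groups and their ratings are invented for illustration and are not the study's data.

```python
# A minimal sketch of a Mann-Whitney U comparison between two groups of
# ordinal ratings, as might arise when comparing communication media.
from scipy.stats import mannwhitneyu

# Hypothetical satisfaction ratings (1-5) by medium of communication
chat_users = [5, 4, 5, 3, 4, 5, 4, 4, 5, 3]
email_users = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]

stat, p = mannwhitneyu(chat_users, email_users, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 would suggest the groups differ
```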
Procedia PDF Downloads 290192 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans
Authors: Rene Hellmuth
Abstract:
Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (volatility, uncertainty, complexity and ambiguity) lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository for reacting quickly to changes. Use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect and given the industrial requirement, three data quality factors are particularly important for this paper regarding the BIM model: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied in a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for carrying out cost-effective and time-saving updating processes. The availability of low-cost hardware and the simplicity of the process are important for enabling service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and its insertion into the overall digital twin. Finally, an overview of the possibilities for visualizations suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.Keywords: building information modeling, digital factory model, factory planning, restructuring
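The change-detection step of such an update process can be illustrated with point-cloud comparison: a new partial scan of the rebuilt area is compared against the existing scan, and points beyond a distance threshold are flagged as changed geometry. A minimal sketch follows using the open-source Open3D library; the file names and the 5 cm threshold are illustrative assumptions, not the paper's procedure.

```python
# A minimal sketch of laser-scan change detection with Open3D: flag points of
# a new scan that lie farther than a threshold from the existing scan.
import numpy as np
import open3d as o3d

# Hypothetical scan files: the existing model and a new scan of the rebuilt area
old_scan = o3d.io.read_point_cloud("factory_scan_2022.ply")
new_scan = o3d.io.read_point_cloud("rebuilt_area_2023.ply")

# Distance from each new point to the nearest point of the old scan
distances = np.asarray(new_scan.compute_point_cloud_distance(old_scan))

# Points farther than 5 cm from the old scan are treated as changed geometry
changed = new_scan.select_by_index(np.where(distances > 0.05)[0])
print(f"{len(changed.points)} of {len(new_scan.points)} points flagged as changed")

# The changed region can then be meshed and inserted into the digital twin
o3d.io.write_point_cloud("changed_region.ply", changed)
```

Keeping this step scriptable is what makes it realistic for facility management staff, rather than surveying specialists, to keep the digital twin current.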
Procedia PDF Downloads 114