Search results for: virtual grid
1300 Evaluating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management
Authors: Mirindi Derrick, Mirindi Frederic, Oluwakemi Oshineye
Abstract:
The ongoing high rate of construction project failures worldwide is often blamed on the difficulties of managing stakeholders. This highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. This work specifically focuses on the impact of evolving digital tools, such as Project Management Software (PMS) (e.g., Basecamp and Wrike), Building Information Modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), Virtual and Augmented Reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and web-based platforms, in improving stakeholder engagement and project outcomes. Through a review of the existing literature, including examples of failed projects, the study highlights how evolving digital tools can serve as facilitators within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows that mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.
Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software
Procedia PDF Downloads 50
1299 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load
Authors: Sanjin Kršćanski, Josip Brnić
Abstract:
The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and as such can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used to calculate stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node, and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper compares the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good agreement with results obtained from the aforementioned analytical expressions.
Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending
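As a concrete illustration of the crack-closure arithmetic underlying VCCT, the sketch below computes a mode-I energy release rate and SIF from the nodal force at the crack tip and the opening displacement one node behind it. All numerical values are illustrative assumptions, not data from the paper, and the one-step formula shown is the textbook 2D form rather than the authors' algorithm.

```python
import math

# One-step VCCT sketch for a 2D mode-I crack (4-node elements).
# All values are illustrative assumptions.
E = 210e9      # Young's modulus, Pa (typical structural steel)
t = 0.01       # specimen thickness, m
da = 0.5e-3    # crack-tip element length = virtual crack advance, m
F_y = 1.2e3    # nodal force holding the crack closed at the tip, N
dv = 4.0e-6    # crack-opening displacement one node behind the tip, m

# Mode-I energy release rate from the crack-closure integral
G_I = F_y * dv / (2.0 * da * t)

# Plane-stress conversion to a stress intensity factor
K_I = math.sqrt(E * G_I)
print(f"G_I = {G_I:.1f} J/m^2, K_I = {K_I / 1e6:.2f} MPa*m^0.5")
```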
Procedia PDF Downloads 305
1298 Identification of Peroxisome Proliferator-Activated Receptors α/γ Dual Agonists for Treatment of Metabolic Disorders, Insilico Screening, and Molecular Dynamics Simulation
Authors: Virendra Nath, Vipin Kumar
Abstract:
Background: Type II diabetes mellitus is a foremost health problem worldwide, predisposing to increased mortality and morbidity. Undesirable effects of the current medications have prompted researchers to develop more potent drug(s) against the disease. The peroxisome proliferator-activated receptors (PPARs) are members of the nuclear receptor family and play a vital role in the regulation of metabolic equilibrium. They can induce or repress genes associated with adipogenesis and lipid and glucose metabolism. Aims: PPARα/γ agonistic hits were screened by hierarchical virtual screening, followed by molecular dynamics simulation and knowledge-based structure-activity relationship (SAR) analysis using approved PPARα/γ dual agonists. Methods: The PPARα/γ agonistic activity of compounds was searched using Maestro through structure-based virtual screening and molecular dynamics (MD) simulation. Virtual screening of nuclear-receptor ligands was performed, and the binding modes and protein-ligand interactions of newer entity(s) were investigated. Further, binding energy prediction and stability studies using MD simulation of the PPARα and γ complexes were performed with the most promising hit, along with a structural comparative analysis of approved PPARα/γ agonists against the screened hit for knowledge-based SAR. Results and Discussion: The in silico approach identified the nine most promising hits, which had better predicted binding energies than the reference drug compound (tesaglitazar). In this study, the key amino acid residues of the binding pockets of both targets, PPARα and PPARγ, were identified as essential and were found to participate in the key interactions with the most promising dual hit (ChemDiv-3269-0443). Stability studies using MD simulation of the PPARα and γ complexes with this hit found the root mean square deviation (RMSD) stable at around 2 Å and 2.1 Å, respectively. Frequency distribution data also revealed that the key residues of both proteins showed maximum contacts with the potent hit during the 20 nanosecond (ns) MD simulation. Knowledge-based SAR studies of PPARα/γ agonists were carried out using the 2D structures of approved drugs such as aleglitazar and tesaglitazar to support the design and synthesis of PPARγ agonistic candidates with anti-hyperlipidemic potential.
Keywords: computational, diabetes, PPAR, simulation
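The stability criterion quoted above (an RMSD plateau near 2 Å across a 20 ns trajectory) reduces to a simple per-frame computation. The sketch below shows it with synthetic, pre-aligned coordinates standing in for a real protein-ligand trajectory; the frame count and noise level are assumptions chosen only to produce a plateau of roughly that size.

```python
import numpy as np

# Sketch: RMSD of a complex along an MD trajectory, the stability metric
# quoted in the abstract. Synthetic coordinates stand in for frames that
# have already been aligned to the reference (frame 0).
rng = np.random.default_rng(0)
n_frames, n_atoms = 200, 500                     # e.g., 20 ns at 0.1 ns/frame
ref = rng.normal(size=(n_atoms, 3))              # reference coordinates, angstrom
traj = ref + rng.normal(scale=1.2, size=(n_frames, n_atoms, 3))

def rmsd(frame, ref):
    # Root-mean-square deviation over all atoms (frames assumed pre-aligned)
    return np.sqrt(((frame - ref) ** 2).sum(axis=-1).mean())

series = np.array([rmsd(f, ref) for f in traj])
print(f"mean RMSD = {series.mean():.2f} A, "
      f"drift = {series[-50:].mean() - series[:50].mean():+.2f} A")
```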
Procedia PDF Downloads 103
1297 Nighttime Power Generation Using Thermoelectric Devices
Authors: Abdulrahman Alajlan
Abstract:
While the sun serves as a robust energy source, the frigid conditions of outer space present promising prospects for nocturnal power generation, owing to their continuous accessibility during nighttime hours. This investigation demonstrates an efficient methodology for uninterrupted energy capture throughout the day. The method employs water-based heat storage and radiative thermal emitters implemented across thermoelectric devices. Remarkably, this approach permits nighttime power generation exceeding 1 Wm-2, which is unattainable with alternative methodologies. Outdoor experiments conducted at the King Abdulaziz City for Science and Technology (KACST) have demonstrated unparalleled performance, surpassing prior experimental benchmarks by nearly an order of magnitude. Furthermore, the developed device can concurrently supply power to multiple light-emitting diodes, showcasing practical applications for nighttime power generation. This research opens opportunities for the creation of scalable and efficient 24-hour power generation systems based on thermoelectric devices. Central findings of this study include the realization of continuous 24-hour power generation from clean and sustainable energy sources. Theoretical analyses indicate a potential for nighttime power generation of up to 1 Wm-2, while experiments have reached a nighttime power density of 0.5 Wm-2. Additionally, the efficiency of multiple light-emitting diodes (LEDs) was evaluated when powered by the nighttime output of the integrated thermoelectric generator (TEG). This methodology therefore shows promise for practical applications, particularly in lighting, marking a pivotal advancement in the utilization of renewable energy in both on-grid and off-grid scenarios.
Keywords: nighttime power generation, thermoelectric devices, radiative cooling, thermal management
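As a rough plausibility check on power densities of this order, one can budget the nighttime output from the Stefan-Boltzmann law and a small fraction of the Carnot efficiency. Every temperature, emissivity, and efficiency figure below is an assumption for illustration, not a measurement from the KACST experiments.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Back-of-envelope nighttime power budget (all values illustrative).
eps    = 0.95      # emissivity of the sky-facing radiative emitter
T_hot  = 300.0     # water heat-storage side (daytime heat), K
T_cold = 285.0     # radiatively cooled emitter side, K
T_sky  = 260.0     # effective clear-sky temperature, K

# Heat flux the emitter can reject to the sky per unit area
q_rad = eps * SIGMA * (T_cold**4 - T_sky**4)

# A TEG converts a small fraction of the heat flowing across it;
# take ~5% of the Carnot limit as a rough device efficiency.
eta = 0.05 * (1.0 - T_cold / T_hot)
p_out = eta * q_rad
print(f"radiated {q_rad:.1f} W/m^2 -> ~{p_out * 1000:.0f} mW/m^2 electrical")
```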
Procedia PDF Downloads 60
1296 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method
Authors: Mamidi Ramakrishna Rao
Abstract:
The doubly-fed induction generator (DFIG) is currently the choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, and short-circuit fault conditions. Further, many countries, such as Canada, Germany, the UK, and Scotland, have distinct grid codes relating to wind turbines. Accordingly, following network faults, wind turbines have to supply a definite reactive current. To satisfy the requirements, including reactive current capability, an optimum electrical design becomes a mandate for the DFIG to function. This paper optimizes the equivalent circuit parameters of an electrical design for satisfactory DFIG performance. The direct search method has been used for optimization of the parameters. The variables selected include electromagnetic core dimensions (diameters and stack length), slot dimensions, the radial air gap between stator and rotor, and the winding copper cross-section area. Optimization for a 2 MW DFIG has been executed separately for three objective functions: maximum reactive power capability (Case I), maximum efficiency (Case II), and minimum weight (Case III). In the optimization analysis program, voltage variations (10%), leading and lagging power factor (0.95), and speeds corresponding to slips of -0.3 to +0.3 have been considered. The optimum designs obtained for the three objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, whether efficiency, reactive power capability, or weight minimization.
Keywords: direct search, DFIG, equivalent circuit parameters, optimization
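Direct search, as used here, needs no gradients: it probes each design variable in both directions and halves the step when no probe improves the objective. Below is a minimal compass-search sketch; the quadratic objective is a placeholder for the DFIG equivalent-circuit evaluation (efficiency, reactive capability, or weight), and all names and values are hypothetical.

```python
import numpy as np

def objective(x):
    # Placeholder for the DFIG design evaluation computed from the
    # equivalent-circuit parameters. Dummy minimum at (1, 2, 3).
    return ((x - np.array([1.0, 2.0, 3.0])) ** 2).sum()

def compass_search(f, x0, step=0.5, tol=1e-4, max_iter=10_000):
    """Derivative-free direct search: probe +/- each coordinate,
    move on improvement, halve the step when nothing improves."""
    x, fx = np.asarray(x0, float), f(np.asarray(x0, float))
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

x_opt, f_opt = compass_search(objective, x0=[0.0, 0.0, 0.0])
print(x_opt.round(4), f"f = {f_opt:.2e}")
```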
Procedia PDF Downloads 256
1295 The Role of Virtual Reality in Mediating the Vulnerability of Distant Suffering: Distance, Agency, and the Hierarchies of Human Life
Authors: Z. Xu
Abstract:
Immersive virtual reality (VR) has gained momentum in humanitarian communication due to its utopian promises of co-presence, immediacy, and transcendence. These potential benefits have led the United Nations (UN) to tirelessly produce and distribute VR series to evoke global empathy and encourage policymakers, philanthropic business tycoons, and citizens around the world to actually do something (e.g., give a donation). However, it is unclear whether or not VR can cultivate cosmopolitans with a sense of social responsibility towards the geographically, socially/culturally, and morally mediated misfortune of faraway others. Drawing upon existing work on the mediation of distant suffering, this article constructs an analytical framework to articulate the issue. Applying this framework in a case study of five of the UN’s VR pieces, the article identifies three paradoxes that exist between cyber-utopian and cyber-dystopian narratives. In the “paradox of distance”, VR relies on the notions of “presence” and “storyliving” to implicitly link audiences spatially and temporally to distant suffering, creating global connectivity and reducing perceived distances between audiences and others; yet it also enables audiences to fully occupy the point of view of distant sufferers (creating too close, absolute proximity), which may lead them into naive self-righteousness or narcissism in their pleasures and desires, thereby destroying the “proper distance”. In the “paradox of agency”, VR simulates a superficially “real” encounter for visual intimacy, thereby establishing an “audience–beneficiary” relationship in humanitarian communication; yet in this case the mediated hyperreality is not an authentic reality, and its simulation does not fill the gap between reality and the virtual world. In the “paradox of the hierarchies of human life”, VR enables an audience to experience virtually fundamental “freedom”, epitomizing an attitude of cultural relativism that informs a great deal of contemporary multiculturalism and providing vast possibilities for a more egalitarian representation of distant sufferers; yet it also takes the spectator’s personal empathic feelings as the focus of intervention, rather than structural inequality and political exclusion (the economic and political power relations of viewing). Thus, the audience can potentially remain trapped within the minefield of hegemonic humanitarianism. This study is significant in two respects. First, it advances the digital turn in studies of media and morality in the polymedia milieu; it is motivated by the necessary call for a move beyond traditional technological environments to arrive at a more novel understanding of the asymmetry of power between the safety of spectators and the vulnerability of mediated sufferers. Second, it not only reminds humanitarian journalists and NGOs that they should not rely entirely on the richer news experience or powerful response-ability enabled by VR to gain a “moral bond” with distant sufferers, but also argues that when fully-fledged VR technology is developed, it can serve as a kind of alchemy and should not be underestimated merely as a “bugaboo” of an alarmist philosophical and fictional dystopia.
Keywords: audience, cosmopolitan, distant suffering, virtual reality, humanitarian communication
Procedia PDF Downloads 143
1294 Accuracy of Peak Demand Estimates for Office Buildings Using Quick Energy Simulation Tool
Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett
Abstract:
The New Jersey Department of Military and Veterans Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, U.S. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency against electric grid disruption. These competing goals necessitate adopting on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics. One building also uses natural gas. While electricity use data are available in hourly intervals and natural gas data are available in monthly intervals, the simulations were developed using monthly and yearly totals. This approach was chosen to reflect the information available for most NJ DMAVA facilities. Once completed, the simulation results were compared to metrics recommended by several organizations to validate energy use simulations. In addition to yearly and monthly totals, the simulated peak demands were compared to actual monthly peak demand values. The simulations resulted in monthly peak demand values that were within 30% of the measured values. These benchmarks will help assess future energy planning efforts for NJ DMAVA.
Keywords: building energy modeling, eQUEST, peak demand, smart meters
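The 30% peak-demand benchmark reduces to a per-month percent-error check between metered and simulated peaks. The sketch below illustrates the check with invented numbers; the actual NJ DMAVA meter data are not given in the abstract.

```python
# Quick check of simulated vs. metered monthly peak demand (values in kW).
# The numbers below are illustrative, not NJ DMAVA data.
measured  = [310, 295, 330, 360, 420, 510, 560, 548, 470, 380, 330, 315]
simulated = [280, 300, 310, 390, 460, 565, 600, 590, 430, 350, 300, 345]

errors = [(s - m) / m * 100 for m, s in zip(measured, simulated)]
worst = max(errors, key=abs)
print("monthly peak error (%):", [f"{e:+.1f}" for e in errors])
print(f"worst month: {worst:+.1f}%  -> within 30%: {abs(worst) <= 30}")
```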
Procedia PDF Downloads 68
1293 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf
Authors: Abderrazak Bannari, Ghadeer Kadhem
Abstract:
The objectives of this paper are the validation and evaluation of the performance of MBES-CARIS BASE surface data for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline, and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00’ N, 50° 33’ E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). They were then brought into ArcGIS and converted into raster format following five steps: (1) exportation of the GEBCO BASE surface data to an ASCII file; (2) conversion of the ASCII file to a point shapefile; (3) extraction of the points covering the water boundary of the Kingdom of Bahrain; (4) multiplication of the depth values by -1 to obtain negative values; and (5) interpolation with the simple Kriging method in the ArcMap environment to generate a new raster bathymetric grid surface of 30 x 30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2,200 bathymetric points were extracted from a medium-scale nautical map (1:100,000), covering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced, and overlaid on the raster bathymetric grid surface generated from the MBES-CARIS data (step 5 above), and homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation coefficient (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and the derived MBES-CARIS depths when considering only the shallow areas with depths of less than 10 m (about 800 validation points). Considering only the deeper areas (> 10 m), the correlation coefficient is 0.73 and the RMSE is ± 2.43 m, while over the totality of the 2,200 validation points, including all depths, the correlation coefficient is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). This variation is likely caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and a rough seafloor probably affect the acquired raw MBES data, and the interpolation of missing values between MBES acquisition swath lines (ship-track sounding data) may not reflect the true depths of these areas. However, globally, the MBES-CARIS data are very appropriate for bathymetric mapping of shallow water areas.
Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water
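The depth-stratified validation described above (R² and RMSE for shallow, deep, and all points) can be reproduced with a few lines of NumPy. The arrays below are synthetic stand-ins whose error grows with depth, mimicking the reported pattern; none of this is the Bahrain data.

```python
import numpy as np

# Sketch of a depth-stratified validation: compare chart depths with
# interpolated grid depths and report R^2 and RMSE per depth class.
rng = np.random.default_rng(1)
chart = rng.uniform(1, 40, size=2200)                  # "true" chart depths, m
grid = chart + rng.normal(scale=0.5 + 0.05 * chart)    # error grows with depth

def r2_rmse(y, yhat):
    resid = y - yhat
    ss_res, ss_tot = (resid ** 2).sum(), ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot, np.sqrt((resid ** 2).mean())

for label, mask in [("shallow (<10 m)", chart < 10),
                    ("deep (>10 m)", chart >= 10),
                    ("all depths", slice(None))]:
    r2, rmse = r2_rmse(chart[mask], grid[mask])
    print(f"{label:15s} R^2 = {r2:.2f}, RMSE = {rmse:.2f} m")
```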
Procedia PDF Downloads 381
1292 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology
Authors: A. Anastasiou, K. S. Tingay
Abstract:
Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of so many national and international initiatives, it is becoming increasingly difficult for research teams to locate the real-world datasets that are most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which each institution exists, as well as a category of the main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles, consisting of 9,885 distinct affiliations with correspondence in GRID. Of these articles, 760 were from EU countries, and 390 of these were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data has broad population coverage. Collaborations with industry or government may exclude healthcare institutes that have embargos or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important, both for ensuring the validity of results and for the economy of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method has not yet established whether these healthcare institutes are holding data, or merely publishing on the topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest.
Keywords: data reuse, data discovery, data linkage, journal articles, text mining
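A minimal sketch of the affiliation-filtering step is shown below: parse PubMed XML, match affiliations against a GRID-like lookup, and keep articles with at least one author at an EU healthcare institute. The in-memory GRID dictionary and sample record are toy assumptions; the real pipeline resolves free-text affiliations against the full GRID database inside the bibInsight platform.

```python
import xml.etree.ElementTree as ET

# Toy GRID lookup: affiliation substring -> (country, institute type).
GRID = {
    "University Hospital Heidelberg": ("Germany", "Healthcare"),
    "Karolinska Institutet": ("Sweden", "Academic"),
}
EU = {"Germany", "Sweden", "France", "Italy", "Spain"}  # abbreviated list

def is_valid(article_xml: str) -> bool:
    """Keep an article if any affiliation maps to an EU healthcare institute."""
    root = ET.fromstring(article_xml)
    for aff in root.iter("Affiliation"):
        for name, (country, kind) in GRID.items():
            if name in (aff.text or "") and country in EU and kind == "Healthcare":
                return True
    return False

sample = """<PubmedArticle><AuthorList><Author>
  <AffiliationInfo><Affiliation>University Hospital Heidelberg, Germany</Affiliation></AffiliationInfo>
</Author></AuthorList></PubmedArticle>"""
print(is_valid(sample))  # True
```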
Procedia PDF Downloads 115
1291 Global Healthcare Village Based on Mobile Cloud Computing
Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar
Abstract:
Cloud computing, the use of hardware and software delivered as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centers prompt the need for an efficient scheme that makes health data available with low response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models, comprising country, continent, and global health clouds, to help solve the problem mentioned above. In the continent model, two data centers are created, one local and one global. The local data center serves requests originating within the continent, whereas the global one serves the requests of all others. With the methods adopted, the availability of relevant medical data to patients, specialists, and emergency staff is assured regardless of location and time. From our intensive experiments using a simulation approach, it was observed that a service broker policy optimized for response time yields very good performance in terms of response time reduction. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines); the proportional increase in response time is within 9%. The results obtained from our simulation experiments show that utilizing MGHV leads to a reduction in health care expenditures and helps solve the problem of unqualified medical staff faced by both developed and developing countries.
Keywords: cloud computing (MCC), e-healthcare, availability, response time, service broker policy
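The continent model's routing can be pictured as a proximity-style service broker: requests are served by the local data center when one covers their region, and by the global one otherwise. The sketch below is a toy illustration with assumed latencies, not the simulation framework used in the paper.

```python
import random

# Toy proximity-style service broker. Latencies are illustrative.
DATACENTERS = {"EU-local": "Europe", "global": None}   # None = serves anyone
LATENCY_MS = {"EU-local": 40, "global": 180}

def broker(request_region: str) -> str:
    """Pick the lowest-latency data center allowed to serve the region."""
    eligible = [dc for dc, region in DATACENTERS.items()
                if region in (request_region, None)]
    return min(eligible, key=LATENCY_MS.get)

random.seed(0)
for region in random.choices(["Europe", "Africa", "Asia"], k=5):
    print(f"{region:7s} -> {broker(region)}")
```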
Procedia PDF Downloads 377
1290 When Digital Innovation Augments Cultural Heritage: An Innovation from Tradition Story
Authors: Danilo Pesce, Emilio Paolucci, Mariolina Affatato
Abstract:
Looking at the future and at the post-digital era, innovations commonly tend to dismiss the old and replace it with the new. The aim of this research is to study the role that digital innovation can play alongside the information chain within traditional sectors and the subsequent value creation opportunities that actors and stakeholders can exploit. By drawing on a wide body of literature on innovation and strategic management, and by conducting a case study in the cultural heritage industry, namely Google Arts & Culture, this study shows that technology augments, complements, and amplifies the way people experience their cultural interests. Furthermore, the study shows a process of democratization of art, since museums can exploit new digital and virtual ways to distribute art globally. Moreover, new needs arose from the 2020 pandemic, which hit and forced the world into a state of cultural fasting and caused a radical transformation of the online vs. onsite paradigm. Finally, the study highlights the capabilities that are emerging at different stages of the value chain, owing to the technological innovation available in the market. In essence, this research underlines the role of Google in allowing museums to reach users worldwide, thus unlocking new mechanisms of value creation in the cultural heritage industry. Likewise, this study points out how Google provides value to users by increasing the provision of artworks, improving audience engagement and the virtual experience, and providing new ways to access online content. The paper ends with a discussion of managerial and policy-making implications.
Keywords: big data, digital platforms, digital transformation, digitization, Google Arts and Culture, stakeholders’ interests
Procedia PDF Downloads 157
1289 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
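The a1/b1, a2/b2 example above amounts to round-robin co-location of paired instances of communicating services. A toy sketch of that placement logic follows; the instance and machine names come from the abstract, while the helper function itself is hypothetical (the actual prototype builds on i2kit).

```python
from itertools import cycle

# Embedding sketch: co-locate paired instances of communicating
# microservices A and B so each pair talks over localhost and needs
# no load balancer between A and B.
services = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
machines = ["m1", "m2"]

def embed(services, machines):
    """Round-robin paired instances of communicating services onto machines."""
    placement = {m: [] for m in machines}
    slots = cycle(machines)
    for pair in zip(*services.values()):   # (a1, b1), (a2, b2)
        placement[next(slots)].extend(pair)
    return placement

print(embed(services, machines))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```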
Procedia PDF Downloads 203
1288 Scenario-Based Learning Using Virtual Optometrist Applications
Authors: J. S. M. Yang, G. E. T. Chua
Abstract:
The Diploma in Optometry (OPT) course is a three-year program offered by Ngee Ann Polytechnic (NP) to train students to provide primary eye care. Students are equipped with foundational conceptual knowledge and practical skills in the first three semesters, before clinical modules in the fourth to sixth semesters. In the clinical modules, students typically have difficulty integrating the knowledge and skills acquired in past semesters to perform general eye examinations on public patients at the NP Optometry Centre (NPOC). To help students overcome this challenge, a web-based game, Virtual Optometrist (VO), was developed to help them apply their skills and knowledge through scenario-based learning. It consists of two interfaces, the Optical Practice Counter (OPC) and the Optometric Consultation Room (OCR), which provide two simulated settings for authentic learning experiences. In the OPC, students recommend and provide appropriate frame and lens selections based on a virtual patient's case history. In the OCR, students diagnose and manage virtual patients with common ocular conditions. Simulated scenarios provide real-world clinical situations that require contextual application of integrated knowledge from relevant modules. The stages in the OPC and OCR are of increasing complexity, aligned to the clinical competency expected of students as they progress to more senior semesters. This prevented gameplay fatigue as VO was used over the semesters to achieve different learning outcomes. Numerous feedback opportunities were provided to students based on their decisions, allowing individualized learning to take place. The game-based learning element in VO was achieved through a scoreboard and leaderboard to enhance students' motivation to perform. Scores were based on the speed and accuracy of students' responses to the questions posed in the simulated scenarios, preparing students to perform accurately and effectively under time pressure in a realistic optometric environment. Learning analytics were generated in VO's back-end office based on students' responses, offering real-time data on distinctive and observable learner behaviors to monitor students' engagement and learning progress. The back-end office allowed the versatility to add, edit, and delete scenarios for different intended learning outcomes. A Likert scale was used to measure the learning experience with VO for OPT Year 2 and 3 students. The survey results highlighted the learning benefits of implementing VO in the different modules, such as enhancing recall and reinforcement of clinical knowledge for contextual application to develop higher-order thinking skills, increasing efficiency in clinical decision-making, facilitating learning through immediate feedback and second attempts, providing exposure to common and significant ocular conditions, and training effective communication skills. The results showed that VO has been useful in reinforcing optometry students' learning and supporting the development of higher-order thinking, increasing efficiency in clinical decision-making, and allowing students to learn from their mistakes with immediate feedback and second attempts. VO also exposed students to diverse ocular conditions through simulated real-world clinical scenarios, which may otherwise not be encountered at the NPOC, and promoted effective communication skills.
Keywords: authentic learning, game-based learning, scenario-based learning, simulated clinical scenarios
Procedia PDF Downloads 117
1287 A Research Study of the Inclusiveness of VR Headsets for Higher Education
Authors: Fredrick Forster, Gareth Ward, Matthew Tubby, Pamela Lithgow, Anne Nortcliffe
Abstract:
This paper presents the results of a research study in which random adult participants used one of four different commercially available Virtual Reality (VR) Head Mounted Displays (HMDs) and completed a post-user-experience reflection questionnaire. The research sought to understand how inclusive commercially available VR HMDs are and to identify any associated barriers that could impact the widespread adoption of the devices, specifically in Higher Education (HE). In the UK, education providers are legally required under the Equality Act 2010 to ensure all education facilities are inclusive and that reasonable adjustments can be applied appropriately. The research specifically aimed to identify the considerations that academics and learning technologists need to make when adopting commercial VR HMDs in HE classrooms, namely cybersickness, user comfort, interpupillary distance, inclusiveness, and user perceptions of VR. The research approach was designed to build upon previously published research on user reflections on presence, usability, and overall HMD comfort, using quantitative and qualitative research methods by way of a questionnaire. The quantitative data included the recording of physical characteristics such as the distance between the eye pupils, known as Interpupillary Distance (IPD). VR HMDs require each user's IPD measurement to focus the output of the HMD's virtual camera at the right position in front of the user's eyes. In addition, the questionnaire captured users' qualitative reflections and evaluations of the broader accessibility characteristics of the VR HMDs. The initial research activity was accomplished by enabling a random sample of visitors, staff, and students at Canterbury Christ Church University, Kent, to use a VR HMD for a set period of time and asking them to complete the post-user-experience questionnaire. The study identified that there is little correlation between users who experience cybersickness and those who experience car sickness. Also, users with a smaller-than-average IPD (typically associated with females) were able to use the VR HMDs successfully; however, users with a larger-than-average IPD reported an impeded experience. This indicates reduced inclusiveness of the tested VR HMDs for users with a higher-than-average IPD, which is typically associated with males of certain ethnicities. As action research in education, these initial findings will be used to refine the research method and conduct further investigations, with the aim of verifying and validating the accessibility of current commercial VR HMDs. The conference presentation will report the research results of the initial study and of subsequent follow-up studies with a larger variety of adult volunteers.
Keywords: virtual reality, education technology, inclusive technology, higher education
Procedia PDF Downloads 68
1286 A Systematic Review of Business Strategies Which Can Make District Heating a Platform for Sustainable Development of Other Sectors
Authors: Louise Ödlund, Danica Djuric Ilic
Abstract:
Sustainable development includes many challenges related to energy use, such as (1) developing flexibility on the demand side of electricity systems due to an increased share of intermittent electricity sources (e.g., wind and solar power), (2) overcoming economic challenges related to an increased share of renewable energy in the transport sector, (3) increasing the efficiency of biomass use, and (4) increasing the utilization of industrial excess heat (approximately two thirds of the energy currently used in the EU is lost in the form of excess and waste heat). The European Commission has recognized DH technology as being of essential importance for reaching sustainability. Flexibility in the fuel mix, together with the possibilities of industrial waste heat utilization, combined heat and power (CHP) production, and energy recovery through waste incineration, are only some of the benefits that characterize DH technology. The aim of this study is to provide an overview of the possible business strategies that would enable DH to play an important role in future sustainable energy systems. The methodology used in this study is a systematic literature review. The study takes a systematic approach in which DH is seen as part of an integrated system that also comprises the transport, industrial, and electricity sectors. DH technology can play a decisive role in overcoming the sustainability challenges related to our energy use. The introduction of biofuels in the transport sector can be facilitated by integrating biofuel and DH production in local DH systems. This would enable the development of local biofuel supply chains and reduce biofuel production costs. In this way, DH can also promote the development of biofuel production technologies that are not yet mature. Converting the energy used for running industrial processes from fossil fuels and electricity to DH (above all biomass- and waste-based DH), and delivering excess heat from industrial processes to local DH systems, would make industry less dependent on fossil fuels and fossil fuel-based electricity, while increasing the energy efficiency of the industrial sector and reducing production costs. The electricity sector would also benefit from these measures. Reducing electricity use in the industrial sector while at the same time increasing CHP production in local DH systems would replace fossil-based electricity production with electricity from biomass- or waste-fueled CHP plants and reduce the capacity requirements on the national electricity grid (i.e., it would reduce the pressure on the bottlenecks in the grid). Furthermore, by operating their centrally controlled heat pumps and CHP plants in response to variations in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national electricity grid.
Keywords: energy system, district heating, sustainable business strategies, sustainable development
Procedia PDF Downloads 169
1285 Design and Implementation of Collaborative Editing System Based on Physical Simulation Engine Running State
Authors: Zhang Songning, Guan Zheng, Ci Yan, Ding Gangyi
Abstract:
The application of physical simulation engines in collaborative editing systems has an important background and role. First, physical simulation engines can provide real-world physical simulations, enabling users to interact and collaborate in real time in virtual environments. This provides a more intuitive and immersive experience for collaborative editing systems, allowing users to more accurately perceive and understand the various elements and operations in collaborative editing. Second, through physical simulation engines, different users can share a virtual space and perform real-time collaborative editing within it. This real-time sharing and collaborative editing method helps to synchronize information among team members and improves the efficiency of collaborative work. In experiments, the average single-user model transmission speed of the collaborative editing system increased by 141.91%; the average single-user model processing speed increased by 134.2%; the average single-user processing flow rate increased by 175.19%; and the overall single-user efficiency improved by 150.43%. As the number of users increases, the overall efficiency remains stable, and the collaborative editing system based on the physical simulation engine's running state also scales horizontally. It is not difficult to see that the design and implementation of a collaborative editing system based on physical simulation engines not only enriches the user experience but also optimizes the effectiveness of team collaboration, providing new possibilities for collaborative work.
Keywords: physics engine, simulation technology, collaborative editing, system design, data transmission
Procedia PDF Downloads 86
1284 Performance Comparison of Droop Control Methods for Parallel Inverters in Microgrid
Authors: Ahmed Ismail, Mustafa Baysal
Abstract:
Although the world's energy supply is still mainly based on fossil fuels, there is a need for alternative energy generation systems that are more economical and environmentally friendly, due to the continuously increasing demand for electric energy and limited power resources and networks. Distributed Energy Resources (DERs) such as fuel cells and wind and solar power have recently become widespread as alternative generation. The microgrid concept has been proposed to solve several problems that might be encountered when integrating DERs into the power system. A microgrid can operate in both grid-connected and island mode, to the benefit of both the utility and customers. Most distributed energy resources (DER) connected in parallel in the LV grid, such as micro-turbines, wind plants, fuel cells, and PV cells, generate electrical power as direct current (DC), which is converted to alternating current (AC) by inverters. The inverters are therefore primary components in a microgrid. There are many control techniques for parallel inverters to manage active and reactive sharing of the loads, some of which are based on the droop method. In the literature, studies usually focus on improving the transient performance of inverters. In this study, the performance of two different controllers based on the droop control method is compared for inverters operated in parallel without any communication feedback. To this end, a microgrid is designed in which the inverters are controlled by a conventional droop controller and a modified droop controller. The modified controller is obtained by adding a PID to the conventional droop control. The active and reactive power sharing performance and the voltage and frequency responses of these control methods are measured in several operational cases. The study cases have been simulated in MATLAB-SIMULINK.
Keywords: active and reactive power sharing, distributed generation, droop control, microgrid
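For readers unfamiliar with the droop method, the conventional controller ties frequency to active power and voltage to reactive power, f = f0 - kp(P - P0) and V = V0 - kq(Q - Q0); the modified controller studied here adds a PID term. The sketch below illustrates both at a single operating point; all gains and set-points are assumed values, not those of the simulated microgrid.

```python
# Conventional droop laws for a grid-forming inverter (illustrative gains):
#   f = f0 - kp * (P - P0)          active power vs. frequency
#   V = V0 - kq * (Q - Q0)          reactive power vs. voltage
f0, V0 = 50.0, 400.0          # nominal frequency (Hz) and voltage (V)
kp, kq = 1e-5, 1e-4           # droop gains (assumed values)
P0, Q0 = 0.0, 0.0             # power set-points, W / var

def droop(P, Q):
    return f0 - kp * (P - P0), V0 - kq * (Q - Q0)

# Modified droop as in the abstract: a PID term on the power error
# improves transient sharing (discrete form, placeholder gains).
class PID:
    def __init__(self, kp_, ki_, kd_, dt):
        self.kp_, self.ki_, self.kd_, self.dt = kp_, ki_, kd_, dt
        self.acc, self.prev = 0.0, 0.0
    def step(self, err):
        self.acc += err * self.dt
        d = (err - self.prev) / self.dt
        self.prev = err
        return self.kp_ * err + self.ki_ * self.acc + self.kd_ * d

pid = PID(2e-6, 1e-6, 0.0, dt=1e-3)
P_meas, Q_meas = 50e3, 10e3   # one operating point (assumed)
f_mod = f0 - kp * (P_meas - P0) - pid.step(P_meas - P0)
print(f"conventional: f = {droop(P_meas, Q_meas)[0]:.3f} Hz, "
      f"modified: f = {f_mod:.3f} Hz")
```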
Procedia PDF Downloads 592
1283 Estimate Robert Gordon University's Scope Three Emissions by Nearest Neighbor Analysis
Authors: Nayak Amar, Turner Naomi, Gobina Edward
Abstract:
Scottish Higher Education Institutes (HEIs) must report their scope 1 and 2 emissions, whereas reporting scope 3 is optional. Scope 3 covers indirect emissions, which embody a significant component of the total carbon footprint, and it is therefore important to record, measure, and report them accurately. Robert Gordon University (RGU) reported only business travel, grid transmission and distribution, water supply and transport, and recycling scope 3 emissions. This study estimates RGU's total scope 3 emissions by comparing it with an HEI of similar scale. The scope 3 emission reporting of sixteen Scottish HEIs was studied, and Glasgow Caledonian University was identified as the nearest neighbour by comparing its full-time-equivalent students, full-time-equivalent staff, research-teaching split, budget, and foundation year. Apart from the peer, data were also collected from the Higher Education Statistics Agency database. This study estimated RGU's scope 3 emissions from procurement, student and staff commuting, and international student trips. The results showed that RGU's reporting covered only 11% of its scope 3 emissions. The major contributors to scope 3 emissions were procurement (48%), student commuting (21%), international student trips (16%), and staff commuting (4%). The estimated scope 3 emissions were more than 14 times the reported emissions. This study has shown the relative importance of each scope 3 emission source, which gives HEIs a guideline on where to focus their attention to capture the maximum scope 3 emissions. Moreover, it has demonstrated that it is possible to estimate scope 3 emissions with limited data.
Keywords: HEI, university, emission calculations, scope 3 emissions, emissions reporting
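The category shares quoted above can be rolled up with simple arithmetic. The tonnages below are hypothetical and chosen only so that the reported categories cover 11% of the total, matching the coverage figure in the abstract; the category structure is the only thing taken from the study.

```python
# Roll-up of the scope 3 categories discussed in the abstract.
# Tonnages (tCO2e/year) are hypothetical; only the structure is from the text.
estimated = {
    "procurement": 4800,
    "student commuting": 2100,
    "international student trips": 1600,
    "staff commuting": 400,
    "reported categories (travel, T&D, water, recycling)": 1100,
}
total = sum(estimated.values())
reported = estimated["reported categories (travel, T&D, water, recycling)"]
for category, t in estimated.items():
    print(f"{category:52s} {t:6d} t ({t / total:5.1%})")
print(f"reported categories cover {reported / total:.0%} of the estimated total")
```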
Procedia PDF Downloads 100
1282 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES computes the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress, because the resolution of the SFS dynamics is insufficient. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. The exploration then extends to filter anisotropy, to address its impact on the SFS dynamics and on LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as the filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as the filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
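The core of a deconvolution-type SFS model is inverting the filter. A classic route is van Cittert iteration, u_{k+1} = u_k + (u_f - G * u_k), which converges wherever the filter transfer function stays between 0 and 1. The 1D sketch below demonstrates it for a Gaussian filter; it is a didactic stand-in, not the authors' DDM implementation, and the field and filter width are arbitrary choices.

```python
import numpy as np

# 1D sketch of filter inversion by van Cittert iteration: recover u
# from the filtered field u_f = G * u, with G a Gaussian filter.
N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi       # angular wavenumbers
Delta = 8 * (L / N)                              # filter width
G_hat = np.exp(-(k * Delta) ** 2 / 24.0)         # Gaussian transfer function

u = np.sin(x) + 0.5 * np.sin(7 * x) + 0.2 * np.sin(25 * x)
u_f = np.fft.ifft(G_hat * np.fft.fft(u)).real    # filtered field

u_rec = u_f.copy()
for _ in range(10):                              # van Cittert iterations
    u_rec = u_rec + (u_f - np.fft.ifft(G_hat * np.fft.fft(u_rec)).real)

err = lambda a: np.sqrt(np.mean((a - u) ** 2)) / np.sqrt(np.mean(u ** 2))
print(f"relative error: filtered {err(u_f):.3f} -> deconvolved {err(u_rec):.3f}")
```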
Procedia PDF Downloads 75
1281 In silico Designing of Imidazo [4,5-b] Pyridine as a Probable Lead for Potent Decaprenyl Phosphoryl-β-D-Ribose 2′-Epimerase (DprE1) Inhibitors as Antitubercular Agents
Authors: Jineetkumar Gawad, Chandrakant Bonde
Abstract:
Tuberculosis (TB) is a major worldwide concern whose control has been exacerbated by HIV and the rise of multidrug-resistant (MDR-TB) and extensively drug-resistant (XDR-TB) strains of Mycobacterium tuberculosis. The interest in newer and faster-acting antitubercular drugs is greater than ever, and the search for potent compounds is both a need and a challenge for researchers. Here, we tried to design a lead for the inhibition of the decaprenylphosphoryl-β-D-ribose 2′-epimerase (DprE1) enzyme. Arabinose is an essential constituent of the mycobacterial cell wall. DprE1 is a flavoenzyme that converts decaprenylphosphoryl-D-ribose into decaprenylphosphoryl-2-keto-ribose, an intermediate in the biosynthetic pathway of arabinose; DprE2 then converts the keto-ribose into decaprenylphosphoryl-D-arabinose. We selected 23 compounds from the azaindole series for the computational study, drawn using MarvinSketch. Ligands were prepared using the Maestro molecular modeling interface, Schrodinger v10.5. Common pharmacophore hypotheses were developed by applying dataset thresholds to yield active and inactive sets of compounds. In total, 326 hypotheses were developed; on the basis of survival score, ADRRR (survival score: 5.453) was selected. The selected pharmacophore hypothesis was subjected to virtual screening, resulting in 1000 hits. Hits were prepared and docked against protein 4KW5 (an oxidoreductase), downloaded in .pdb format from the RCSB Protein Data Bank. The protein was prepared using the protein preparation wizard: it was preprocessed, and the workspace was analyzed using the OPLS 2005 force field. The Glide grid was generated by picking a single atom in the molecule. Prepared ligands were docked to the prepared protein 4KW5 using Glide docking. After docking, the top five compounds were selected on the basis of Glide score (5223, 5812, 0661, 0662, and 2945, with Glide docking scores of -8.928, -8.534, -8.412, -8.411, and -8.351, respectively). Ligand-protein interactions were observed, specifically with HIS 132, LYS 418, TYR 230, and ASN 385, and pi-pi stacking was observed in a few compounds with the basic imidazo[4,5-b]pyridine ring. We had a basic azaindole ring in the parent compounds, but after Glide docking we obtained compounds with imidazo[4,5-b]pyridine as the basic ring. That might be the new lead in the drug discovery process.
Keywords: DprE1 inhibitors, in silico drug designing, imidazo [4,5-b] pyridine, lead, tuberculosis
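The hit-selection step described above is a simple ranking by docking score (more negative is better). The five scores come from the abstract; the extra entry is a hypothetical weaker hit added only to show the cut.

```python
# Rank hits by Glide docking score (more negative = better binding)
# and keep the top five. The first five scores are from the abstract;
# "hit-x" is a hypothetical weaker hit that falls below the cut.
scores = {"5223": -8.928, "5812": -8.534, "0661": -8.412,
          "0662": -8.411, "2945": -8.351, "hit-x": -7.9}
top5 = sorted(scores.items(), key=lambda kv: kv[1])[:5]
for compound, score in top5:
    print(f"{compound}: {score:.3f}")
```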
Procedia PDF Downloads 154
1280 3D Numerical Studies and Design Optimization of a Swallowtail Butterfly with Twin Tail
Authors: Arunkumar Balamurugan, G. Soundharya Lakshmi, V. Thenmozhi, M. Jegannath, V. R. Sanal Kumar
Abstract:
The aerodynamics of insects is of topical interest in the aeronautical industry due to its wide application to various types of Micro Air Vehicles (MAVs). Note that MAVs have small geometric dimensions, operate at significantly lower speeds on the order of 10 m/s, and their Reynolds numbers are approximately 150,000 or lower. In this paper, a numerical study has been carried out to capture the flow physics of a biologically inspired swallowtail butterfly with a fixed wing and twin tail at a flight speed of 10 m/s. Comprehensive numerical simulations have been carried out on the swallowtail butterfly with twin tail flying at a speed of 10 m/s, with uniform upper and lower angles of attack in both lateral and longitudinal positions, to identify the wing orientation with the best aerodynamic efficiency. The grid system in the computational domain was selected after detailed grid refinement exercises. Parametric analytical studies have been carried out with different lateral and longitudinal angles of attack to find the best aerodynamic efficiency at the same flight speed. The results reveal that the lift coefficient increases significantly with marginal changes in the longitudinal angle, and vice versa. For the drag coefficient, the conventional trend was observed, viz., drag increases at high longitudinal angles. We observed that the change of the twin tail section has a significant impact on the formation of vortices and on the aerodynamic efficiency of the MAV. We conclude that for every lateral angle there is an exact longitudinal orientation at which an aerodynamically efficient flying condition exists for any MAV. This numerical study is a pointer towards the design optimization of twin-tail MAVs with flapping wings.
Keywords: aerodynamics of insects, MAV, swallowtail butterfly, twin tail MAV design
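The flight condition quoted above pins down the Reynolds number, and the aerodynamic coefficients follow from the usual normalization by dynamic pressure and reference area. In the sketch below, the chord, wing area, and force readouts are assumed for illustration; only the flight speed and the target Re come from the abstract.

```python
# Flight condition from the abstract: 10 m/s, Re ~ 1.5e5 or lower.
rho, mu = 1.225, 1.81e-5        # air density (kg/m^3), viscosity (Pa s)
V, c, S = 10.0, 0.22, 0.006     # speed (m/s); chord (m) and area (m^2) assumed

Re = rho * V * c / mu           # chord-based Reynolds number
q = 0.5 * rho * V ** 2          # dynamic pressure, Pa

L_force, D_force = 0.30, 0.045  # CFD-style force readouts (illustrative), N
CL, CD = L_force / (q * S), D_force / (q * S)
print(f"Re = {Re:.2e}, CL = {CL:.2f}, CD = {CD:.3f}, L/D = {CL / CD:.1f}")
```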
Procedia PDF Downloads 395
1279 Freshwater Pinch Analysis for Optimal Design of the Photovoltaic Powered-Pumping System
Authors: Iman Janghorban Esfahani
Abstract:
Due to the increased use of irrigation in agriculture, the importance of and need for highly reliable water pumping systems have significantly increased. The pumping of groundwater is essential to provide water for both drip and furrow irrigation to increase the agricultural yield, especially in arid regions that suffer from scarcity of surface water. The most common irrigation pumping systems (IPS) consume conventional energy through the use of electric motors and generators or by connecting to the electricity grid. Due to the shortage and transportation difficulties of fossil fuels, unreliable access to the electricity grid (especially in rural areas), and the adverse environmental impacts of fossil fuel usage, such as greenhouse gas (GHG) emissions, the need for renewable energy sources such as photovoltaic systems (PVS) as an alternative way of powering irrigation pumping systems is urgent. Integration of photovoltaic systems with irrigation pumping systems as photovoltaic-powered irrigation pumping systems (PVP-IPS) can avoid fossil fuel dependency and the subsequent greenhouse gas emissions, as well as ultimately lower energy costs and improve efficiency, which makes PVP-IPS an environmentally and economically efficient solution for agricultural irrigation in every region. The greatest problem faced in integrating PVP with IPS is matching the intermittency of the energy supply with the dynamic water demand. The best solution to overcome the intermittency is to incorporate a storage system into the PVP-IPS to provide water on demand as a highly reliable stand-alone irrigation pumping system. The water storage tank (WST) is the most common storage device for PVP-IPS systems. In the integrated PVP-IPS with a water storage tank (PVP-IPS-WST), the tank stores the water pumped by the IPS in excess of the water demand and then delivers it when demand is high. Freshwater pinch analysis (FWaPA), as an alternative to mathematical modeling, has been used by other researchers for retrofitting off-grid, batteryless photovoltaic-powered reverse osmosis systems. However, freshwater pinch analysis has not previously been used to integrate photovoltaic systems with irrigation pumping systems with water storage tanks. In this study, FWaPA graphical and numerical tools were used to retrofit an existing PVP-IPS system located in Salahadin, Republic of Iraq. The plant includes a 5 kW submersible water pump and a 7.5 kW solar PV system. The Freshwater Composite Curve (graphical tool) and the Freshwater Storage Cascade Table (numerical tool) were constructed to determine the minimum required outsourced water during operation, the optimal amount of electricity delivered to the water pump, and the optimal size of the water storage tank for one year of operation data. The results of implementing FWaPA on the case study show that the PVP-IPS system with a WST, as the reliable system, can reduce outsourced water by 95.41% compared to the PVP-IPS system without a storage tank.
Keywords: irrigation, photovoltaic, pinch analysis, pumping, solar energy
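The Freshwater Storage Cascade can be pictured as a running water balance: cumulate pumped supply minus demand interval by interval; the most negative running level fixes the minimum outsourced water, and the level swing fixes the tank size. The 24-hour profiles below are invented for illustration and are not the Salahadin plant data.

```python
# Sketch of a freshwater storage cascade over one day (m^3 per hour).
# PV-driven pumping peaks at midday; demand peaks morning and evening.
# All profile numbers are illustrative.
pumped = [0] * 6 + [2, 4, 6, 8, 9, 9, 9, 8, 6, 4, 2, 0] + [0] * 6
demand = [1] * 6 + [5, 5, 3, 2, 2, 2, 2, 2, 3, 5, 6, 5] + [2] * 6

cascade, level = [], 0.0
for p, d in zip(pumped, demand):
    level += p - d                # running surplus (+) / deficit (-)
    cascade.append(level)

outsourced = -min(min(cascade), 0.0)       # water that must come from elsewhere
tank_size = max(cascade) - min(cascade)    # storage swing the tank must hold
print(f"min outsourced water: {outsourced:.1f} m^3, tank size: {tank_size:.1f} m^3")
```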
Procedia PDF Downloads 138
1278 The Intersection of Art and Technology: Innovations in Visual Communication Design
Authors: Sareh Enjavi
Abstract:
In recent years, the field of visual communication design has seen a significant shift in the way that art is created and consumed, with the advent of new technologies such as virtual reality, augmented reality, and artificial intelligence. This paper explores the ways in which technology is changing the landscape of visual communication design and how designers are incorporating new technological tools into their artistic practices. The primary objective of this research paper is to investigate the ways in which technology is influencing the creative process of designers and artists in the field of visual communication design. The paper also aims to examine the challenges and limitations that arise from the intersection of art and technology in visual communication design, and to identify strategies for overcoming these challenges. Drawing on examples from a range of fields, including advertising, fine art, and digital media, this paper highlights the exciting innovations that are emerging as artists and designers use technology to push the boundaries of traditional artistic expression. The paper argues that embracing technological innovation is essential for the continued evolution of visual communication design. By exploring the intersection of art and technology, designers can create new and exciting visual experiences that engage and inspire audiences in new ways. The research also contributes to the theoretical and methodological understanding of the intersection of art and technology, a topic that has gained significant attention in recent years. Ultimately, this paper emphasizes the importance of embracing innovation and experimentation in the field of visual communication design and highlights the exciting innovations that are emerging as a result of the intersection of art and technology.
Keywords: visual communication design, art and technology, virtual reality, interactive art, creative process
Procedia PDF Downloads 1181277 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation, and support data-driven decision making. The introduction of an 'artificial brain' into an organization also enables learning from the complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that provides users with useful suggestions for the following operations: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that they can be acted upon quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in Assisted-BIGAMES by a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Throughout the game, each participant continually asks which decisions to make in order to win the competition. To this end, the role of each team's VA is to guide the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice, and rely on their own experience, capability, and knowledge to support their decisions. Preliminary results show that the introduction of the VA speeds up learning of the decision-making process. The facilitator, designated the "Serious Game Controller" (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to those previously provided by the VA, ensuring a higher degree of robustness in decision-making.
Additionally, all the information should be jointly analyzed and assessed by the players, who are expected to add "Emotional Intelligence", an essential component absent from the machine learning process.Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
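As a purely hypothetical illustration of the kind of context-sensitive rule such a VA might apply (the paper does not publish the BIGAMES internals), the sketch below flags a bottleneck workstation from utilization KPIs and emits the sort of suggestions listed above; all station names, values, and thresholds are invented.

```python
# Hypothetical sketch of a rule-based recommendation step: scan
# workstation KPIs, flag the bottleneck, and suggest actions. All data
# and thresholds are invented for illustration.

stations = {
    "triage":    {"utilization": 0.72, "queue": 3},
    "x_ray":     {"utilization": 0.96, "queue": 11},
    "treatment": {"utilization": 0.81, "queue": 5},
}

def recommend(stations, util_high=0.90, util_low=0.75):
    tips = []
    bottleneck = max(stations, key=lambda s: stations[s]["utilization"])
    if stations[bottleneck]["utilization"] >= util_high:
        tips.append(f"Bottleneck at '{bottleneck}': assign a polyvalent "
                    f"resource or relocate a workstation to cut travel distance.")
    for name, kpi in stations.items():
        if kpi["utilization"] < util_low:
            tips.append(f"'{name}' has slack capacity: consider a partnership "
                        f"with the bottleneck team.")
    return tips

for tip in recommend(stations):
    print(tip)  # suggestions the players may accept or reject
```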
Procedia PDF Downloads 1051276 Role of Energy Storage in Renewable Electricity Systems in the Grid of Ethiopia
Authors: Dawit Abay Tesfamariam
Abstract:
Ethiopia’s Climate-Resilient Green Economy (ECRGE) strategy focuses mainly on the generation and proper utilization of renewable energy (RE). Nonetheless, the country's current electricity generation is dominated by hydropower. Data collected in 2016 by Ethiopian Electric Power (EEP) indicate that the intermittent RE sources, solar and wind energy, accounted for only 8%. On the other hand, the EEP generation plan for 2030 indicates that solar and wind sources will cover 36.1% of the energy generation share. Thus, a case study was initiated to model and compute the balance and consumption of electricity in three scenarios: 2016, 2025, and 2030, using the EnergyPLAN Model (EPM). Initially, the model was validated against the 2016 annual power generation data so that the EnergyPLAN (EP) analysis could be conducted for the two predictive scenarios. The EP simulation for 2016 showed no significant excess power generation. The EPM was then applied to analyze the role of energy storage for RE in the Ethiopian grid system. The results show that in 2025 there will be excess production averaging 402 MW, with a maximum of 7,963 MW. The excess power falls in the three rainy months of the year (June, July, and August). The model outcome also showed that in the dry seasons of the year there would be excess power production in the country. Consequently, based on the validated EP outcomes, there is good reason to consider alternatives for utilizing and storing the excess RE. From the scenarios and model results obtained, it is realistic to infer that if the excess power is coupled with a storage system, it can stabilize the grid and be exported to support the economy. Therefore, researchers must continue to upgrade current and upcoming storage systems so that they keep pace with the potential generation from renewable energy.Keywords: renewable energy, power, storage, wind, energy plan
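As a toy illustration of the balance behind these figures (not EnergyPLAN output and not the EEP dataset), the sketch below computes monthly excess as generation minus demand and totals the surplus that a storage system could absorb or export; all numbers are placeholders.

```python
# Illustrative monthly balance: excess = generation - demand.
# GWh figures are invented placeholders, NOT the EEP dataset.

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
generation = [1900, 1850, 1900, 2000, 2100, 2600,
              2900, 2800, 2200, 2000, 1900, 1900]   # GWh, assumed
demand = [1880, 1840, 1890, 1950, 2040, 2150,
          2200, 2180, 2100, 1990, 1890, 1900]       # GWh, assumed

excess = {m: g - d for m, g, d in zip(months, generation, demand)}
surplus = {m: e for m, e in excess.items() if e > 0}
print("months with storable/exportable surplus:", surplus)
print("annual surplus:", sum(surplus.values()), "GWh")
```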
Procedia PDF Downloads 771275 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Lately, cloud computing has been used to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm has emerged as a powerful tool for the optimum utilization of resources, for gaining competitiveness through cost reduction, and for achieving business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google, and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, with the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments; they also support "interactive user-facing applications" such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches, such as utility computing, Software-as-a-Service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, SUN, IBM, Oracle, and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases, and applications. Application services can include email, office applications, finance, video, audio, and data processing. By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities that improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to its use in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors, and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently and at a lower cost, gain competitive advantage.Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 3121274 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method
Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya
Abstract:
This paper adopts a top-down approach to mathematical modeling to predict size reduction from the micro- to the nano-scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two parts: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to Stefan's problem of a moving ice-water boundary. A fixed-grid method using the finite volume method is very popular for modeling etching on one- and two-dimensional substrates. Other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving particle boundary. Modeling of the above reaction was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight, and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60 g/mol, and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually became constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted against the radial coordinate at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate; this change becomes more gradual with time as the etch rate declines.Keywords: particle size reduction, micromixer, FDM modelling, wet etching
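For readers who want to reproduce the idea, the sketch below implements the described scheme in Python rather than the authors' Matlab: an explicit finite difference discretization of the spherical diffusion equation dC/dt = D(d²C/dr² + (2/r)dC/dr) with the stated boundary conditions, and a Stefan-type condition moving the particle surface at a velocity proportional to the local etchant flux. The bulk concentration, the grid, and the simplification of keeping the mesh fixed at the initial surface are assumptions of this sketch.

```python
import numpy as np

# Rough sketch of the explicit finite difference scheme described above.
# Spherical diffusion outside the particle:
#   dC/dt = D * (d2C/dr2 + (2/r) * dC/dr)
# BCs: C = 0 at the particle surface (reaction much faster than diffusion),
#      C = C_bulk at the outer boundary.
# Simplification (assumption): the grid stays attached to the INITIAL
# surface instead of being remeshed as the boundary recedes.

D = 1e-5           # diffusion coefficient, cm^2/s (from the abstract)
rho_M = 2.1 / 60   # molar density = density / molecular weight, mol/cm^3
C_bulk = 1e-3      # bulk etchant concentration, mol/cm^3 (assumed)
R0 = 25e-4         # initial particle radius, cm (50-micron particle)
R_out = 10 * R0    # outer boundary, assumed far enough away

N = 200
dr = (R_out - R0) / N
dt = 0.2 * dr**2 / D                 # well inside the explicit stability limit
r = np.linspace(R0, R_out, N + 1)
C = np.full(N + 1, C_bulk)
C[0] = 0.0

R, t = R0, 0.0
while R > 0.5 * R0:                  # etch away half the radius
    lap = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dr**2
    grad = (C[2:] - C[:-2]) / (2.0 * dr)
    C[1:-1] += dt * D * (lap + 2.0 * grad / r[1:-1])
    C[0], C[-1] = 0.0, C_bulk        # re-impose boundary conditions
    flux = D * (C[1] - C[0]) / dr    # etchant flux at the surface, mol/(cm^2 s)
    R -= dt * flux / rho_M           # Stefan condition: velocity = flux / rho_M
    t += dt

print(f"radius {R * 1e4:.1f} um reached after {t:.1f} s "
      f"(mean etch rate {(R0 - R) / t * 1e4:.4f} um/s)")
```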
Procedia PDF Downloads 4311273 I, Me and the Bot: Forming a Theory of Symbolic Interactivity with a Chatbot
Authors: Felix Liedel
Abstract:
The rise of artificial intelligence has numerous and far-reaching consequences. In addition to the obvious consequences for entire professions, the increasing interaction with chatbots also has a wide range of social consequences and implications. We are already increasingly used to interacting with digital chatbots, be it in virtual consulting situations, in creative development processes, or even in building personal or intimate virtual relationships. A media-theoretical classification of these phenomena has so far been difficult, partly because the interactive element in the exchange with artificial intelligence has undeniable similarities to human-to-human communication but is not identical to it. The proposed study therefore aims to reformulate the concept of symbolic interaction, in the tradition of George Herbert Mead, as symbolic interactivity in communication with chatbots. In particular, Mead's socio-psychological considerations will be brought into dialog with the specific conditions of digital media, the special dispositive situation of chatbots, and the characteristics of artificial intelligence. One example that illustrates this particular communication situation with chatbots is so-called consensus fiction: in face-to-face communication, we use symbols on the assumption that they will be interpreted in the same or a similar way by the other person. When briefing a chatbot, it quickly becomes clear that this is by no means the case: only the bot's response shows whether the initial request corresponds to the sender's actual intention. This makes it clear that chatbots do not just respond to requests. Rather, they function both as projection surfaces for their communication partners and as distillations of generalized social attitudes. The personalities of the chatbot avatars result, on the one hand, from the way we behave towards them and, on the other, from the content they have learned in advance. Similarly, we interpret the response behavior of the chatbots and make it the subject of our own actions with them. In conversation with a virtual chatbot, we enter into a dialog with ourselves but also with the content that the chatbot has previously learned. In our exchanges with chatbots, we therefore interpret socially influenced signs and behave towards them in an individual way, according to the conditions that the medium deems acceptable. This leads to the emergence of situationally determined digital identities that are in exchange with the real self but are not identical to it: in conversation with digital chatbots, we bring our own impulses, which are placed in permanent negotiation with a generalized social attitude by the chatbot. This also raises numerous media-ethical follow-up questions. The proposed approach is a continuation of my dissertation on moral decision-making in so-called interactive films, in which I attempted to develop a concept of symbolic interactivity based on Mead. Current developments in artificial intelligence are now opening up new areas of application.Keywords: artificial intelligence, chatbot, media theory, symbolic interactivity
Procedia PDF Downloads 521272 Ebola Virus Glycoprotein Inhibitors from Natural Compounds: Computer-Aided Drug Design
Authors: Driss Cherqaoui, Nouhaila Ait Lahcen, Ismail Hdoufane, Mehdi Oubahmane, Wissal Liman, Christelle Delaite, Mohammed M. Alanazi
Abstract:
The Ebola virus is a highly contagious and deadly pathogen that causes Ebola virus disease. The Ebola virus glycoprotein (EBOV-GP) is a key factor in viral entry into host cells, making it a critical target for therapeutic intervention. Using a combination of computational approaches, this study focuses on the identification of natural compounds that could serve as potent inhibitors of EBOV-GP. The 3D structure of EBOV-GP was selected, with missing residues modeled, and this structure was minimized and equilibrated. Two large natural compound databases, COCONUT and NPASS, were chosen and filtered based on toxicity risks and Lipinski’s Rule of Five to ensure drug-likeness. Following this, a pharmacophore model, built from 22 reported active inhibitors, was employed to refine the selection of compounds with a focus on structural relevance to known Ebola inhibitors. The filtered compounds were subjected to virtual screening via molecular docking, which identified ten promising candidates (five from each database) with strong binding affinities to EBOV-GP. These compounds were then validated through molecular dynamics simulations to evaluate their binding stability and interactions with the target. The top three compounds from each database were further analyzed using ADMET profiling, confirming their favorable pharmacokinetic properties, stability, and safety. These results suggest that the selected compounds have the potential to inhibit EBOV-GP, offering new avenues for antiviral drug development against the Ebola virus.Keywords: EBOV-GP, Ebola virus glycoprotein, high-throughput drug screening, molecular docking, molecular dynamics, natural compounds, pharmacophore modeling, virtual screening
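As a small illustration of the drug-likeness filtering stage (the abstract does not name its screening toolkit, so the use of RDKit here is an assumption), the sketch below applies Lipinski's Rule of Five to SMILES strings; the example molecules are placeholders, not entries from COCONUT or NPASS.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Sketch of a Rule-of-Five pre-filter. RDKit is an assumed toolkit;
# the SMILES below are illustrative, not database entries.

def passes_lipinski(smiles: str) -> bool:
    """Lipinski's Rule of Five: MW <= 500, logP <= 5,
    H-bond donors <= 5, H-bond acceptors <= 10."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                      # unparsable entry -> reject
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",                              # aspirin (toy example)
    "COc1cc(/C=C/C(=O)CC(=O)/C=C/c2ccc(O)c(OC)c2)ccc1O",  # curcumin (toy example)
]
drug_like = [smi for smi in candidates if passes_lipinski(smi)]
print(f"{len(drug_like)}/{len(candidates)} pass the Rule of Five")
```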
Procedia PDF Downloads 221271 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks
Authors: Antonio Pizzarello, Oris Friesen
Abstract:
Networks, such as the electric power grid, must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software. Integrity of the deployed software is key, both for the original version and for the many versions produced throughout numerous maintenance activities. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks in which each node is an independent computer system. The connections between nodes are realized via a network that is normally redundantly connected, to guarantee the presence of a path between two nodes if some branch fails. Furthermore, at each node there is software which may fail. Self-stabilizing protocols are usually present that recognize failure in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Superstabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows code to be analyzed to verify the progress property p leads-to q, stating that every computation starting in a state satisfying p progresses to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test and evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software designed to recover from failure without the external intervention of maintenance personnel. The model to be analyzed is obtained by automatic translation of the system code into a transition system based on the use of the weakest precondition.Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition
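To ground the self-stabilization concept, here is a minimal sketch (illustrative, not from the paper) of Dijkstra's classic K-state token ring, the family of protocols the abstract cites; in UNITY terms it realizes the progress property "any state leads-to exactly one privilege": from an arbitrary, even corrupted, configuration the ring converges to a single circulating token. All names and parameters are illustrative.

```python
import random

# Minimal sketch of Dijkstra's K-state self-stabilizing token ring.
# A machine "holds a privilege" (the token) when its guard is enabled;
# from ANY starting configuration the protocol converges to exactly one
# privilege circulating forever.

N = 5            # machines 0..N-1 arranged in a ring
K = N + 1        # K > N is required for convergence

def privileged(S):
    """Indices of machines whose guard is currently enabled."""
    p = [0] if S[0] == S[-1] else []
    return p + [i for i in range(1, N) if S[i] != S[i - 1]]

random.seed(1)
S = [random.randrange(K) for _ in range(N)]  # arbitrary (possibly corrupt) state
for step in range(1000):
    p = privileged(S)                        # at least one machine is always enabled
    if len(p) == 1:
        print(f"stabilized after {step} moves; token at machine {p[0]}")
        break
    i = random.choice(p)                     # nondeterministic scheduler
    if i == 0:
        S[0] = (S[0] + 1) % K                # bottom machine increments its state
    else:
        S[i] = S[i - 1]                      # other machines copy the left neighbor
```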
Procedia PDF Downloads 223