Search results for: power consumption
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9026

476 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad range of important applications in meteorology and oceanography, especially the large-scale flows of the ocean and atmosphere, as well as in many physical and industrial settings. The Coriolis and centripetal forces resulting from the rotation of the earth play a crucial role in such systems. For such applications, it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier-Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor-Hood pairs, to support accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier-Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods such as Scott-Vogelius pairs. This approach may, however, require a modification of the meshes, such as the use of barycentric-refined grids in the case of Scott-Vogelius pairs. That strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system.
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to other types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott-Vogelius element, the pressure-wired Stokes element, for which the inf-sup constant is independent of nearly-singular vertices.
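For orientation, the rotating-frame system discussed above is commonly written as follows (a standard form; the exact nondimensionalization used by the authors may differ), with the centripetal force absorbed into the pressure gradient:

```latex
\begin{aligned}
\partial_t u - \nu \Delta u + (u \cdot \nabla) u
  + 2\,\omega \times u + \nabla p &= f
  && \text{(momentum, with Coriolis term } 2\,\omega \times u\text{)},\\
\nabla \cdot u &= 0
  && \text{(incompressibility)},
\end{aligned}
```

where $u$ is the velocity, $p$ the (modified) pressure, $\nu$ the viscosity, $\omega$ the angular velocity, and $f$ the body force. A pressure-robust discretization keeps the velocity error bound free of the pressure term scaled by an inverse power of $\nu$.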

Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces

Procedia PDF Downloads 64
475 Alkali Activation of Fly Ash, Metakaolin and Slag Blends: Fresh and Hardened Properties

Authors: Weiliang Gong, Lissa Gomes, Lucile Raymond, Hui Xu, Werner Lutze, Ian L. Pegg

Abstract:

Alkali-activated materials, particularly geopolymers, have attracted much interest in academia, and commercial applications are on the rise as well. Geopolymers are typically produced by a reaction of one or two aluminosilicates with an alkaline solution at room temperature. Fly ash is an important aluminosilicate source. However, low-Ca fly ash, the byproduct of burning hard (black) coal, reacts and sets slowly at room temperature, and the development of mechanical durability, e.g., compressive strength, is slow as well. Fly ashes with relatively high contents (>6%) of unburned carbon, i.e., high loss on ignition (LOI), are particularly disadvantageous. This paper will show to what extent these impediments can be mitigated by mixing the fly ash with one or two more aluminosilicate sources. The fly ash used here is generated at the Orlando power plant (Florida, USA). It is low in Ca (<1.5% CaO) and has a high LOI of >6%. The additional aluminosilicate sources are metakaolin and blast furnace slag. Binary fly ash-metakaolin and ternary fly ash-metakaolin-slag geopolymers were prepared. Properties of the geopolymer pastes before and after setting have been measured. Fresh mixtures of aluminosilicates with an alkaline solution were studied by Vicat needle penetration, rheology, and isothermal calorimetry up to initial setting and beyond. The hardened geopolymers were investigated by SEM/EDS, and the compressive strength was measured. Initial setting (fluid-to-solid transition) was indicated by a rapid increase in yield stress and plastic viscosity. The rheological times of setting were always smaller than the Vicat times of setting. Both times of setting decreased with increasing replacement of fly ash with blast furnace slag in the ternary fly ash-metakaolin-slag geopolymer system. As expected, setting with only Orlando fly ash was the slowest. Replacing 20% of the fly ash with metakaolin shortened the set time.
Replacing increasing fractions of fly ash in the binary system by blast furnace slag (up to 30%) shortened the time of setting even further. The 28-day compressive strength increased drastically from <20 MPa to 90 MPa. The most interesting finding relates to the calorimetric measurements. The use of two or three aluminosilicates generated significantly more heat (20 to 65%) than calculated from the weighted sum of the individual aluminosilicates. This synergetic heat may be responsible for much of the increase in compressive strength of our binary and ternary geopolymers. The synergetic heat effect may also be related to increased incorporation of calcium into sodium aluminosilicate hydrate to form a hybrid (N,C)-A-S-H gel. The time of setting will be correlated with heat release and maximum heat flow.
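The "synergetic heat" comparison above is simply the excess of the measured cumulative heat over a weighted mixture rule. A minimal sketch of that calculation (the function name and all numeric values are illustrative, not taken from the paper):

```python
def synergetic_heat_excess(measured_heat, fractions, component_heats):
    """Percent excess of the measured cumulative heat (e.g. J/g of binder)
    over the weighted sum of the individual aluminosilicates' heats."""
    expected = sum(f * h for f, h in zip(fractions, component_heats))
    return 100.0 * (measured_heat - expected) / expected

# Hypothetical 50/30/20 fly ash / metakaolin / slag blend:
# measured 180 J/g vs. component heats of 80, 200 and 160 J/g.
excess = synergetic_heat_excess(180.0, [0.5, 0.3, 0.2], [80.0, 200.0, 160.0])
```

With these illustrative numbers the excess lands at roughly 36%, i.e. inside the 20-65% range reported in the abstract.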

Keywords: alkali-activated materials, binary and ternary geopolymers, blends of fly ash, metakaolin and blast furnace slag, rheology, synergetic heats

Procedia PDF Downloads 116
474 A Blueprint for Responsible Launch of Small Satellites from a Debris Perspective

Authors: Jeroen Rotteveel, Zeger De Groot

Abstract:

The small satellite community is increasingly aware of the need to operate responsibly and sustainably in order to secure the use of outer space in the long run. On the technical side, many debris mitigation techniques have been investigated and demonstrated on board small satellites, showing that, technically, a lot can be done to curb the growth of space debris and operate more responsibly. However, in the absence of strict laws and constraints, one cannot help but wonder what the incentive is to incur significant costs (paying for debris mitigation systems and the launch mass of these systems) and to lose performance onboard resource-limited small satellites (mass, volume, power). Many small satellite developers are operating under tight budgets, either from their sponsors (in the case of academic and research projects) or from their investors (in the case of startups). As long as it is not mandatory to act more responsibly, we might need to consider implementing incentives to stimulate developers to accommodate deorbiting modules, etc. ISISPACE joined the NetZeroSpace initiative in 2021 with the aim of playing its role in securing the use of low earth orbit for the next decades by facilitating more sustainable use of space. The company is in a good position as a satellite builder, a rideshare launch provider, and a technology development company. ISISPACE operates under one of the stricter space laws in the world in terms of maximum orbital lifetime and has been active in various debris mitigation and debris removal in-orbit demonstration missions over the past 10 years. ISISPACE proposes to introduce, together with launch partners and regulators, an incentive scheme for CubeSat developers to baseline debris mitigation systems on board their CubeSats in such a way that it does not impose too many additional costs on the project.
Much like incentives to switch to electric cars or install solar panels on your house, such an incentive can help to increase market uptake of behavior or solutions prior to legislation or bans of certain practices. This can be achieved by: introducing an extended launch volume in CubeSat deployers to accommodate debris mitigation systems without compromising the space available for the payload of the main mission; not charging the launch-mass fee for the additional debris mitigation module; and, whenever possible, finding ways to further co-fund the purchase price or otherwise reduce the cost of flying debris mitigation modules onboard CubeSats. The paper will outline the framework of such an incentive scheme and provide ISISPACE’s way forward to make this happen in the near future.

Keywords: netZerospace, cubesats, debris mitigation, small satellite community

Procedia PDF Downloads 154
473 Islamic Extremist Groups' Usage of Populism in Social Media to Radicalize Muslim Migrants in Europe

Authors: Muhammad Irfan

Abstract:

The rise of radicalization within Islam has spawned a new era of global terror. The battlefield successes of ISIS and the Taliban are fuelled by an ideological war waged, largely and successfully, in the media arena. This research will examine how Islamic extremist groups are using media modalities and populist narratives to influence migrant Muslim populations in Europe towards extremism. In 2014, ISIS shocked the world by exporting horrifically graphic forms of violence on social media. Their Muslim support base was largely disgusted and revolted. In response, they reconfigured their narrative by introducing populist 'hooks', astutely portraying the Muslim populace as oppressed and exploited by unjust, corrupt autocratic regimes and Western power structures. Within this crucible of real and perceived oppression, hundreds of thousands of the most desperate, vulnerable and abused migrants left their homelands, risking their lives in the hope of finding peace, justice, and prosperity in Europe. Instead, many encountered social stigmatization, detention and/or discrimination for being illegal migrants, for lacking resources, and for simply being Muslim. This research will examine how Islamic extremist groups are exploiting the disenfranchisement of these migrant populations and using populist messaging on social media to influence them towards violent extremism. ISIS, in particular, formulates specific encoded messages for newly arriving Muslims in Europe, preying upon their vulnerability. Violence is posited as a populist response to the tyranny of European oppression. This research will analyze the factors and indicators that propel Muslim migrants along the spectrum from resilience to violent extremism. Expected outcomes are identification of the factors that influence vulnerability towards violent extremism; an early-warning detection framework; predictive analysis models; and de-radicalization frameworks.
This research will provide valuable tools (practical and policy level) for European governments, security stakeholders, communities, policy-makers, and educators; it is anticipated to contribute to a de-escalation of Islamic extremism globally.

Keywords: populism, radicalization, de-radicalization, social media, ISIS, Taliban, shariah, jihad, Islam, Europe, political communication, terrorism, migrants, refugees, extremism, global terror, predictive analysis, early warning detection, models, strategic communication, populist narratives, Islamic extremism

Procedia PDF Downloads 119
472 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes.
The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
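The abstract does not specify which coupling estimator was used; a common choice is the mean-vector-length index computed from Hilbert-transform phase and amplitude. A minimal sketch under that assumption (band edges follow the abstract; the function names and the synthetic signal are ours):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass filter
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def tgc_mvl(x, fs, theta=(4, 8), gamma=(30, 80)):
    """Mean-vector-length estimate of theta-phase gamma-amplitude coupling,
    normalized to [0, 1] by the mean gamma envelope."""
    phase = np.angle(hilbert(bandpass(x, *theta, fs)))   # theta phase
    amp = np.abs(hilbert(bandpass(x, *gamma, fs)))       # gamma envelope
    return float(np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp))

# Synthetic check: gamma bursts locked to theta peaks couple strongly,
# while constant-amplitude gamma does not.
fs = 500
t = np.arange(0, 10, 1 / fs)
theta_wave = np.sin(2 * np.pi * 6 * t)
coupled = theta_wave + 0.5 * (1 + theta_wave) * np.sin(2 * np.pi * 50 * t)
uncoupled = theta_wave + np.sin(2 * np.pi * 50 * t)
```

On such signals the coupled trace yields a markedly larger index than the uncoupled one, which is the contrast the diagnostic analysis relies on.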

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 143
471 The Role of the Board of Directors and Chief Executive Officers in Leading and Embedding Corporate Social Responsibility within Corporate Governance Regulations

Authors: Khalid Alshaikh

Abstract:

In recent years, leadership, Corporate Governance (CG) and Corporate Social Responsibility (CSR) have been under scrutiny in Libyan society. Scholars and institutions have commenced investigating the possible resolutions they can arrange to alleviate the economic, social and environmental problems the war has produced. Thus far, these constructs require an in-depth reinvestigation, reconceptualization, and analysis to clearly reconstruct their rules and regulations. With the demise of Qaddafi’s regime, the levels, degrees, and efforts to apply CG regulations have varied between public and private commercial banks. CSR is a new organizational culture that is still finding its route within these financial institutions. Detaching itself from any notion of dictatorship and autocratic traits, leadership counts on transformational and transactional styles. Therefore, this paper investigates the extent to which boards of directors and Chief Executive Officers (CEOs) redefine these concepts and how they entrench CSR within the framework of CG. The research used public and private banks as case studies and employed qualitative interviews with ten board directors (BoDs) and eleven chief executive managers to explore how leadership, CG, and CSR are defined and how leadership integrates CSR into CG structures. The findings suggest that the CG framework in Libya still requires great effort to develop, and full implementation of the CG code appears daunting. Also, CSR is still influenced by the power of religion. Nevertheless, the Islamic perspective is more consistent with the social contract concept of CSR. The Libyan commercial banks do not solely focus on the economic side of maximizing profits but also concentrate on its morality. The issue is that CSR activities are not enough to achieve good charity publicly and need strategies to address major social issues.
Moreover, leadership is more transformational and transactional and endeavors to make economic, social and environmental changes, but these changes are curtailed by tradition and the traditional values dominating Libyan social life, where religious and tribal practices establish the relationship between leaders and their subordinates. Finally, the findings reveal that transformational and transactional leadership styles encourage the incorporation of CSR into the CG regulations. The boardroom and executive management have a particular role in flagging up how embedded corporate social responsibility is in organizational culture across the commercial banks, yet the BoDs and CEOs still need to do much more to embed corporate social responsibility through their core functions. They need to boost their standing to be more influential and make sure that the right discussions about CSR happen with the right stakeholders involved.

Keywords: board of directors, chief executive officers, corporate governance, corporate social responsibility

Procedia PDF Downloads 170
470 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience

Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore

Abstract:

INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of more than 8,000 TB cases quarterly and had the highest yearly notification of all the states in Nigeria from 2020 to 2022. The contribution of the state TB program to the national TB notification varied from 9% to 10% quarterly between the first quarter of 2022 and the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation and analysis for informed decision-making in the health system. During the second National TB Testing Week (NTBTW) in 2023, the Kano TB program aimed at early TB detection, prevention and treatment. The state TB Situation Room provided an avenue for coordination and surveillance through real-time data reporting, review, analysis and use during the NTBTW. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria, 2023. To showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The state TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis and feedback. A data room team was set up that received real-time data using a Google link. The data received were analyzed using the Power BI analytic tool with a statistical significance level of <0.05. RESULTS: At the end of the week-long activity, and using the real-time dashboard with onsite mentorship of the field workers, the state TB program screened a total of 52,054 people for TB out of 72,112 individuals eligible for screening (72% screening rate). A total of 9,910 presumptive TB clients were identified and evaluated for TB, leading to the diagnosis of 445 patients with TB (5% yield from presumptives) and the placement of 435 TB patients on treatment (98% enrolment).
CONCLUSION: The TB Situation Room (TSR) has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW has in no small measure improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states and nations is a positive step towards global TB eradication.
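The cascade percentages quoted above follow directly from the reported counts; a minimal sketch (the function and field names are ours, the numbers are from the abstract — note that 445/9,910 is closer to 4.5%, which the abstract rounds to 5%):

```python
def tb_cascade(eligible, screened, presumptive, diagnosed, enrolled):
    """TB care-cascade indicators as percentages."""
    return {
        "screening_rate": 100 * screened / eligible,        # screened of eligible
        "yield_from_presumptive": 100 * diagnosed / presumptive,
        "enrolment_rate": 100 * enrolled / diagnosed,       # started treatment
    }

kano = tb_cascade(eligible=72_112, screened=52_054,
                  presumptive=9_910, diagnosed=445, enrolled=435)
```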

Keywords: tuberculosis (tb), national tb testing week (ntbtw), tb situation room (tsr), information communication technology (ict)

Procedia PDF Downloads 71
469 Effect of Pre-bonding Storage Period on Laser-treated Al Surfaces

Authors: Rio Hirakawa, Christian Gundlach, Sven Hartwig

Abstract:

In recent years, the use of aluminium has further expanded and is expected to replace steel in the future as vehicles become lighter and more recyclable in order to reduce greenhouse gas (GHG) emissions and improve fuel economy. In line with this, structures and components are becoming increasingly multi-material, with different materials, including aluminium, being used in combination to improve mechanical utility and performance. A common method of assembling dissimilar materials is mechanical fastening, but it has several drawbacks, such as increased manufacturing processes and the influence of substrate-specific mechanical properties. Adhesive bonding and fusion bonding are methods that overcome the above disadvantages. In these two joining methods, surface pre-treatment of the substrate is always necessary to ensure the strength and durability of the joint. Previous studies have shown that laser surface treatment improves the strength and durability of the joint. Yan et al. showed that laser surface treatment of aluminium alloys changes α-Al2O3 in the oxide layer to γ-Al2O3. As γ-Al2O3 has a large specific surface area, is very porous and chemically active, laser-treated aluminium surfaces are expected to undergo physico-chemical changes over time and adsorb moisture and organic substances from the air or storage atmosphere. The impurities accumulated on the laser-treated surface may be released at the adhesive and bonding interface by the heat input to the bonding system during the joining phase, affecting the strength and durability of the joint. However, only a few studies have discussed the effect of such storage periods on laser-treated surfaces. 
This paper, therefore, investigates the ageing of laser-treated aluminium alloy surfaces through thermal analysis, electrochemical analysis and microstructural observations. AlMg3 of 0.5 mm and 1.5 mm thickness was cut using a water-jet cutting machine, cleaned and degreased with isopropanol, and surface pre-treated with a pulsed fibre laser at 1060 nm wavelength, 70 W maximum power and 55 kHz repetition frequency. The aluminium surface was then analysed using SEM, thermogravimetric analysis (TGA), Fourier-transform infrared spectroscopy (FTIR) and cyclic voltammetry (CV) after storage in air for various periods ranging from one day to several months. TGA and FTIR analysed impurities adsorbed on the aluminium surface, while CV revealed changes in the true electrochemically active surface area. SEM also revealed visual changes on the treated surface. In summary, the changes in the laser-treated aluminium surface with storage time were investigated, and the final results were used to determine an appropriate storage period.

Keywords: laser surface treatment, pre-treatment, adhesion, bonding, corrosion, durability, dissimilar material interface, automotive, aluminium alloys

Procedia PDF Downloads 80
468 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST was classified more accurately than the colored CIFAR-10. This research will evaluate the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the quantum convolutional neural network (QCNN) have not yet been developed for colored images to determine how much better they are than classical approaches. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
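The rotation-encoding step described above can be illustrated without quantum hardware: each greyscale pixel sets the angle of a single-qubit RY rotation, and the Pauli-Z expectation value of the rotated qubit becomes one extracted feature. A minimal numpy sketch of that idea (the abstract's actual circuit runs on PennyLane and presumably includes entangling layers, which are omitted here):

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode_patch(pixels):
    """Angle-encode an image patch: each pixel (scaled to [0, 1]) rotates
    one qubit starting in |0>, and the <Z> expectation of that qubit
    becomes one feature, quanvolution-style."""
    feats = []
    for p in pixels:
        state = ry(np.pi * p) @ np.array([1.0, 0.0])   # RY(pi*p)|0>
        z = abs(state[0]) ** 2 - abs(state[1]) ** 2    # <Z> expectation
        feats.append(z)
    return feats

# A 2x2 patch: black, mid-grey, white, quarter-grey
features = encode_patch([0.0, 0.5, 1.0, 0.25])
```

A black pixel maps to +1, a white pixel to -1, and mid-grey to 0, so the mapping is a smooth nonlinear feature extractor; in the hybrid scheme these expectation values feed the classical layers of the network.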

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 129
467 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper will examine the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks. The paper will draw conclusions on the possible strategic impact on battlefield outcomes in modern armed conflicts of the widespread use of dual-use UAVs. This article will contribute to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, many of them dual-use and widely available and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. Thus, they function as force multipliers enabling kinetic and electronic warfare attacks and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs give commanders considerable opportunities; for example, because they can be operated without GPS navigation, they are less vulnerable to and less dependent on satellite communications. They can be and have been used to conduct cyberattacks, electromagnetic interference, and kinetic attacks. However, they are highly vulnerable to those attacks themselves.
So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electronic-interference dimensions of the use of dual-use UAVs. Studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that the proliferation of dual-use commercial UAVs in armed and hybrid conflicts will continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper will offer a unique analysis of small UAVs from the perspective of both the opportunities and the risks for commanders and other actors in armed conflict.

Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 102
466 A Conceptual Model of the Factors Affecting Saudi Citizens' Use of Social Media to Communicate with the Government

Authors: Reemiah Alotaibi, Muthu Ramachandran, Ah-Lian Kor, Amin Hosseinian-Far

Abstract:

In the past decade, developers of Web 2.0 technologies have shown increasing interest in the topic of e-government. There has been rapid growth in social media technology because of its significant role in backing up some essential social needs. Its importance and power are derived from its capacity to support two-way communication. Governments are keen to engage on these platforms, hoping to benefit from the new forms of communication and interaction offered by such technology. Greater participation by the public can be viewed as a chief indicator of effective government communication. Yet, the level of public participation in government 2.0 is not quite satisfactory. In general, it is still at an early stage in most developing countries, including Saudi Arabia. Although Saudis are among the most active users of social media, the number of people who use social media to communicate with public institutions is not high. Furthermore, most governmental organisations are not using social media tools to communicate with the public; rather, they use these platforms to disseminate information. Our study focuses on the factors affecting citizens’ adoption of social media in Saudi Arabia. Our research question is: what are the factors affecting Saudi citizens’ use of social media to communicate with the government? To answer this research question, the research aims to validate the UTAUT model for examining social media tools from the citizen’s perspective. An amendment will be proposed to fit the adoption of social media platforms as a communication channel with government, using a conceptual model that integrates constructs from the UTAUT model and other external variables drawn from the literature review. The set of potential factors that affect citizens' decisions to adopt social media to communicate with their government has been identified as perceived encouragement, trust and cultural influence.
The connections between the above-mentioned constructs form the basis of the research hypotheses and will be examined using a quantitative methodology. Data will be collected through a survey targeting Saudi citizens who are social media users, and the data collected will then be analysed using statistical methods. The outcomes of this research project are argued to have potential contributions to the fields of social media and e-government adoption, on both the theoretical and practical levels. It is believed that this research project is the first of its type to attempt to identify the factors that affect citizens’ adoption of social media to communicate with the government. The importance of identifying these factors stems from their potential use to enhance the government’s implementation of social media and to support more accurate decisions and strategies based on an understanding of the most important factors that affect citizens’ decisions.

Keywords: social media, adoption, citizen, UTAUT model

Procedia PDF Downloads 418
465 Smart Contracts: Bridging the Divide Between Code and Law

Authors: Abeeb Abiodun Bakare

Abstract:

The advent of blockchain technology has birthed a revolutionary innovation: smart contracts. These self-executing contracts, encoded within the immutable ledger of a blockchain, hold the potential to transform the landscape of traditional contractual agreements. This research paper embarks on a comprehensive exploration of the legal implications surrounding smart contracts, delving into their enforceability and their profound impact on traditional contract law. The first section of this paper delves into the foundational principles of smart contracts, elucidating their underlying mechanisms and technological intricacies. By harnessing the power of blockchain technology, smart contracts automate the execution of contractual terms, eliminating the need for intermediaries and enhancing efficiency in commercial transactions. However, this technological marvel raises fundamental questions regarding legal enforceability and compliance with traditional legal frameworks. Moving beyond the realm of technology, the paper proceeds to analyze the legal validity of smart contracts within the context of traditional contract law. Drawing upon established legal principles, such as offer, acceptance, and consideration, we examine the extent to which smart contracts satisfy the requirements for forming a legally binding agreement. Furthermore, we explore the challenges posed by jurisdictional issues as smart contracts transcend physical boundaries and operate within a decentralized network. Central to this analysis is the examination of the role of arbitration and dispute resolution mechanisms in the context of smart contracts. While smart contracts offer unparalleled efficiency and transparency in executing contractual terms, disputes inevitably arise, necessitating mechanisms for resolution. 
We investigate the feasibility of integrating arbitration clauses within smart contracts, exploring the potential for decentralized arbitration platforms to streamline dispute resolution processes. Moreover, this paper explores the implications of smart contracts for traditional legal intermediaries, such as lawyers and judges. As smart contracts automate the execution of contractual terms, the role of legal professionals in contract drafting and interpretation may undergo significant transformation. We assess the implications of this paradigm shift for legal practice and the broader legal profession. In conclusion, this research paper provides a comprehensive analysis of the legal implications surrounding smart contracts, illuminating the intricate interplay between code and law. While smart contracts offer unprecedented efficiency and transparency in commercial transactions, their legal validity remains subject to scrutiny within traditional legal frameworks. By navigating the complex landscape of smart contract law, we aim to provide insights into the transformative potential of this groundbreaking technology.

Keywords: smart-contracts, law, blockchain, legal, technology

Procedia PDF Downloads 45
464 Enhanced Thermal and Electrical Properties of Terbium Manganate-Polyvinyl Alcohol Nanocomposite Film

Authors: Monalisa Halder, Amit K. Das, Ajit K. Meikap

Abstract:

Polymer nanocomposites are significant materials in both academia and industry because of their diverse potential applications in electronics. The polymer plays the role of the matrix element, offering low density, flexibility, good mechanical strength, and good electrical properties. Using a nanosized multiferroic filler in the polymer matrix makes it possible to achieve nanocomposites with an enhanced magneto-dielectric effect and good mechanical properties at the same time. Multiferroic terbium manganate (TbMnO₃) nanoparticles have been synthesized by the sol-gel method using chloride precursors. A terbium manganate-polyvinyl alcohol (TbMnO₃-PVA) nanocomposite film has been prepared by the solution casting method. The crystallite size of the TbMnO₃ nanoparticles has been calculated to be ~40 nm from XRD analysis. The morphology of the samples has been studied by scanning electron microscopy, and a good dispersion of the nanoparticles in the PVA matrix has been found. Thermogravimetric analysis (TGA) shows enhanced thermal stability of the nanocomposite film with the inclusion of TbMnO₃ nanofiller in the PVA matrix. The electrical transport properties of the nanocomposite film have been studied in the frequency range 20 Hz - 2 MHz at and above room temperature. The frequency-dependent variation of ac conductivity follows the universal dielectric response (UDR), obeying Jonscher's sublinear power law. Correlated barrier hopping (CBH) is the dominant charge transport mechanism, with a maximum barrier height of 19 meV above room temperature. The variation of the dielectric constant with frequency has been studied at different temperatures. The real part of the dielectric constant at 1 kHz and room temperature is ~8, which is higher than that of the pure PVA film (~6). The dielectric constant decreases with increasing frequency. Relaxation peaks have been observed in the variation of the imaginary part of the electric modulus with frequency.
The relaxation peaks shift towards higher frequency as temperature increases probably due to the existence of interfacial polarization in the sample in presence of applied electric field. The current-voltage (I-V) characteristics of the nanocomposite film have been studied under ±40 V applied at different temperatures. I-V characteristic exhibits temperature dependent rectifying nature indicating the formation of Schottky barrier diode (SBD) with barrier height 23 meV. In conclusion, using multiferroic TbMnO₃ nanofiller in PVA matrix, enhanced thermal stability and electrical properties can be achieved.
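The Jonscher fit mentioned above, σ(ω) = σ_dc + Aω^s with s < 1, can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the authors' analysis code, and the values of A, s, and σ_dc are hypothetical.

```python
import numpy as np

def fit_jonscher(freq, sigma_ac, sigma_dc):
    """Fit the Jonscher universal dielectric response
    sigma(omega) = sigma_dc + A * omega**s  (s < 1 for CBH-type hopping)
    by linear regression in log-log space on the dispersive part."""
    omega = 2 * np.pi * np.asarray(freq)
    dispersive = np.asarray(sigma_ac) - sigma_dc
    s, log_a = np.polyfit(np.log(omega), np.log(dispersive), 1)
    return np.exp(log_a), s

# Synthetic check: generate data with known A and s, then recover them.
freq = np.logspace(1.3, 6.3, 50)        # ~20 Hz to ~2 MHz sweep
omega = 2 * np.pi * freq
A_true, s_true, sigma_dc = 1e-9, 0.7, 1e-7
sigma = sigma_dc + A_true * omega**s_true
A_fit, s_fit = fit_jonscher(freq, sigma, sigma_dc)
```

On noiseless synthetic data the regression recovers the exponent s and prefactor A essentially exactly; on measured conductivity data one would fit over the dispersive (high-frequency) region only.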

Keywords: correlated barrier hopping, nanocomposite, Schottky diode, TbMnO₃, TGA

Procedia PDF Downloads 127
463 Compressed Natural Gas (CNG) Injector Research for Dual Fuel Engine

Authors: Adam Majczak, Grzegorz Barański, Marcin Szlachetka

Abstract:

Environmental considerations necessitate the search for new energy sources. One of the available solutions is the partial replacement of diesel fuel by compressed natural gas (CNG) in compression ignition engines. This type of engine is used mainly in vans and trucks, and such units are also gaining popularity in the passenger car market; in Europe, their share of this market reaches 50%. Diesel engines are also used in industry, in vehicles such as ships or locomotives. Diesel engines have higher emissions of nitrogen oxides than spark ignition engines. This can currently be limited by optimizing the combustion process and by using additional systems such as exhaust gas recirculation or AdBlue technology. The combustion of diesel fuel also emits particulate matter (PM), which is harmful to human health; its emission is limited by the use of a particulate filter. One method of reducing the emission of toxic components is the use of gaseous fuels such as liquefied petroleum gas (LPG, propane and butane) or compressed natural gas (CNG). In addition to the environmental aspects, there are also economic reasons for using gaseous fuels to power diesel engines. A total or partial replacement of the diesel fuel is possible. Depending on the technology used and the percentage of diesel fuel replaced, it is possible to reduce nitrogen oxides in the exhaust gas by up to 30%, particulate matter (PM) by 95%, and carbon monoxide by 20%, relative to the original diesel fuel. The research object is a prototype gas injector designed for direct injection of compressed natural gas (CNG) in compression ignition engines. The construction of the injector allows it to be positioned in the glow plug socket, so that the gas is injected directly into the combustion chamber.
A cycle analysis of the four-cylinder Andoria ADCR engine with a capacity of 2.6 dm³ at different crankshaft rotational speeds made it possible to determine the time available for fuel injection and, from that, the mass flow rate the injector must provide in order to replace as much of the original fuel as possible with gaseous fuel. To ensure a high flow rate inside the injector, a supply pressure of 1 MPa was applied. A high gas supply pressure requires high valve-opening forces; for this purpose, an injector with a hydraulic control system, using pressurized liquid for the opening process, was designed. On the basis of air pressure measurements in the flow line downstream of the injector, the opening and closing of the valve were analyzed. Measurements of the injector's outflow mass were also carried out. The results showed that the designed injector meets the requirements necessary to supply the ADCR engine with CNG fuel.
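As a rough illustration of how the required mass flow rate relates to supply pressure and orifice size, the ideal choked-nozzle relation can be sketched as below. This is a textbook estimate under assumed methane properties, and the discharge coefficient and orifice diameter are hypothetical values, not the authors' design data.

```python
import math

def choked_mass_flow(p0, T0, d_orifice, cd=0.8, gamma=1.32, R=518.3):
    """Ideal choked-nozzle mass flow of methane (approximating CNG).
    p0 [Pa]: supply pressure, T0 [K]: gas temperature, d_orifice [m].
    gamma and R are methane values; cd is an assumed discharge coefficient.
    Flow is choked when the downstream pressure ratio is below critical,
    so mass flow depends only on upstream conditions and orifice area."""
    area = math.pi * d_orifice**2 / 4
    crit = (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))
    return cd * area * p0 * math.sqrt(gamma / (R * T0)) * crit

# Example: 1 MPa supply, 300 K, 2 mm orifice (illustrative numbers only)
mdot = choked_mass_flow(1.0e6, 300.0, 2.0e-3)  # kg/s
```

With these assumed numbers the estimate lands on the order of a few grams per second, which shows why a 1 MPa supply pressure together with the orifice geometry sets the achievable replacement ratio.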

Keywords: CNG, diesel engine, gas flow, gas injector

Procedia PDF Downloads 493
462 Urban Waste Management for Health and Well-Being in Lagos, Nigeria

Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo

Abstract:

A high population growth rate, reactive infrastructure provision, and the inability of physical planning to cope with the pace of development are responsible for the waste-water crisis in the Lagos metropolis. The septic tank is still the most prevalent waste-water holding system; unfortunately, there is a dearth of septage treatment infrastructure. Public waste-water treatment statistics relative to the 23 million people in Lagos State are worrisome: 1.85 billion cubic meters of wastewater are generated on a daily basis, and only 5% of the 26 million population is connected to the public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explores a community-participatory waste-water management alternative in the Oworonshoki municipality of Lagos. The study is underpinned by decentralized waste-water management systems in built-up areas. The initiative addresses the five steps of the waste-water issue, including generation, storage, collection, processing, and disposal, through participatory decision-making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footprints. Structured interviews and focused group discussions with landlord associations in the CDA areas provided a collaborative platform for decision-making. Water stagnation in primary open drainage channels and natural retention ponds in the fringing wetlands is traceable to frequent climate-change-induced tidal influences in recent decades. A rise in the water table, resulting in septic-tank leakage and water pollution, is reported to be responsible for the increase in water-borne illnesses documented in primary health centers, in addition to the unhealthy dumping of solid wastes in the drainage channels. The effect of uncontrolled disposal renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand barrier-lagoon urban ecosystems.
A clustered decentralized system was conceptualized to serve 255 households. Stakeholders agreed on a public-private partnership initiative for efficient wastewater service delivery.

Keywords: health, infrastructure, management, septage, well-being

Procedia PDF Downloads 174
461 A Comparative Study on the Use of Learning Resources in Learning Biochemistry by MBBS Students at Ras Al Khaimah Medical and Health Sciences University, UAE

Authors: B. K. Manjunatha Goud, Aruna Chanu Oinam

Abstract:

The undergraduate medical curriculum is oriented towards training students to undertake the responsibilities of a physician. During the training period, adequate emphasis is placed on inculcating logical and scientific habits of thought; clarity of expression and independence of judgment; and the ability to collect and analyze information and to correlate it. At Ras Al Khaimah Medical and Health Sciences University (RAKMHSU), Biochemistry, a basic medical science subject, is taught in the 1st year of the 5-year medical course with vertical interdisciplinary interaction with all subjects. It needs to be taught and learned well enough for students to relate it to clinical cases, clinical problems in medicine, and future diagnostics, so that they can practice confidently and skillfully in the community. Based on these facts, a study was done to determine the extent of students' use of library resources and the impact of study materials on their preparation for examinations. It was a comparative cross-sectional study that included 100 first-year and 80 second-year students who had successfully completed the Biochemistry course. The purpose of the study was explained to all student participants. Information was collected on a pre-designed, pre-tested, and self-administered questionnaire. The questionnaire was validated by senior faculty members and pre-tested on students who were not involved in the study. The study results showed that 80.30% and 93.15% of 1st and 2nd year students, respectively, had a clear idea of the course outline given in the course handout or study guide. We also found that a statistically significant number of students agreed that they benefited from the practical sessions and from writing notes during class hours. A high percentage of students (50% and 62.02%) disagreed that reading only the handouts is enough for their examinations.
The study also showed that only 35% and 41% of students visited the library on a daily basis for learning, that around 65% of students used lecture notes and textbooks as tools for learning and understanding the subject, and that 45% and 53% of students used library resources (recommended textbooks) rather than online sources before the examinations. The results presented here show that students perceived that e-learning resources, such as PowerPoint presentations, along with textbook reading using the SQ4R technique, had a positive impact on various aspects of their learning in Biochemistry. Use of the library by students has an overall positive impact on the learning process; in the medical field especially, it enhances outcomes, and medical students become better equipped to treat patients. But it is also true that library use has been in decline, which will affect knowledge and outcomes. In conclusion, students have to be taught how to use the library as a learning tool, apart from relying on lecture handouts.
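The comparison of agreement rates between the two year groups can be checked with a standard two-proportion z-test. The sketch below uses the reported percentages and cohort sizes from the abstract, but the test itself is illustrative and not necessarily the statistical method the authors applied.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test statistic for comparing, e.g., the
    share of 1st- vs 2nd-year students agreeing with a survey item."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative: 80.30% of 100 first-years vs 93.15% of 80 second-years
# reported a clear idea of the course outline.
z = two_proportion_z(0.8030, 100, 0.9315, 80)
```

Here |z| exceeds 1.96, so at the 5% level the difference between the two cohorts would be judged statistically significant under this test.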

Keywords: medical education, learning resources, study guide, biochemistry

Procedia PDF Downloads 178
460 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System

Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li

Abstract:

The solid oxide fuel cell (SOFC) is a promising green technology which can achieve high electrical efficiency. Due to the high operating temperature of the SOFC stack, the high-temperature off-gases from the anode and cathode outlets are introduced into an afterburner to convert the chemical energy into thermal energy by combustion. The heat is recovered to preheat the fresh air and fuel gases before they pass through the stack during operation of the SOFC power generation system. For the afterburner of an SOFC system, temperature control with good thermal uniformity is important. A burner with a well-designed geometry can usually achieve satisfactory performance. To design an afterburner for an SOFC system, computational fluid dynamics (CFD) simulation can be adopted. In this paper, the hydrogen combustion characteristics in an afterburner with a simple geometry are studied using CFD. The burner consists of a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels, improving the synergy of the heat and flow fields in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studied the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner using three simulation models with a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion.
The equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed so that they contribute significantly to the heat transfer, i.e., the field synergy angle should be as small as possible. In the three study cases, the averaged synergy angle of the burner is about 85°, 84°, and 81°, respectively.
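The field synergy angle used in the analysis above is the local angle between the velocity vector and the temperature gradient, β = arccos(U·∇T / (|U||∇T|)). A minimal sketch of its computation, independent of any particular CFD solver, is:

```python
import numpy as np

def synergy_angle_deg(velocity, grad_T):
    """Local field synergy angle between the velocity vector U and the
    temperature gradient grad(T). Smaller angles mean the flow is better
    aligned with the gradient, so convection contributes more to heat
    transfer; 90 degrees means no convective contribution."""
    u = np.asarray(velocity, dtype=float)
    g = np.asarray(grad_T, dtype=float)
    cos_b = np.dot(u, g) / (np.linalg.norm(u) * np.linalg.norm(g))
    # Clip to guard against round-off pushing |cos| slightly above 1.
    return np.degrees(np.arccos(np.clip(cos_b, -1.0, 1.0)))

aligned = synergy_angle_deg([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])       # 0 deg
perpendicular = synergy_angle_deg([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # 90 deg
```

In a CFD post-processing step this would be evaluated cell by cell and then volume-averaged, which is how domain-averaged angles such as the 85°, 84°, and 81° reported above are obtained.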

Keywords: afterburner, combustion, field synergy, solid oxide fuel cell

Procedia PDF Downloads 135
459 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed. They face numerous challenges, including product wastage, low margins, a long time to break even, and low market penetration, to mention a few. E-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modeling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers have been identified from the literature and through semi-structured interviews conducted with managers having relevant experience in e-grocery supply chains. The experts were contacted through professional and social networks using a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using the open and axial coding method. The key enablers were categorized into themes, and the contextual relationships between these and the performance measures were sought from industry veterans. Using ISM, a hierarchical model of the enablers was developed, and MICMAC analysis identified their driving and dependence powers. Based on driving and dependence power, the enablers were categorized into four clusters, namely independent, autonomous, dependent, and linkage. The analysis found that information technology (IT) and manpower training act as key enablers for reducing lead time and enhancing online service quality. Many of the enablers fall under the linkage cluster, viz., frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness, and customized applications for different stakeholders, depicting these as critical in online food/grocery supply chains. Considering the perishable nature of the product being handled, the impact of the enablers on product quality was also identified.
Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers in fresh food for e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and will hence be applicable in the context of developing economies, where supply chains are evolving.
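The MICMAC step described above, deriving driving and dependence powers from the ISM reachability matrix and assigning the four clusters, can be sketched as follows. The enabler names and the reachability matrix here are hypothetical toy values, not the study's actual data.

```python
import numpy as np

def micmac_clusters(reach, labels):
    """Given a binary reachability matrix from ISM (reach[i][j] = 1 if
    enabler i drives enabler j, including self-loops on the diagonal),
    compute driving power (row sums) and dependence power (column sums),
    then assign each enabler to one of the four MICMAC clusters."""
    R = np.asarray(reach)
    driving = R.sum(axis=1)
    dependence = R.sum(axis=0)
    mid = R.shape[0] / 2  # conventional midpoint split of the power grid
    out = {}
    for i, name in enumerate(labels):
        if driving[i] > mid and dependence[i] > mid:
            cluster = "linkage"      # strong driver AND strongly dependent
        elif driving[i] > mid:
            cluster = "independent"  # strong driver, weakly dependent
        elif dependence[i] > mid:
            cluster = "dependent"
        else:
            cluster = "autonomous"
        out[name] = (int(driving[i]), int(dependence[i]), cluster)
    return out

# Toy 4-enabler example with a hypothetical reachability matrix.
reach = [[1, 1, 1, 1],
         [0, 1, 1, 1],
         [0, 0, 1, 0],
         [0, 0, 1, 1]]
result = micmac_clusters(reach, ["IT", "training", "freshness", "branding"])
```

In this toy instance, IT and training come out as strong drivers (independent cluster) while product freshness is heavily dependent, mirroring the kind of driver/dependent structure the study reports.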

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 203
458 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA measures fluctuations about local linear trends, and the scale invariance of these signals is well captured in the multifractal characterisation it provides. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals; correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in epileptic EEG signals, which quantify short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure as compared to the pre-seizure period, with a subsequent increase during the post-seizure period, while the long-term scaling exponent increases during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour: an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identifying the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient.
To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal line length, entropy, recurrence rate, and determinism for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are generally higher during the seizure period than their post-seizure values, whereas for some patients the post-seizure values exceed those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in better understanding the characterisation of epileptic EEG signals through nonlinear analysis.
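A minimal sketch of the DFA procedure described above, using first-order (linear) detrending, is given below. It is run here on synthetic white noise rather than EEG data; white noise should yield a scaling exponent near 0.5, while the persistent LRTC reported in the abstract corresponds to exponents between 0.5 and 1.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Detrended fluctuation analysis: integrate the mean-subtracted
    signal, detrend it in non-overlapping windows of each scale with a
    local linear fit, and regress log F(n) against log n to obtain the
    scaling exponent alpha."""
    y = np.cumsum(np.asarray(signal, dtype=float) - np.mean(signal))
    flucts = []
    for n in scales:
        n_win = len(y) // n
        mean_sq = 0.0
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            mean_sq += np.mean((seg - trend) ** 2)
        flucts.append(np.sqrt(mean_sq / n_win))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(8192), [16, 32, 64, 128, 256])
```

Applying the same routine separately to short and long window scales is one way to obtain the two scaling regimes (short-range and long-range) discussed in the abstract.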

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 176
457 Theory of Apokatástasis: "In This Way, While Paying Attention to Their Knowledge and Wisdom, Nonetheless, They Did Not Ask God about These Matters, as to Whether or Not They Are True..."

Authors: Pikria Vardosanidze

Abstract:

The term apokatástasis is Greek and means "re-establishment", the universal restoration. The term dates back to antiquity: in Stoic thought it denoted the end of a constantly recurring cycle of the universe and the beginning of a new one, and in Christendom it was established by the Eastern Fathers and Origen as the return of the entire created world to a state of goodness. "Universal resurrection" means the resurrection of mankind after the second coming of Jesus Christ. The first thing the Savior will do immediately upon His glorious coming is that "the dead will be raised up first by Christ." God's life-giving action will apply to all the dead, but not with the same result. The action of God also applies to the living, which is accomplished by changing their bodies. The degree of glorification of the resurrected body will be commensurate with the spiritual life: an unclean body will not be glorified, and its soul will not be happy. The resurrected body will be incorruptible, strong, and spiritual, but because of the action of the passions, all this will only bring suffering to such a body. The judgment judges both the soul and the flesh. At the same time, Holy Scripture nowhere says that at the Last Judgment anyone will be able to change their own position. In connection with this dogmatic teaching, one of the greatest fathers of the Church, St. Gregory of Nyssa, held a different view. He points out that the miracle of the resurrection is so glorious and sublime that it exceeds our faith. There are two important circumstances: one is the reality of the resurrection itself, and the other is the manner of its fulfillment. Gregory of Nyssa founds the first on the authority of Holy Scripture: Jesus Christ preached about the resurrection and also foretold many other events, all of which were later fulfilled.
Gregory of Nyssa clarifies the question of the substantiality of good and evil and the relationship between them, noting that only good has true existence, because it exists eternally in God. Evil, by contrast, has no self-subsistent substance and, therefore, no real existence; it appears only from time to time through the free will of man. As the holy father says, God is the supreme goodness that gives beings the power to exist; all who are without Him are non-existent. The father's above-mentioned opinion about universal apokatastasis derives from the thought of Origen. This teaching was condemned by the resolution of the Fifth Ecumenical Council, where it was unanimously stated by ecclesiastical figures that the doctrine of universal salvation is not valid. For if the resurrection were to take place in this way, that is, if all beings, including the evil spirit, were restored, then the age-long struggle between good and evil, the future judgment, and the eternal torment - all that Christian dogma acknowledges - would lose their meaning.

Keywords: apokatastasis, orthodox doctrine, Gregory of Nyssa, eschatology

Procedia PDF Downloads 111
456 Systematic Identification of Noncoding Cancer Driver Somatic Mutations

Authors: Zohar Manber, Ran Elkon

Abstract:

Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. 
In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected - in seven different cancer types - numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
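The recurrence test described above can be framed as a one-sided binomial test against a background mutation rate. The function below is an illustrative simplification (a single global per-base, per-sample background rate, with no covariates or local rate correction), and the example numbers are hypothetical.

```python
import math

def recurrence_pvalue(n_mutations, sre_length_bp, n_samples, bg_rate):
    """One-sided binomial test for recurrence of somatic mutations within
    a gene's set of regulatory elements (SRE): the p-value of observing
    at least n_mutations across sre_length_bp bases and n_samples genomes
    given a per-base, per-sample background mutation rate."""
    n = sre_length_bp * n_samples
    # P(X >= k) = 1 - P(X <= k - 1) for X ~ Binomial(n, bg_rate)
    cdf = sum(
        math.comb(n, k) * bg_rate**k * (1 - bg_rate) ** (n - k)
        for k in range(n_mutations)
    )
    return 1.0 - cdf

# Illustrative numbers: a 1 kb SRE across 1,000 genomes with a background
# rate of 1e-6 mutations per base per sample (expected count = 1);
# observing 5 mutations would flag the SRE as a candidate hotspot.
pval = recurrence_pvalue(5, 1000, 1000, 1e-6)
```

In practice such p-values would be computed per gene with both global and local background rates, as the abstract describes, and then corrected for multiple testing across all genes.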

Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements

Procedia PDF Downloads 104
455 A Comparative Case Study of Institutional Work in Public Sector Organizations: Creating Knowledge Management Practice

Authors: Dyah Adi Sriwahyuni

Abstract:

Institutional work has become a prominent contemporary perspective within institutional theory in organization studies. A wealth of studies has explored actors' activities in creating, maintaining, and disrupting institutions at the field level. However, the exploration of actors' work in creating new management practices at the organizational level has been somewhat limited. The current institutional work literature mostly describes the work of actors at the field level and ignores organizational actors who work to realize management practices. Organizational actors are defined here as actors who work to institutionalize a particular management practice within their organizations. The extant literature has also generalized the types of management practices, overlooking the unique characteristics of each management fashion and practice. To fill these gaps, this study aims to provide empirical evidence that contributes theoretically to institutional work through a comparative case study of organizational actors' creation of knowledge management (KM) practice in two public sector organizations in Indonesia. KM is a contemporary management practice employed to manage individual and organizational knowledge in order to improve organizational performance. This practice presents a suitable practical setting with which to develop a rich understanding of organizational actors' institutional work and its connection with technology. Drawing on and extending the work of Perkmann and Spicer (2008), this study explores the forms of institutional work performed by organizational actors, including their motivations, skills, challenges, and opportunities. The primary data collection methods are semi-structured interviews with knowledgeable actors and document analysis for validity and triangulation.
Following Eisenhardt's cross-case pattern approach, the researcher analyzed the collected data, focusing on within-group similarities and intergroup differences. The researcher coded interview data using NVivo and used documents to corroborate the findings. The study’s findings add to the wealth of institutional theory literature in organization studies, particularly institutional work related to management practices. This study builds a theory about the work of organizational actors in creating knowledge management practices. Using the perspective of institutional work, the research shows the roles of the various actors involved, their practices, and their relationship to technology (materiality), rather than focusing only on powerful actors, as institutional entrepreneurship theorizing has done. The development of knowledge management practices in the Indonesian public sector is also a significant additional contribution, given that the current KM literature is dominated by conceptualizations of KM frameworks and the impact of KM on organizations. The public sector, which is the research setting, also provides important lessons on how actors in a highly institutionalized context create an institution, in this case, a knowledge management practice.

Keywords: institutional work, knowledge management, case study, public sector organizations

Procedia PDF Downloads 117
454 Technology Changing Senior Care

Authors: John Kosmeh

Abstract:

Introduction – For years, senior health care and skilled nursing facilities have been plagued with the dilemma of not having the necessary tools and equipment to adequately care for senior residents in their communities. This has led to high transport rates to emergency departments and high 30-day readmission rates, costing billions of unnecessary dollars each year, as well as quality assurance issues. Our senior care telemedicine program is designed to solve this issue. Methods – We conducted a 1-year pilot program using our technology coupled with our 24/7 telemedicine program with skilled nursing facilities in different parts of the United States. We then compared transport rates and 30-day readmission rates to those of previous years before the use of our program, as well as transport rates of other communities of similar size not using our program. These data gave us a clear and concise look at the program's success in reducing unnecessary transports and readmissions, as well as the resulting cost savings. Results – A 94% reduction nationally of unnecessary out-of-facility transports, and to date, complete elimination of 30-day readmissions. Our virtual platform allowed us to instruct facility staff on the utilization of our tools and system as well as deliver treatment by our ER-trained providers. Delays waiting for PCP callbacks were eliminated. We were able to obtain lung, heart, and abdominal ultrasound imaging, 12-lead EKG, blood labs, auscultated lung and heart sounds, and other diagnostic tests at the bedside within minutes, providing immediate care and allowing us to treat residents within the SNF. Our virtual capabilities allowed loved ones, family members, and others who had medical power of attorney to connect with us virtually at the time of the visit and speak directly with the medical provider, providing increased confidence in the decision to treat the resident in-house. 
The decline in transports and readmissions will greatly reduce governmental cost burdens, as well as fines imposed on SNF for high 30-day readmissions, reduce the cost of Medicare A readmissions, and significantly impact the number of patients visiting overcrowded ERs. Discussion – By utilizing our program, SNF can effectively reduce the number of unnecessary transports of residents, as well as create significant savings from loss of day rates, transportation costs, and high CMS fines. The cost saving is in the thousands monthly, but more importantly, these facilities can create a higher quality of life and medical care for residents by providing definitive care instantly with ER-trained personnel.

Keywords: senior care, long term care, telemedicine, technology, senior care communities

Procedia PDF Downloads 94
453 Downward Vertical Evacuation for Disabilities People from Tsunami Using Escape Bunker Technology

Authors: Febrian Tegar Wicaksana, Niqmatul Kurniati, Surya Nandika

Abstract:

Indonesia is one of the countries with the greatest number of disaster occurrences and threats because it is located not only at the junction of three tectonic plates, the Eurasian, Indo-Australian, and Pacific plates, but also on the Ring of Fire path; it therefore faces earthquakes, tsunamis, volcanic eruptions, and more. Recent research shows that there are areas on the southern coast of Java at risk of being devastated by a tsunami. A tsunami is a series of waves in a body of water caused by the displacement of a large volume of water, generally in an ocean. When the waves enter shallow water, they may rise to several feet or, in rare cases, tens of feet, striking the coast with devastating force. Reference parameters include the magnitude, the depth of the epicentre, the distance between the epicentre and land, the water depth at each point, the arrival time at the shore, and the growth of the waves. Interactions among these parameters produce great variance in the tsunami wave. Based on these, we can formulate the preparations needed for disaster mitigation strategies. Mitigation strategies play an important role in reducing the number of victims and the damage in an affected area. Reduction efforts are directed at the casualties who are most difficult to mobilize in a tsunami disaster area: the elderly, the sick, and people with disabilities. Until now, the method used for rescuing people from a tsunami has been basic horizontal evacuation. This evacuation system is not optimal because it takes a long time and cannot be used by people with disabilities. The writers propose a vertical evacuation model with an escape bunker system. A bunker system is chosen because downward vertical evacuation is considered more efficient and faster, especially in coastal areas without any surrounding highlands. 
A downward evacuation system is better than upward evacuation because it avoids the risk of erosion of the ground around the structure, which can affect the building. The structure of the bunker and the evacuation process during, and even after, the disaster are the main priorities to be considered. The bunker must provide earthquake resistance, durability against the water stream, tolerance of varied ground interactions, and a waterproof design. When the situation returns to normal, the evacuees can move to a safer place. The bunker will be located near hospitals and public places and will have a wide entrance with a large slide inside to ease access for people with disabilities. The escape bunker technology is expected to reduce the number of victims with low mobility in a tsunami.
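Two textbook long-wave relations illustrate how the listed parameters shape a tsunami as it approaches shore: the shallow-water phase speed c = √(g·h) and Green's law for amplitude growth in shoaling water. These are standard approximations, not formulas from the abstract, and the depths and offshore amplitude below are purely illustrative:

```python
import math

def wave_speed(depth_m, g=9.81):
    """Shallow-water (long-wave) phase speed: c = sqrt(g * h)."""
    return math.sqrt(g * depth_m)

def greens_law_amplitude(a_offshore, h_offshore, h_nearshore):
    """Green's law: as depth shallows, amplitude grows as
    A2 = A1 * (h1 / h2) ** 0.25."""
    return a_offshore * (h_offshore / h_nearshore) ** 0.25

# Illustrative values: a wave crossing 4000 m deep ocean, shoaling to 10 m.
c_deep = wave_speed(4000.0)                         # ~198 m/s (~713 km/h)
c_shore = wave_speed(10.0)                          # ~9.9 m/s near shore
a_shore = greens_law_amplitude(0.5, 4000.0, 10.0)   # 0.5 m offshore -> ~2.2 m
```

The sharp drop in speed and growth in amplitude near shore is what makes arrival-time estimates, and hence the evacuation window, so sensitive to local bathymetry.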

Keywords: escape bunker, tsunami, vertical evacuation, mitigation, disaster management

Procedia PDF Downloads 492
452 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on the TI’s AWR 1843 BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of clustering techniques, we propose a 2-Level clustering approach. This approach builds on the state-of-the-art Density-based spatial clustering of applications with noise (DBSCAN). The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns. The subsequent level refines the analysis by considering the spatial distribution of these objects. The clusters obtained from the first level serve as input for the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of geometrical metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and utilized to classify objects using an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. 
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
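The two-level clustering described above, velocity first, then position within each velocity group, can be sketched with a toy single-link grouping standing in for DBSCAN. The field names, thresholds, and 1-D positions are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

def cluster_1d(values, eps):
    """Single-link grouping in 1-D: indices whose sorted values differ
    by more than eps start a new cluster."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    label = 0
    for prev, cur in zip(order, order[1:]):
        if values[cur] - values[prev] > eps:
            label += 1
        labels[cur] = label
    return labels

def two_level_cluster(detections, v_eps=0.5, p_eps=2.0):
    """Level 1: cluster detections by radial velocity. Level 2: re-cluster
    each velocity group by position. Returns a (level1, level2) label pair
    per detection."""
    l1 = cluster_1d([d["v"] for d in detections], v_eps)
    groups = defaultdict(list)
    for i, lab in enumerate(l1):
        groups[lab].append(i)
    labels = [None] * len(detections)
    for lab, idxs in groups.items():
        # 1-D position used here for brevity; real point clouds are 2-D/3-D.
        l2 = cluster_1d([detections[i]["x"] for i in idxs], p_eps)
        for i, sub in zip(idxs, l2):
            labels[i] = (lab, sub)
    return labels
```

The key design point carries over to the real DBSCAN-based system: the velocity pass cheaply separates moving objects from clutter, so the spatial pass only has to resolve objects that move alike.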

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 79
451 Unlocking Intergenerational Abortion Stories in Gardiennes by Fanny Cabon

Authors: Lou Gargouri

Abstract:

This paper examines how Fanny Cabon's solo performance, Gardiennes (2018), strategically crafts empathetic witnessing through the artist's vocal and physical embodiment of her female ancestors' testimonies, dramatizing the cyclical inheritance of reproductive trauma across generations. Drawing on affect theory and the concept of ethical co-presence, we argue that Cabon's raw voicing of illegal abortions, miscarriages, and abuse through her shape-shifting presence generates an intimate energy loop with the audience. This affective resonance catalyzes recognition of historical injustices, consecrating each singular experience while building collective solidarity. Central to Cabon's political efficacy is her transparent self-revelation through intimate impersonation, which fosters identification with diverse characters as interconnected subjects rather than objectified others. Her solo form transforms the isolation often associated with women's marginalization into radical inclusion, repositioning them from victims to empowered survivors. Comparative analysis with other contemporary works addressing abortion rights illuminates how Gardiennes subverts the traditional medical and clerical gazes that have long governed women's bodies. Ultimately, we contend Gardiennes models the potential of solo performance to harness empathy as a subversive political force. Cabon's theatrical alchemy circulates the effects of injustice through the ethical co-presence of performer and spectator, forging intersubjective connections that reframe marginalized groups traditionally objectified within dominant structures of patriarchal power. In dramatizing how the act of witnessing another's trauma can generate solidarity and galvanize resistance, Cabon's work demonstrates the role of embodied performance in catalyzing social change through the recuperation of women's voices and lived experiences. 
This paper thus aims to contribute to the emerging field of feminist solo performance criticism by illuminating how Cabon's innovative dramaturgy bridges the personal and the political. Her strategic mobilization of intimacy, identification, and co-presence offers a model for how the affective dynamics of autobiographical performance can be harnessed to confront gendered oppression and imagine more equitable futures. Gardiennes invites us to consider how the circulation of empathy through ethical spectatorship can foster the collective alliances necessary for advancing the unfinished project of women's liberation.

Keywords: gender and sexuality studies, solo performance, trauma studies, affect theory

Procedia PDF Downloads 65
450 Flow-Induced Vibration Marine Current Energy Harvesting Using a Symmetrical Balanced Pair of Pivoted Cylinders

Authors: Brad Stappenbelt

Abstract:

The phenomenon of vortex-induced vibration (VIV) for elastically restrained cylindrical structures in cross-flows is relatively well investigated. The utility of this mechanism in harvesting energy from marine current and tidal flows is, however, arguably still in its infancy. With relatively few moving components, a flow-induced vibration-based energy conversion device promises low complexity compared to the commonly employed turbine design. Despite the interest in this concept, a practical device has yet to emerge. It is desirable for optimal system performance to design for a very low mass or mass moment of inertia ratio. The device operating range, in particular, is maximized below the vortex-induced vibration critical point where an infinite resonant response region is realized. An unfortunate consequence of this requirement is large buoyancy forces that need to be mitigated by gravity-based, suction-caisson or anchor mooring systems. The focus of this paper is the testing of a novel VIV marine current energy harvesting configuration that utilizes a symmetrical and balanced pair of horizontal pivoted cylinders. The results of several years of experimental investigation, utilizing the University of Wollongong fluid mechanics laboratory towing tank, are analyzed and presented. A reduced velocity test range of 0 to 60 was covered across a large array of device configurations. In particular, power take-off damping ratios spanning from 0.044 to critical damping were examined in order to determine the optimal conditions and hence the maximum device energy conversion efficiency. The experiments conducted revealed acceptable energy conversion efficiencies of around 16% and desirable low flow-speed operating ranges when compared to traditional turbine technology. 
The potentially out-of-phase spanwise VIV cells on each arm of the device synchronized naturally as no decrease in amplitude response and comparable energy conversion efficiencies to the single cylinder arrangement were observed. In addition to the spatial design benefits related to the horizontal device orientation, the main advantage demonstrated by the current symmetrical horizontal configuration is to allow large velocity range resonant response conditions without the excessive buoyancy. The novel configuration proposed shows clear promise in overcoming many of the practical implementation issues related to flow-induced vibration marine current energy harvesting.
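The quoted 16% conversion efficiency can be framed as the mean power absorbed by a linear power take-off damper divided by the kinetic power flux through the cylinder's frontal area. The sketch below uses this standard definition; the damping coefficient, amplitude, frequency, and flow speed are illustrative assumptions, not values from the study:

```python
import math

def harvested_power(c, amp, freq_hz):
    """Mean power absorbed by a linear damper (F = c * dx/dt) for sinusoidal
    motion x(t) = A sin(w t): P = 0.5 * c * A**2 * w**2."""
    w = 2.0 * math.pi * freq_hz
    return 0.5 * c * amp**2 * w**2

def flow_power(rho, u, diameter, length):
    """Kinetic power flux through the frontal area: 0.5 * rho * U**3 * D * L."""
    return 0.5 * rho * u**3 * diameter * length

def efficiency(c, amp, freq_hz, rho, u, diameter, length):
    return harvested_power(c, amp, freq_hz) / flow_power(rho, u, diameter, length)

# Illustrative numbers: 0.1 m cylinder, 1 m span, 0.8 m/s current.
eta = efficiency(c=40.0, amp=0.06, freq_hz=1.2,
                 rho=1000.0, u=0.8, diameter=0.1, length=1.0)  # ~0.16
```

The cubic dependence of available power on flow speed is why the low flow-speed operating range noted above matters: a resonant VIV device can keep extracting useful power where a turbine's output has collapsed.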

Keywords: flow-induced vibration, vortex-induced vibration, energy harvesting, tidal energy

Procedia PDF Downloads 146
449 Supply Chain Analysis with Product Returns: Pricing and Quality Decisions

Authors: Mingming Leng

Abstract:

Wal-Mart has allocated considerable human resources for its quality assurance program, in which the largest retailer serves its supply chains as a quality gatekeeper. Asda Stores Ltd., the second largest supermarket chain in Britain, is now investing £27m in significantly increasing the frequency of quality control checks in its supply chains and thus enhancing quality across its fresh food business. Moreover, Tesco, the largest British supermarket chain, already constructed a quality assessment center to carry out its gatekeeping responsibility. Motivated by the above practices, we consider a supply chain in which a retailer plays the gatekeeping role in quality assurance by identifying defects among a manufacturer's products prior to selling them to consumers. The impact of a retailer's gatekeeping activity on pricing and quality assurance in a supply chain has not been investigated in the operations management area. We draw a number of managerial insights that are expected to help practitioners judiciously consider the quality gatekeeping effort at the retail level. As in practice, when the retailer identifies a defective product, she immediately returns it to the manufacturer, who then replaces the defect with a good quality product and pays a penalty to the retailer. If the retailer does not recognize a defect but sells it to a consumer, then the consumer will identify the defect and return it to the retailer, who then passes the returned 'unidentified' defect to the manufacturer. The manufacturer also incurs a penalty cost. Accordingly, we analyze a two-stage pricing and quality decision problem, in which the manufacturer and the retailer bargain over the manufacturer's average defective rate and wholesale price at the first stage, and the retailer decides on her optimal retail price and gatekeeping intensity at the second stage. We also compare the results when the retailer performs quality gatekeeping with those when the retailer does not. 
Our supply chain analysis exposes some important managerial insights. For example, the retailer's quality gatekeeping can effectively reduce the channel-wide defective rate, if her penalty charge for each identified defect is larger than or equal to the market penalty for each unidentified defect. When the retailer implements quality gatekeeping, the change in the negotiated wholesale price only depends on the manufacturer's 'individual' benefit, and the change in the retailer's optimal retail price is only related to the channel-wide benefit. The retailer is willing to take on the quality gatekeeping responsibility, when the impact of quality relative to retail price on demand is high and/or the retailer has a strong bargaining power. We conclude that the retailer's quality gatekeeping can help reduce the defective rate for consumers, which becomes more significant when the retailer's bargaining position in her supply chain is stronger. Retailers with stronger bargaining powers can benefit more from their quality gatekeeping in supply chains.
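The backward-induction structure of the two-stage problem (retail decisions at stage two, anticipated at stage one) can be illustrated with a deliberately simplified model: linear demand and a take-it-or-leave-it wholesale price standing in for the bargaining stage, with quality and gatekeeping stripped out. All functional forms and numbers are assumptions for illustration, not the paper's model:

```python
def retail_best_response(w, a=100.0, b=2.0):
    """Stage 2: retailer maximizes (p - w) * (a - b*p);
    the first-order condition gives p* = (a + b*w) / (2*b)."""
    return (a + b * w) / (2.0 * b)

def manufacturer_best_wholesale(cost, a=100.0, b=2.0):
    """Stage 1: anticipating p*(w), the manufacturer maximizes
    (w - cost) * (a - b * p*(w)); the FOC gives w* = (a/b + cost) / 2."""
    return (a / b + cost) / 2.0

c = 10.0                                     # manufacturer's unit cost
w_star = manufacturer_best_wholesale(c)      # 30.0
p_star = retail_best_response(w_star)        # 40.0
demand = 100.0 - 2.0 * p_star                # 20.0 units sold
```

The paper's full model replaces the stage-1 monopoly pricing with Nash bargaining over both the wholesale price and the defective rate, but the solve-stage-two-first logic is the same.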

Keywords: bargaining, game theory, pricing, quality, supply chain

Procedia PDF Downloads 277
448 Stability and Rheology of Sodium Diclofenac-Loaded and Unloaded Palm Kernel Oil Esters Nanoemulsion Systems

Authors: Malahat Rezaee, Mahiran Basri, Raja Noor Zaliha Raja Abdul Rahman, Abu Bakar Salleh

Abstract:

Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe conditions of inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that have been progressively considered in pharmaceutical science for transdermal delivery of drugs. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid, usually the oil phase, in another immiscible liquid, the water phase, stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, which are suitable for application in micro- and nanoemulsion systems as a carrier for actives, with excellent wetting behavior and without an oily feeling. This research aimed to study the effect of the O/S ratio on the stability and rheological behavior of sodium diclofenac-loaded and unloaded palm kernel oil esters nanoemulsion systems. The effect of different O/S ratios of 0.25, 0.50, 0.75, 1.00, and 1.25 on the stability of the drug-loaded and unloaded nanoemulsion formulations was evaluated by centrifugation, freeze-thaw cycle, and storage stability tests. Lecithin and Cremophor EL were used as surfactants. The stability of the prepared nanoemulsion formulations was assessed based on the change in zeta potential and droplet size as a function of time. Instability mechanisms, including coalescence and Ostwald ripening, for the nanoemulsion system are discussed. In comparison between drug-loaded and unloaded nanoemulsion formulations, the drug-loaded formulations exhibited smaller particle sizes and higher stability. In addition, the O/S ratio of 0.5 was found to be the best ratio of oil to surfactant for production of a nanoemulsion with the highest stability. 
The effect of the O/S ratio on the rheological properties of drug-loaded and unloaded nanoemulsion systems was studied by plotting flow curves of shear stress (τ) and viscosity (η) as a function of shear rate (γ). The data were fitted to the Power Law model. The results showed that all nanoemulsion formulations exhibited non-Newtonian, shear-thinning flow behaviour. Viscosity and yield stress were also evaluated. The nanoemulsion formulation with the O/S ratio of 0.5 exhibited higher viscosity and K values. In addition, the sodium diclofenac-loaded formulations had higher viscosity and higher yield stress than the drug-unloaded formulations.
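The Power Law (Ostwald-de Waele) fit mentioned above takes the form tau = K * gamma**n, where K is the consistency index and n < 1 indicates shear thinning; fitting reduces to linear regression in log-log space. A minimal sketch on synthetic data (the K and n values are illustrative, not the study's):

```python
import math

def fit_power_law(shear_rates, shear_stresses):
    """Fit tau = K * gamma**n by least squares in log-log space:
    log(tau) = log(K) + n * log(gamma)."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in shear_stresses]
    m = len(xs)
    x_bar = sum(xs) / m
    y_bar = sum(ys) / m
    n = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    K = math.exp(y_bar - n * x_bar)
    return K, n

# Synthetic shear-thinning data: K = 2.0 Pa.s^n, n = 0.6 (n < 1).
gammas = [1.0, 5.0, 10.0, 50.0, 100.0]
taus = [2.0 * g ** 0.6 for g in gammas]
K, n = fit_power_law(gammas, taus)   # recovers K ~ 2.0, n ~ 0.6
```

In practice the fitted n quantifies how strongly shear-thinning each formulation is, while K tracks the viscosity differences reported between the drug-loaded and unloaded systems.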

Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, rheology, stability

Procedia PDF Downloads 423
447 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems; as a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law, and for the purpose of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithms, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, namely China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. 
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to part of the inventive concept. However, when machine learning or the AI algorithm has contributed to part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
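As a concrete illustration of the 'genetic breeding' class of models discussed above, the sketch below evolves bit-strings toward a fitness function via selection, crossover, and mutation. The OneMax fitness (count of 1-bits) and all hyperparameters are toy assumptions; a real application would encode a candidate design or program in the genome instead:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=30, generations=60,
                   p_mut=0.02, seed=1):
    """A minimal genetic 'breeding' loop: binary tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def pick():
        # Binary tournament: the fitter of two random individuals survives.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - bit if rng.random() < p_mut else bit
                     for bit in child]                  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_search(sum)   # OneMax: fitness is the number of 1-bits
```

The legal question raised in the abstract maps directly onto this loop: the human specifies only the fitness function and the breeding parameters, while the solution itself emerges from the algorithm, which is precisely what complicates attribution of the inventive concept.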

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 108