Search results for: CBS domain
213 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK
Authors: Esther Ekundayo
Abstract:
The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in a work's creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances the AI, to the user of the AI who most likely ends up creating the work in question. Others have suggested that this issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). However, under UK and Nigerian copyright law, only human-created works are recognised. This is usually assessed based on their originality, which simply means that the work must have been created as a result of its author's creative and intellectual abilities and not copied. Such works are literary, dramatic, musical and artistic works, and these have recently been a topic of discussion with regard to generative artificial intelligence (Generative AI). Unlike Nigeria, the UK CDPA recognises computer-generated works and vests their authorship in the human who made the necessary arrangements for their creation. However, 'making the necessary arrangements' was interpreted in Nova Productions Ltd v Mazooma Games Ltd similarly to the traditional authorship principle, which requires the skill of the creator to prove originality. Some argue that computer-generated works complicate this issue and recommend that AI-generated works enter the public domain, as authorship cannot be allocated to the AI itself. Additionally, the UKIPO, recognising these issues in line with the growing AI trend, launched a public consultation in 2022 that considered whether computer-generated works should be protected at all and why, and if not, whether a new right with a different scope and term of protection should be introduced. It concluded, however, that the issue of computer-generated works would be revisited, as AI was still in its early stages. Given the recent developments in this area with regard to Generative AI systems such as ChatGPT, Midjourney, DALL-E and AIVA, amongst others, which can produce human-like copyright creations, it is important to examine the relevant issues, which have the potential to alter traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions but with slightly differing approaches to this area, this research seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK's CGW provision worthy of emulation by Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors in both jurisdictions? This research further examines the possible barriers to the implementation of a new law in Nigeria, such as limited technical expertise and lack of awareness among policymakers, amongst others.
Keywords: authorship, artificial intelligence (AI), generative AI, computer-generated works, copyright, technology
Procedia PDF Downloads 96
212 Quantifying the Aspect of ‘Imagining’ in the Map of Dialogical Inquiry
Authors: Chua Si Wen Alicia, Marcus Goh Tian Xi, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
In a world full of rapid changes, people often need a set of skills to help them navigate an ever-changing workscape. These skills, often known as 'future-oriented skills', include learning to learn, critical thinking, understanding multiple perspectives, and knowledge creation. Future-oriented skills are typically assumed to be domain-general, applicable to multiple domains, and can be cultivated through a learning approach called Dialogical Inquiry. Dialogical Inquiry is known for its benefits in making sense of multiple perspectives, encouraging critical thinking, and developing learners' capability to learn. However, it currently exists as a qualitative tool, which makes it hard to track and compare learning processes over time. With these concerns, the present research aimed to develop and validate a quantitative tool for the Map of Dialogical Inquiry, focusing on the Imagining aspect of learning. The Imagining aspect has four dimensions: 1) speculative/look for alternatives, 2) risk taking/break rules, 3) create/design, and 4) vision/imagine. To do so, an exploratory literature review was conducted to better understand the dimensions of Imagining. This included deep-diving into the history of the creation of the Map of Dialogical Inquiry and a review of how 'Imagining' has been conceptually defined in the fields of social psychology, education, and beyond. Then, we synthesised and validated scales measuring the dimensions of Imagining and related concepts like creativity, divergent thinking, regulatory focus, and instrumental risk. Thereafter, items were adapted from the aforementioned scales to form the preliminary version of the Imagining Scale. For scale validation, 250 participants were recruited. A Confirmatory Factor Analysis (CFA) sought to establish the dimensionality of the Imagining Scale with an iterative item-removal procedure. Reliability and validity of the scale's dimensions were assessed through Cronbach's alpha, convergent validity, and discriminant validity. While the CFA could not validate the distinction between Imagining's four dimensions, the scale established high reliability, with a Cronbach's alpha of .96. In addition, the convergent validity of the Imagining Scale was established. A lack of strong discriminant validity may point to overlaps with other components of the Dialogical Map as a measure of learning. Thus, a holistic approach to forming the tool – encompassing all eight components – may be preferable.
Keywords: learning, education, imagining, pedagogy, dialogical teaching
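For readers unfamiliar with the reliability statistic reported above, the following is a minimal sketch of how Cronbach's alpha is computed from item-level responses; the item count and data are illustrative assumptions, not the study's actual materials.

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    where items is an (n_respondents, n_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: 250 respondents (as in the study), 20 hypothetical items
rng = np.random.default_rng(0)
latent = rng.normal(size=(250, 1))
responses = latent + 0.5 * rng.normal(size=(250, 20))  # items share a common factor
print(f"alpha = {cronbach_alpha(responses):.2f}")
```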
Procedia PDF Downloads 92
211 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
In recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, Machine Learning models are developed to identify engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
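As a rough illustration of the distribution-matching idea described above, the sketch below contrasts classical WoE computed from good/bad counts with a smoothed estimate derived from an ML model's predicted probabilities per bin. The function names, the log-odds centring, and the convention that y=1 denotes a "bad" outcome are assumptions, not Dun and Bradstreet's actual implementation.

```python
import numpy as np
import pandas as pd

def classical_woe(y, bins):
    # WoE per bin from raw good/bad counts (can be unstable for sparse bins);
    # y = 1 marks a "bad" outcome, bins is a bin label per observation.
    df = pd.DataFrame({"y": y, "bin": bins})
    g = df.groupby("bin")["y"]
    bad = g.sum()
    good = g.count() - bad
    return np.log((good / good.sum()) / (bad / bad.sum()))

def ml_matched_woe(scores, bins, base_rate):
    # Approximate WoE by matching the ML score distribution in each bin:
    # convert the bin's mean predicted bad-probability to good-vs-bad
    # log-odds and centre it on the portfolio base rate, giving a
    # smoothed per-bin WoE estimate even when raw counts are sparse.
    df = pd.DataFrame({"p": scores, "bin": bins})
    mean_p = df.groupby("bin")["p"].mean().clip(1e-6, 1 - 1e-6)
    bin_logodds = np.log((1 - mean_p) / mean_p)
    base_logodds = np.log((1 - base_rate) / base_rate)
    return bin_logodds - base_logodds
```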
Procedia PDF Downloads 133
210 Identification, Synthesis, and Biological Evaluation of the Major Human Metabolite of NLRP3 Inflammasome Inhibitor MCC950
Authors: Manohar Salla, Mark S. Butler, Ruby Pelingon, Geraldine Kaeslin, Daniel E. Croker, Janet C. Reid, Jong Min Baek, Paul V. Bernhardt, Elizabeth M. J. Gillam, Matthew A. Cooper, Avril A. B. Robertson
Abstract:
MCC950 is a potent and selective inhibitor of the NOD-like receptor pyrin domain-containing protein 3 (NLRP3) inflammasome that shows early promise for the treatment of inflammatory diseases. The identification of the major metabolites of a lead molecule is an important step in the drug development process: it provides information about the metabolically labile sites in the molecule, thereby helping medicinal chemists design metabolically stable molecules. To identify the major metabolites of MCC950, the compound was incubated with human liver microsomes, and subsequent analysis by (+)- and (−)-QTOF-ESI-MS/MS revealed a major metabolite formed by hydroxylation of the 1,2,3,5,6,7-hexahydro-s-indacene moiety of MCC950. This major metabolite can lose two water molecules, and three possible regioisomers were synthesized. Co-elution of the major metabolite with each of the synthesized compounds using HPLC-ESI-SRM-MS/MS revealed the structure of the metabolite as (±)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. Subsequent synthesis of the individual enantiomers and co-elution in HPLC-ESI-SRM-MS/MS using a chiral column revealed the metabolite to be R-(+)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. To study the cytochrome P450 enzyme(s) responsible for the formation of the major metabolite, MCC950 was incubated with a panel of cytochrome P450 enzymes. The results indicated that CYP1A2, CYP2A6, CYP2B6, CYP2C9, CYP2C18, CYP2C19, CYP2J2 and CYP3A4 are most likely responsible for the formation of the major metabolite. The biological activity of the major metabolite and the other synthesized regioisomers was also investigated by screening for NLRP3 inflammasome inhibitory activity and cytotoxicity. The major metabolite had 170-fold lower inhibitory activity (IC50 = 1238 nM) than MCC950 (IC50 = 7.5 nM). Interestingly, one regioisomer showed nanomolar inhibitory activity (IC50 = 232 nM). However, no evidence of cytotoxicity was observed with any of the synthesized compounds when tested in human embryonic kidney 293 cells (HEK293) and human liver hepatocellular carcinoma G2 cells (HepG2). These key findings give insight into the SAR of the hexahydroindacene moiety of MCC950 and reveal a metabolic soft spot which could be blocked by chemical modification.
Keywords: cytochrome P450, inflammasome, MCC950, metabolite, microsome, NLRP3
Procedia PDF Downloads 252
209 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms
Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy
Abstract:
Technology adoption models are extensively used in the literature to explore the drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Further studies from recent years have suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries. This is a result of the unique characteristics of the auditing process, which are expressed in the audit risk elements and the risk-based auditing approach, as encoded in the auditing standards. Since these audit risk factors are not part of the existing models used to explain technology adoption, those models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill the gap in the literature that exists as a result of using generic technology adoption models. Following a pretest and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study aims to reveal determinants related to audit risk factors that influence the adoption of CAATTs in audits, and it proposes a new modeling approach for the successful adoption of CAATTs. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines to assess the audit risk components, other CPA firms do not follow a formal and validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools, whereas small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up to better understand what CAATTs can offer and how they can contribute to the quality of the audit; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs' capabilities and contributions to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends the existing knowledge of CAATTs adoption by looking at it from a risk-based auditing approach. It suggests a new model for CAATTs adoption by incorporating the influencing audit risk factors that auditors should examine when considering CAATTs adoption. Since the model can be used in various audit scenarios and supports strategic, risk-based decisions, it maximizes the considerable potential of CAATTs to improve the quality of audits. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers and regulators. Moreover, they may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits.
Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models
Procedia PDF Downloads 67
208 Facilitating Knowledge Transfer for New Product Development in Portfolio Entrepreneurship: A Case Study of a Sodium-Ion Battery Start-up in China
Authors: Guohong Wang, Hao Huang, Rui Xing, Liyan Tang, Yu Wang
Abstract:
Start-ups are consistently under pressure to overcome the liabilities of newness and smallness. They must focus on assembling resources and engaging in constant renewal and repeated entrepreneurial activities to survive and grow. As an important form of resource, knowledge is vital to start-ups, helping them develop new products and hence form competitive advantages. However, significant knowledge usually needs to be identified and exploited from external entities, which makes knowledge transfer difficult to achieve; with limited resources, balancing the exploration and exploitation of knowledge can be quite challenging for start-ups. Research on knowledge transfer has become a relatively well-developed domain, indicating that knowledge transfer can be achieved through many patterns, yet it remains under-explored what processes and organizational practices help start-ups facilitate knowledge transfer for new products in the context of portfolio entrepreneurship. Resource orchestration theory emphasizes the initiative and active management of the company or the manager to explain how resource utility is realized, which helps in understanding the process of managing knowledge, as a certain kind of resource, in start-ups. Drawing on resource orchestration theory, this research aims to explore how knowledge transfer can be facilitated through resource orchestration. A qualitative single-case study of a sodium-ion battery new venture was conducted. The case company was sampled deliberately from representative industrial agglomeration areas in Liaoning Province, China. It is found that distinctive resource orchestration sub-processes are leveraged to facilitate knowledge transfer: (i) resource structuring makes knowledge available across the portfolio; (ii) resource bundling combines internal and external knowledge to form new knowledge; and (iii) resource harmonizing balances specific knowledge configurations across the portfolio. Meanwhile, by purposefully reallocating knowledge configurations to new product development in a certain new venture (exploration) and gradually adjusting knowledge configurations to be applied to existing products across the portfolio (exploitation), the resource orchestration processes as a whole keep the exploration and exploitation of knowledge balanced. This study contributes to the knowledge management literature by proposing a resource orchestration view and depicting how knowledge transfer can be facilitated through different resource orchestration processes and mechanisms. In addition, by revealing the balancing process of exploration and exploitation of knowledge, and by stressing the significance of keeping exploration and exploitation of knowledge balanced in the context of portfolio entrepreneurship, this study also contributes to the entrepreneurship and strategic management literature.
Keywords: exploration and exploitation, knowledge transfer, new product development, portfolio entrepreneur, resource orchestration
Procedia PDF Downloads 125
207 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability for vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc. – collectively known as V2X (Vehicle to Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation attacks, modification, masquerading, replay, man-in-the-middle (MiM) attacks, and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol suffers from technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control plane signaling overloads, as well as privacy preservation issues, and thus cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways by which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements.
Keywords: authentication, long term evolution, security, vehicle-to-everything
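As a loose illustration of the symmetric-key challenge-response idea at the core of AKA (heavily simplified; the real 3GPP protocol additionally uses sequence numbers, AUTN tokens, and key derivation functions, none of which are modeled here), consider this sketch:

```python
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # stands in for the long-term key K pre-shared by UE and network

def network_challenge():
    # Network side: issue a random challenge and precompute the expected response
    rand = os.urandom(16)
    xres = hmac.new(SHARED_KEY, rand, hashlib.sha256).digest()
    return rand, xres

def ue_response(rand):
    # UE side: prove knowledge of K by computing RES over the challenge
    return hmac.new(SHARED_KEY, rand, hashlib.sha256).digest()

rand, xres = network_challenge()
res = ue_response(rand)
assert hmac.compare_digest(res, xres)  # authentication succeeds only if both hold the same key
```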
Procedia PDF Downloads 167
206 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator
Authors: Siva K. Bathina, Sudheer Siddapureddy
Abstract:
Pool fires form when a flammable liquid accidentally spills on the ground or water and ignites. A pool fire is a buoyancy-driven diffusion flame. Many pool fire accidents have occurred during the processing, handling and storage of liquid fuels in the chemical and oil industries, causing enormous damage to property as well as loss of life. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large complex fires involves fire safety issues and difficulties in performing experiments. In the present work, large eddy simulations are performed to study such complex fire scenarios using the Fire Dynamics Simulator (FDS). A 1 m diesel pool fire is considered for the studied cases, diesel being the fuel most commonly involved in fire accidents. Fire simulations are performed with two different boundary conditions: in one, the fuel is in the liquid state and the pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm grid size, is selected. Flame properties like mass burning rate, irradiance, and the time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s and is within the uncertainty limits of previously reported experimental data (39.4 g/s). The profile of the irradiance at a distance from the fire along the height is somewhat in line with the experimental data, though the location of the maximum irradiance is shifted upwards. This may be due to the lack of sophisticated models for species transport along with combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well in any of the zones for either boundary condition. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel. The predictions depend strongly on the experimental values of the soot yield. Future experiments are necessary for generalizing the soot yield for different fires.
Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis
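The non-dimensional grid size mentioned above is conventionally the ratio D*/δx, where D* is the characteristic fire diameter used in FDS grid-resolution guidance. A minimal sketch of that calculation follows; the ambient properties and a diesel heat of combustion of roughly 44.5 MJ/kg are illustrative assumptions, not the paper's exact inputs.

```python
import math

RHO_INF = 1.204   # kg/m^3, ambient air density
CP_INF = 1.005    # kJ/(kg*K), ambient specific heat
T_INF = 293.0     # K, ambient temperature
G = 9.81          # m/s^2
DH_C = 44_500.0   # kJ/kg, approximate heat of combustion of diesel (assumed)

def characteristic_fire_diameter(q_kw):
    """D* = (Q / (rho_inf * cp * T_inf * sqrt(g)))**(2/5)."""
    return (q_kw / (RHO_INF * CP_INF * T_INF * math.sqrt(G))) ** 0.4

m_dot = 0.040                        # kg/s, steady mass burning rate (~40 g/s)
q = m_dot * DH_C                     # total heat release rate, kW
d_star = characteristic_fire_diameter(q)
dx = 0.08                            # m, grid size used in the study
print(f"D* = {d_star:.2f} m, D*/dx = {d_star / dx:.0f} cells across D*")
```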
Procedia PDF Downloads 196
205 Collaborative Governance in Dutch Flood Risk Management: An Historical Analysis
Authors: Emma Avoyan
Abstract:
The safety standards for flood protection in the Netherlands have been revised recently. It is expected that all major flood-protection structures will have to be reinforced to meet the new standards. The Dutch Flood Protection Programme aims to accomplish this task through innovative integrated projects such as the construction of multi-functional flood defenses. In these projects, flood safety purposes will be combined with spatial planning, nature development, emergency management or other sectoral objectives. Implementation of dike reinforcement projects therefore requires early involvement of, and collaboration between, the public and private sectors and different governmental actors and agencies. The development and implementation of such integrated projects has long been an issue in Dutch flood risk management. This article therefore analyses how cross-sector collaboration within flood risk governance in the Netherlands has evolved over time, and how this development can be explained. The integrative framework for collaborative governance is applied as an analytical tool to map the external factors framing possibilities as well as constraints for cross-sector collaboration in the Dutch flood risk domain. Supported by an extensive document and literature analysis, the paper offers insights into how the system context and different drivers, changing over time, either promoted or hindered cross-sector collaboration between the flood protection sector, urban development, nature conservation and other sectors involved in flood risk governance. The system context refers to the multi-layered and interrelated suite of conditions that influence the formation and performance of complex governance systems, such as collaborative governance regimes, whereas the drivers initiate and enable the overall process of collaboration. In addition, by applying a method of process tracing, we identify a causal and chronological chain of events shaping cross-sectoral interaction in Dutch flood risk management. Our results indicate that in order to evaluate the performance of complex governance systems, it is important to first study the system context that shapes them. A clear understanding of the system conditions and drivers for collaboration gives insight into the possibilities of, and constraints for, the effective performance of complex governance systems. The performance of the governance system is affected by the system conditions, while at the same time the governance system can also change the system conditions. Our results show that the sequence of changes within the system conditions and drivers over time affects how cross-sector interaction in the Dutch flood risk governance system happens now. Moreover, we have traced the potential of this governance system to shape and change the system context.
Keywords: collaborative governance, cross-sector interaction, flood risk management, the Netherlands
Procedia PDF Downloads 130
204 Gait Analysis in Total Knee Arthroplasty
Authors: Neeraj Vij, Christian Leber, Kenneth Schmidt
Abstract:
Introduction: Total knee arthroplasty is a common procedure. It is well known that the biomechanics of the knee do not fully return to their normal state afterwards. Motion analysis has been used to study the biomechanics of the knee after total knee arthroplasty. The purpose of this scoping review is to summarize the current use of gait analysis in total knee arthroplasty and to identify the preoperative motion analysis parameters for which a systematic review aimed at determining reliability and validity may be warranted. Materials and Methods: This IRB-exempt scoping review strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Five search engines were searched for a total of 279 articles. Articles underwent a title and abstract screening process followed by full-text screening. Included articles were placed in the following sections: gait analysis as a research tool for operative decisions, other research applications for motion analysis in total knee arthroplasty, gait analysis as a tool for predicting radiologic outcomes, and gait analysis as a tool for predicting clinical outcomes. Results: Eleven articles studied gait analysis as a research tool for operative decisions; motion analysis is currently used to study surgical approaches, surgical techniques, and implant choice. Five articles studied other research applications for motion analysis in total knee arthroplasty, including the role of unicompartmental knee arthroplasty and novel physical therapy protocols aimed at optimizing post-operative care. Two articles studied motion analysis as a tool for predicting radiographic outcomes: preoperative gait analysis has identified parameters that can predict postoperative tibial component migration. Fifteen articles studied motion analysis in conjunction with clinical scores. Conclusions: There is a broad range of applications within the research domain of total knee arthroplasty, and the potential range of applications is likely larger. However, the current literature is limited by vague definitions of 'gait analysis' or 'motion analysis' and a limited number of articles with preoperative and postoperative functional and clinical measures. Knee adduction moment, knee adduction impulse, total knee range of motion, varus angle, cadence, stride length, and velocity have the potential for integration into composite clinical scores. A systematic review aimed at determining the validity, reliability, sensitivities, and specificities of these variables is warranted.
Keywords: motion analysis, joint replacement, patient-reported outcomes, knee surgery
Procedia PDF Downloads 93
203 Efficacy and Safety of COVID-19 Vaccination in Patients with Multiple Sclerosis: Looking Forward to Post-COVID-19
Authors: Achiron Anat, Mathilda Mandel, Mayust Sue, Achiron Reuven, Gurevich Michael
Abstract:
Introduction: As coronavirus disease 2019 (COVID-19) vaccination spreads around the world, it is important to assess the ability of multiple sclerosis (MS) patients to mount an appropriate immune response to the vaccine in the context of disease-modifying treatments (DMTs). Objectives: To evaluate the immunity generated following COVID-19 vaccination in MS patients and to assess the factors contributing to protective humoral and cellular immune responses in MS patients vaccinated against severe acute respiratory syndrome coronavirus 2 (SARS-CoV2) infection. Methods: We review our recent data related to (1) the safety of the Pfizer BNT162b2 COVID-19 mRNA vaccine in adult MS patients; (2) the humoral post-vaccination SARS-CoV2 IgG response in MS vaccinees using anti-spike protein-based serology; (3) the cellular immune response of memory B-cells specific for the SARS-CoV2 receptor-binding domain (RBD) and memory T-cells secreting IFN-g and/or IL-2 in response to SARS-CoV2 peptides, using ELISpot/Fluorospot assays, in MS patients either untreated or under treatment with fingolimod, cladribine, or ocrelizumab; and (4) covariate parameters related to mounting protective immune responses. Results: The COVID-19 vaccine proved safe in MS patients, with an adverse event profile mainly characterised by pain at the injection site, fatigue, and headache. No increased risk of relapse activity was noted, and the rate of patients with acute relapse was comparable to the relapse rate in non-vaccinated patients during the corresponding follow-up period. A mild increase in the rate of adverse events was noted in younger MS patients, among patients with lower disability, and in patients treated with DMTs. Following COVID-19 vaccination, the protective humoral immune response was significantly decreased in fingolimod- and ocrelizumab-treated MS patients, and SARS-CoV2-specific B-cell and T-cell responses were correspondingly decreased. Untreated MS patients and patients treated with cladribine demonstrated protective humoral and cellular immune responses, similar to healthy vaccinated subjects. Conclusions: The COVID-19 BNT162b2 vaccine proved safe for MS patients, and no increased risk of relapse activity was noted post-vaccination. Although COVID-19 vaccination is new, the accumulated data demonstrate differences in immune responses under various DMTs. This knowledge can help construct appropriate COVID-19 vaccine guidelines to ensure proper immune responses in MS patients.
Keywords: COVID-19, vaccination, multiple sclerosis, IgG
Procedia PDF Downloads 139
202 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method
Authors: Guoyu Wang, Meilian Zhang, Chunhui LI, Bing Ren
Abstract:
In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms, and floating airports. Floating structures commonly suffer loadings under waves, and the responses of structures mounted in marine environments are strongly related to the wave impacts. The interaction between surface waves and floating structures is one of the important issues in ship and marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the Navier-Stokes (NS) equations in the time domain have been developed to explore this problem, such as the finite difference method and the finite volume method. Those traditional numerical simulation techniques for moving bodies are grid-based and may encounter difficulties when treating large free-surface deformations and moving boundaries. In these models, the moving structures in a Lagrangian formulation need to be appropriately described on grids, and special treatment of the moving boundary is inevitable. Moreover, in mesh-based models, the movement of the grid near the structure, or the communication between the moving Lagrangian structure and the Eulerian meshes, increases the algorithmic complexity. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially for the coupling of fluid and a moving rigid body. An equivalent momentum transfer method is proposed and derived for this coupling. The structure is discretized into a group of solid particles, which are treated as fluid particles and included in solving the NS equations together with the surrounding fluid particles. Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are numerically studied, and the wave surface elevation and the dynamic response of the floating body are presented. There is good agreement when the numerical results, such as the sway, heave, and roll of the floating body, are compared with experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
Keywords: floating body, fluid structure interaction, MPS, particle method, waves
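A minimal two-dimensional sketch of the coupling step described above, under the assumption that the solid particles have just been advanced as if they were fluid and their momentum is then aggregated into a rigid-body motion; the function name, the 2-D simplification, and the explicit position update are illustrative, not the paper's derivation.

```python
import numpy as np

def rigid_body_update(pos, vel, mass, dt):
    """Aggregate the solid particles' momentum into one translational and one
    angular velocity, then overwrite their motion with the rigid-body field
    so the structure keeps its initial shape (first-order, 2-D)."""
    m_total = mass.sum()
    v_cm = (mass[:, None] * vel).sum(axis=0) / m_total        # translational part
    r = pos - (mass[:, None] * pos).sum(axis=0) / m_total     # offsets from mass centre
    L = (mass * (r[:, 0] * vel[:, 1] - r[:, 1] * vel[:, 0])).sum()  # angular momentum
    I = (mass * (r ** 2).sum(axis=1)).sum()                   # moment of inertia
    omega = L / I
    # Rigid velocity field v = v_cm + omega x r (2-D cross product)
    vel_rigid = v_cm + omega * np.stack([-r[:, 1], r[:, 0]], axis=1)
    return pos + vel_rigid * dt, vel_rigid
```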
Procedia PDF Downloads 75
201 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity by electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). EHG signals from 38 term labors and 38 preterm labors were preprocessed with a band-pass Butterworth filter of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency and median frequency; wavelet packet coefficients; autoregression (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term, and the sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference between the term and preterm labor groups for the other features. Any future work regarding classification should therefore focus on using multiple techniques, with mean frequency, AR coefficients, the maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet ready for clinical practice, they do provide the most promising indicators for preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
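A minimal sketch of a few of the listed features (root mean square, zero-crossing number, median frequency), using the 0.08–4 Hz band-pass from the abstract; the sampling rate, filter order, and synthetic test signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def ehg_features(x, fs):
    """Band-pass the signal, then compute RMS, zero-crossing count,
    and the median frequency of the Welch power spectrum."""
    b, a = butter(4, [0.08, 4.0], btype="band", fs=fs)   # 0.08-4 Hz band-pass
    x = filtfilt(b, a, x)
    rms = np.sqrt(np.mean(x ** 2))
    zero_crossings = int(np.sum(np.diff(np.signbit(x)) != 0))
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 4096))
    cum = np.cumsum(pxx)
    median_freq = f[np.searchsorted(cum, cum[-1] / 2)]   # half the total power below this
    return {"rms": rms, "zcn": zero_crossings, "mdf": median_freq}

# Example on synthetic data (20 Hz sampling assumed, 5 minutes)
fs = 20.0
t = np.arange(0, 300, 1 / fs)
sig = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.randn(t.size)
print(ehg_features(sig, fs))
```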
Procedia PDF Downloads 571
200 Astronomy in the Education Area: A Narrative Review
Authors: Isabella Lima Leite de Freitas
Abstract:
The importance of astronomy for humanity is unquestionable. Besides being a robust science, capable of bringing new discoveries every day and rapidly increasing researchers' ability to understand the universe more deeply, scientific research in this area can also help in various applications outside the domain of astronomy. The objective of this study was to review and conduct a descriptive analysis of published studies that present the importance of astronomy in the area of education. A narrative review of the literature was performed, considering articles published in the last five years. As astronomy involves the study of physics, chemistry, biology, mathematics and technology, one of the studies evaluated presented astronomy as the gateway to science, demonstrating the presence of astronomy in 52 school curricula in 37 countries, with celestial movement as the dominant content area. Another intervention study, evaluating individuals aged 4-5 years, demonstrated that the attribution of personal characteristics to cosmic bodies, in addition to the use of comprehensive astronomy concepts, favored the learning of science in preschool-age children, through the use of practical follow-up activities and free drawing. Aiming to measure scientific literacy, another study, developed in Turkey, motivated the authorities of that country to change the teaching materials and curriculum of secondary schools after the term 'astronomy' appeared as one of the most attractive subjects for young people aged 15 to 24. There are also reports in the literature of the use of pedagogical tools such as a representation of the Solar System on a human scale, where students can walk along the orbits of the planets while studying the laws of dynamics. The use of this tool favored the teaching of the relationship between distance, duration and speed over the periods of the planets, in addition to improving the motivation and well-being of students aged 14-16. An important impact of astronomy on education was demonstrated in a study that evaluated the participation of high school students in national Astronomical Olympiads and the International Astronomy Olympiad. The study concluded that these Olympiads have considerable influence on students who later pursue a career in teaching or research, many of whom remain in the area of astronomy itself. In addition, the literature indicates that teaching astronomy in the digital age has made data more readily available to researchers, and also to the general population. This can further increase the curiosity that astronomy has always instilled in people and promote the dissemination of knowledge on an expanded scale. Currently, astronomy is considered an important ally in strengthening the school curricula of children, adolescents and young adults; it is used as a teaching tool and is extremely useful for scientific literacy, being increasingly employed in the area of education.
Keywords: astronomy, education area, teaching, review
Procedia PDF Downloads 103
199 GenAI Agents in Product Management: A Case Study from the Manufacturing Sector
Authors: Aron Witkowski, Andrzej Wodecki
Abstract:
Purpose: This study aims to explore the feasibility and effectiveness of utilizing Generative Artificial Intelligence (GenAI) agents as product managers within the manufacturing sector. It seeks to evaluate whether current GenAI capabilities can fulfill the complex requirements of product management and deliver comparable outcomes to human counterparts. Study Design/Methodology/Approach: This research involved the creation of a support application for product managers, utilizing high-quality sources on product management and generative AI technologies. The application was designed to assist in various aspects of product management tasks. To evaluate its effectiveness, a study was conducted involving 10 experienced product managers from the manufacturing sector. These professionals were tasked with using the application and providing feedback on the tool's responses to common questions and challenges they encounter in their daily work. The study employed a mixed-methods approach, combining quantitative assessments of the tool's performance with qualitative interviews to gather detailed insights into the user experience and perceived value of the application. Findings: The findings reveal that GenAI-based product management agents exhibit significant potential in handling routine tasks, data analysis, and predictive modeling. However, there are notable limitations in areas requiring nuanced decision-making, creativity, and complex stakeholder interactions. The case study demonstrates that while GenAI can augment human capabilities, it is not yet fully equipped to independently manage the holistic responsibilities of a product manager in the manufacturing sector. Originality/Value: This research provides an analysis of GenAI's role in product management within the manufacturing industry, contributing to the limited body of literature on the application of GenAI agents in this domain. It offers practical insights into the current capabilities and limitations of GenAI, helping organizations make informed decisions about integrating AI into their product management strategies. Implications for Academic and Practical Fields: For academia, the study suggests new avenues for research in AI-human collaboration and the development of advanced AI systems capable of higher-level managerial functions. Practically, it provides industry professionals with a nuanced understanding of how GenAI can be leveraged to enhance product management, guiding investments in AI technologies and training programs to bridge identified gaps.
Keywords: generative artificial intelligence, GenAI, NPD, new product development, product management, manufacturing
Procedia PDF Downloads 49
198 Media Impression and Its Impact on Foreign Policy Making: A Study of India-China Relations
Authors: Rosni Lakandri
Abstract:
With the development of science and technology, there has been a complete transformation in the domain of information technology. Particularly since the Second World War and the Cold War period, the role of media and communication technology in shaping political, economic and socio-cultural proceedings across the world has been tremendous. Media perform as a channel between the governing bodies of the state and the general masses. With the international community constantly talking about the onset of the Asian Century, India and China happen to be the major players in this. Both have long civilizational histories, both are neighboring countries, both are witnessing huge economic growth and, most important of all, both are considered the rising powers of Asia. This is not to negate the fact that the two countries went to war with each other in 1962, and the common people and even the policymakers of both sides still view each other through this prism. A large contribution to this perception goes to the media coverage on both sides: even though there are spaces of cooperation which they share, the negative impact of media coverage has tended to influence people's opinions and the governments' perceptions of each other. Therefore, analysis of media impressions in both countries becomes important in order to understand their effect on the two countries' foreign policies towards each other. It is usually said that media not only act as an information provider but also act as an ombudsman to the government, providing a check and balance on governments in taking proper decisions for the people of the country. But in attempting to test this hypothesis, we have to analyze: does the media really help in shaping the political landscape of a country? Therefore, this study rests on the following questions: 1) How do China and India depict each other through their respective news media? 2) How much influence, and of what kind, do they exert on the policy-making process of each country? How do they shape public opinion in both countries? In order to address these enquiries, the study employs both primary and secondary sources. In generating data and other statistical information, primary sources like reports, government documents, cartography and agreements between the governments have been used. Secondary sources like books, articles and other writings collected from various sources, as well as opinions from visual media sources like news clippings and videos on this topic, are also included as sources of on-the-ground information, as this study is not based on fieldwork. The findings suggest that, in the case of China and India, media have certainly affected people's knowledge about political and diplomatic issues and, at the same time, have affected the foreign policy making of both countries. They have a considerable impact on foreign policy formulation, and we can say there is some mediatization happening in foreign policy issues in both countries.
Keywords: China, foreign policy, India, media, public opinion
Procedia PDF Downloads 151
197 Using Chatbots to Create Situational Content for Coursework
Authors: B. Bricklin Zeff
Abstract:
This research explores the development and application of a specialized chatbot tailored for a nursing English course, with the primary objective of augmenting student engagement through situational content and responsiveness to key expressions and vocabulary. Introducing the chatbot, elucidating its purpose, and outlining its functionality are crucial initial steps in the research study, as they provide a comprehensive foundation for understanding the design and objectives of the specialized chatbot developed for the nursing English course. These elements establish the context for the subsequent evaluations and analyses, enabling a nuanced exploration of the chatbot's impact on student engagement and language learning within the nursing education domain. The subsequent exploration of the intricate language model development process underscores the fusion of scientific methodologies and artistic considerations in this application of artificial intelligence (AI). Practical principles tailored for educators and curriculum developers in nursing, extending beyond AI and education, are considered, along with insights into leveraging technology for enhanced language learning in specialized fields and potential applications of similar chatbots in other professional English courses. The overarching vision is to illuminate how AI can transform language learning, rendering it more interactive and contextually relevant. The presented chatbot is a tangible example, equipping educators with a practical tool to enhance their teaching practices. The methodology employed in this research encompasses surveys and discussions to gather feedback on the chatbot's usability, effectiveness, and potential improvements. The chatbot system was integrated into a nursing English course, facilitating the collection of valuable feedback from participants. Significant findings from the study underscore the chatbot's effectiveness in encouraging more verbal practice of the target expressions and vocabulary necessary for performance in role-play assessment strategies. This outcome emphasizes the practical implications of integrating AI into language education in specialized fields. This research holds significance for educators and curriculum developers in the nursing field, offering insights into integrating technology for enhanced English language learning. The study's major findings contribute valuable perspectives on the practical impact of the chatbot on student interaction and verbal practice. Ultimately, the research sheds light on the transformative potential of AI in making language learning more interactive and contextually relevant, particularly within specialized domains like nursing.
Keywords: chatbot, nursing, pragmatics, role-play, AI
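A minimal sketch of the keyword-triggered behaviour described above; the target expressions and replies are hypothetical placeholders, not the course's actual content or the chatbot's actual implementation.

```python
import re

# Hypothetical target expressions for a nursing role-play scenario
RESPONSES = {
    r"\b(pain|hurts?)\b": "Can you point to where it hurts and rate the pain from 1 to 10?",
    r"\ballerg(y|ies|ic)\b": "Do you have any known allergies to medication?",
    r"\b(dizzy|dizziness)\b": "When did the dizziness start? Are you able to stand safely?",
}

def reply(utterance: str) -> str:
    """Return a situational prompt when a key expression is detected,
    nudging the learner to keep practising target vocabulary."""
    for pattern, response in RESPONSES.items():
        if re.search(pattern, utterance, flags=re.IGNORECASE):
            return response
    return "I see. Could you tell me a little more about how you are feeling?"

print(reply("My chest hurts when I breathe."))
```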
Procedia PDF Downloads 65
196 Multi-Criteria Evolutionary Algorithm to Develop Efficient Schedules for Complex Maintenance Problems
Authors: Sven Tackenberg, Sönke Duckwitz, Andreas Petz, Christopher M. Schlick
Abstract:
This paper introduces an extension to the well-established Resource-Constrained Project Scheduling Problem (RCPSP) that applies it to complex maintenance problems. The problem is to assign technicians to a team which has to process several tasks with multi-level skill requirements during a work shift. Here, several alternative activities for a task allow both the temporal shifting of activities and the reallocation of technicians and tools. As a result, switches from one valid work process variant to another can be considered and may be selected by the developed evolutionary algorithm based on the present skill levels of technicians or the available tools. An additional complication of the observed scheduling problem is that the locations of the construction sites are only temporarily accessible during the day. Due to intensive rail traffic, the available time slots for maintenance and repair works are extremely short and are often distributed throughout the day. To identify efficient working periods, a first concept of a Bayesian network is introduced and integrated into the extended RCPSP with pre-emptive and non-pre-emptive tasks. The Bayesian network is used to calculate the probability of a maintenance task being processed during a specific period of the shift. Focusing on the maintenance of railway infrastructure in metropolitan areas, one of the least productive implementation processes on the construction site, the paper illustrates how the extended RCPSP can be applied to support maintenance planning. A multi-criteria evolutionary algorithm with a problem representation is introduced that is capable of revising technician-task allocations, where task durations may be stochastic. The approach uses a novel activity-list representation to ensure easily describable and modifiable elements which can be converted into detailed shift schedules, as sketched in the example below. The main objective is to develop a shift plan which maximizes the utilization of each technician by minimizing the waiting times caused by rail traffic. The results of the already implemented core algorithm illustrate fast convergence towards an optimal team composition for a shift, an efficient sequence of tasks, and a high probability of subsequent implementation given the stochastic durations of the tasks. In the paper, the algorithm for the extended RCPSP is analyzed in an experimental evaluation using real-world example problems of varying size, resource complexity, tightness, and so forth.
Keywords: maintenance management, scheduling, resource constrained project scheduling problem, genetic algorithms
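A minimal sketch of an activity-list representation with serial decoding, of the kind commonly used in RCPSP evolutionary algorithms; the toy network, durations, and a single-pass decoder that ignores resources and accessibility windows are illustrative assumptions, not the paper's algorithm.

```python
import random

# Toy precedence network with illustrative durations
durations = {"A": 2, "B": 3, "C": 2, "D": 4}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def decode(activity_list):
    """Serial schedule-generation: place each activity at the earliest time
    all of its predecessors have finished (resources ignored here)."""
    finish = {}
    for act in activity_list:
        start = max((finish[p] for p in predecessors[act]), default=0)
        finish[act] = start + durations[act]
    return finish

def random_precedence_feasible_list():
    """Sample an activity list that respects precedence, a typical way to
    seed an evolutionary algorithm's initial population."""
    remaining, order = set(durations), []
    while remaining:
        eligible = [a for a in remaining
                    if all(p not in remaining for p in predecessors[a])]
        order.append(random.choice(eligible))
        remaining.remove(order[-1])
    return order

individual = random_precedence_feasible_list()
print(individual, decode(individual))  # makespan = finish time of "D"
```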
Procedia PDF Downloads 231
195 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare
Authors: Piret Pernik
Abstract:
Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper examines the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks. It draws conclusions on the possible strategic impact of the widespread use of dual-use UAVs on battlefield outcomes in modern armed conflicts. This article contributes to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models – widely dual-use, available, and affordable to anyone – to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. They thus function as force multipliers enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs give commanders considerable opportunities, for example because they can be operated without GPS navigation, which makes them less vulnerable to, and less dependent on, satellite communications. They can be, and have been, used to conduct cyberattacks, electromagnetic interference, and kinetic attacks. However, they are highly vulnerable to those attacks themselves. So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electronic interference dimensions of the use of dual-use UAVs; studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that dual-use commercial UAV proliferation in armed and hybrid conflicts will continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper offers a unique analysis of small UAVs from the point of view of both opportunities and risks for commanders and other actors in armed conflict.
Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts
Procedia PDF Downloads 102
194 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). The best accuracy results are obtained when LSTM layers are used in the schema, while dense layers provide a better balance between the results, as the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems have been identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian compared to Greek, taking into account that the linguistic features employed are language-independent. Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
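A minimal sketch of the pairwise classification schema described above, assuming the fully-connected dense variant, an invented feature dimension, and random stand-in data; the layer sizes and training setup are illustrative, not the authors' exact configuration.

```python
# Pairwise MT-output classification sketch: given feature vectors for an
# SMT output, an NMT output, and the reference, predict which output is
# better. Dense variant only; CNN/LSTM variants would replace the hidden
# layers. All dimensions and data below are assumed for illustration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

EMB_DIM = 300  # assumed embedding/feature dimension

smt_in = keras.Input(shape=(EMB_DIM,), name="smt_vector")
nmt_in = keras.Input(shape=(EMB_DIM,), name="nmt_vector")
ref_in = keras.Input(shape=(EMB_DIM,), name="reference_vector")

# Model similarity between each MT output and the reference, and between
# the two MT outputs, by jointly processing the concatenated vectors.
x = layers.Concatenate()([smt_in, nmt_in, ref_in])
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid", name="nmt_is_better")(x)

model = keras.Model([smt_in, nmt_in, ref_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Random stand-in data just to show the training interface.
x_demo = [np.random.rand(8, EMB_DIM).astype("float32") for _ in range(3)]
y_demo = np.random.randint(0, 2, size=(8, 1))
model.fit(x_demo, y_demo, epochs=1, verbose=0)
```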
Procedia PDF Downloads 132
193 Analysis of Feminist Translation in Subtitling from Arabic into English: A Case Study
Authors: Ghada Ahmed
Abstract:
Feminist translation is one of the strategies adopted in the field of translation studies when gendered content is rendered from one language to another, and this strategy has been examined in previous studies on written texts. This research, however, addresses the practice of feminist translation in audiovisual texts, which involve the screen, dialogue, image, and visual aspects. The objectives are to study feminist translation and its adaptation in subtitling from Arabic into English. The research addresses the connections between gender and translation as one domain and feminist translation practices, with particular consideration of feminist translation strategies in English subtitles. It examines the visibility of the translator throughout the process, assuming that feminist translation is a product directed by the translator's feminist position, culture, and ideology as a means of helping unshadow women. It also discusses how subtitling constraints impact feminist translation and how the image, which has narrative value, can be integrated into the content of the English subtitles. The reasons for conducting this research project are to study language sexism in English and to look into Arabic-into-English gendered content, taking into consideration the Arabic cultural concepts that may lose their connotations when translated into English. This research also analyses the image in an audiovisual text and its contribution to the written dialogue in subtitling. Thus, this research attempts to answer the following questions: To what extent is there a form of affinity between gendered content and translation? Is feminist translation an act of merely working on a feminist text, or of feminising the language of any text by incorporating the translator's ideology? How can feminist translation practices be applied in an audiovisual text? How feasible is it to adapt feminist translation while taking into account visual components as well as subtitling constraints? Moreover, the paper explores the fields of gender and translation, feminist translation, language sexism, and media studies, as well as the gap in the literature related to feminist translation practice in visual texts. For the case study, the film "Speed Sisters" has been chosen, and its English subtitles are analysed. The film is a documentary that was produced in 2015 and directed by Amber Fares. It follows five Palestinian women who try to break stereotypes about women and have taken their passion for car racing forward to become the first all-women car-racing team in the Middle East. It tackles the issue of gender in both content and language, and this is reflected in the translation. As the research topic is semiotic-channelled, the choice of theoretical approaches varies, combining translation studies, audiovisual translation, gender studies, and media studies. Each of these will contribute to understanding a specific field of the research, and the results will eventually be integrated to achieve the intended objectives in a way that demonstrates rendering gendered content in one of the audiovisual translation modes from one language into another. Keywords: audiovisual translation, feminist translation, films' gendered content, subtitling conventions and constraints
Procedia PDF Downloads 299
192 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized in voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process removes, or 'burns', void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize execution time and computer memory use, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and utilized to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community. Keywords: medial axis, pore-throat distribution, porosity, porous media
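A minimal sketch of the burn algorithm on a binary voxel domain, assuming True marks void space. The optimizations described above (subdomain segmentation, vectorized layer extraction) and the concave-grain refinement are omitted, and the medial-axis estimate shown is a crude local-maximum heuristic rather than the refined structure from the paper.

```python
# Burn algorithm sketch: iteratively 'burn' void voxels adjacent to solid
# (or already-burnt) voxels, recording the iteration at which each void
# voxel is removed. Fronts 'collide' at local maxima of the burn number.
import numpy as np
from scipy import ndimage

def burn_numbers(void: np.ndarray) -> np.ndarray:
    """Assign each void voxel the layer index at which it is burnt."""
    burn = np.zeros(void.shape, dtype=np.int32)
    remaining = void.copy()
    step = 0
    while remaining.any():
        step += 1
        # Voxels that survive one erosion are interior; the peeled layer
        # consists of void voxels adjacent to solid or burnt voxels.
        interior = ndimage.binary_erosion(remaining)
        layer = remaining & ~interior
        burn[layer] = step
        remaining = interior
    return burn

rng = np.random.default_rng(0)
domain = rng.random((64, 64, 64)) > 0.4   # synthetic porous medium (True = void)
porosity = domain.mean()
burn = burn_numbers(domain)
# Crude medial-axis estimate: void voxels that are local maxima of burn number.
medial = (burn == ndimage.maximum_filter(burn, size=3)) & domain
print(f"porosity = {porosity:.3f}, medial-axis voxels = {medial.sum()}")
```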
Procedia PDF Downloads 115
191 Investigation of the IL23R Psoriasis/PsA Susceptibility Locus
Authors: Shraddha Rane, Richard Warren, Stephen Eyre
Abstract:
IL-23 is a pro-inflammatory molecule that signals T cells to release cytokines such as IL-17A and IL-22. Psoriasis is driven by a dysregulated immune response, within which IL-23 is now thought to play a key role. Genome-wide association studies (GWAS) have identified a number of genetic risk loci that support the involvement of IL-23 signalling in psoriasis, in particular a robust susceptibility locus at a gene encoding a subunit of the IL-23 receptor (IL23R) (Stuart et al., 2015; Tsoi et al., 2012). The lead psoriasis-associated SNP rs9988642 is located approximately 500 bp downstream of IL23R but is in tight linkage disequilibrium (LD) with a missense SNP rs11209026 (R381Q) within IL23R (r² = 0.85). The minor (G) allele of rs11209026 is present in approximately 7% of the population and is protective for psoriasis and several other autoimmune diseases, including IBD, ankylosing spondylitis, RA, and asthma. The psoriasis-associated missense SNP R381Q causes an arginine-to-glutamine substitution in a region of the IL23R protein between the transmembrane domain and the putative JAK2 binding site in the cytoplasmic portion. This substitution is expected to affect the receptor's surface localisation or signalling ability, rather than IL23R expression. Recent studies have also identified a psoriatic arthritis (PsA)-specific signal at IL23R, thought to be independent of the psoriasis association (Bowes et al., 2015; Budu-Aggrey et al., 2016). The lead PsA-associated SNP rs12044149 is intronic to IL23R and is in LD with likely causal SNPs intersecting promoter and enhancer marks in memory CD8+ T cells (Budu-Aggrey et al., 2016). It is therefore likely that the PsA-specific SNPs affect IL23R function via a different mechanism compared with the psoriasis-specific SNPs. It could be hypothesised that the risk allele for PsA located within the IL23R promoter causes an increase in IL23R expression relative to the protective allele. An increased expression of IL23R might then lead to an exaggerated immune response. The independent genetic signals identified for psoriasis and PsA in this locus indicate that different mechanisms underlie these two conditions, although both likely affect the function of IL23R. It is very important to further characterise these mechanisms in order to better understand how the IL-23 receptor and its downstream signalling are affected in both diseases. This will help determine how psoriasis and PsA patients might respond differently to therapies, particularly IL-23 biologics. To investigate this further, we have developed an in vitro model using CD4+ T cells which express either wild-type IL23R and IL12Rβ1 or mutant IL23R (R381Q) and IL12Rβ1. A model expressing different isoforms of IL23R is also underway to investigate the effects on IL23R expression. We propose to further investigate the variants for psoriasis and PsA and characterise key intracellular processes related to the variants. Keywords: IL23R, psoriasis, psoriatic arthritis, SNP
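For readers unfamiliar with the r² statistic cited above, the sketch below computes linkage disequilibrium from haplotype and allele frequencies. The frequencies are invented to roughly reproduce the reported r² = 0.85 and are not actual population data for these SNPs.

```python
# Standard LD statistics: D is the deviation of the AB haplotype frequency
# from the product of allele frequencies; r^2 normalizes D^2.
def ld_stats(p_ab: float, p_a: float, p_b: float):
    """Return D and r^2 given the AB haplotype frequency and the
    frequencies of allele A at locus 1 and allele B at locus 2."""
    d = p_ab - p_a * p_b
    r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, r2

# Hypothetical frequencies: minor alleles at ~7% that almost always co-occur.
d, r2 = ld_stats(p_ab=0.065, p_a=0.07, p_b=0.07)
print(f"D = {d:.4f}, r^2 = {r2:.2f}")   # r^2 comes out near 0.85
```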
Procedia PDF Downloads 168
190 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator
Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo
Abstract:
Several germanium photodetectors (PDs) built on silicon microdisks were fabricated on standard Si photonics multiple project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth, and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius microdisk has a dark current as low as 35 nA, compared to a conventional PD dark current of 1 µA for an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V. The responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. Disk geometries strongly and circularly confine light into an ultra-small volume in the form of whispering-gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain materials both spatially and spectrally. Compared to microrings, the microdisk removes the inner boundaries to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low-power (≈ fJ) athermal Si modulator has been demonstrated with a bit rate of 25 Gbit/s by confining both photons and electrically-driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser applications or other passive optical components, the microdisk must be designed very carefully to excite the fundamental mode, since a microdisk essentially supports many higher-order modes in the radial direction. However, for detector applications this is not an issue, because the local light absorption is mode-insensitive: light power carried by all modes is expected to be converted into photocurrent. Another benefit of using a microdisk is that the power circulation inside avoids any need for a reflector. A complete simulation model taking all involved materials into account is established to study the promise of microdisk structures for photodetectors using the finite-difference time-domain (FDTD) method. Based on the current preliminary data, directions for further improving the device performance are also discussed. Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors
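As a quick plausibility check on the reported responsivity, the sketch below converts 0.6 A/W into an external quantum efficiency using the standard relation η = R·hc/(eλ). The 1550 nm wavelength is an assumption for illustration; the abstract does not state the operating wavelength.

```python
# External quantum efficiency from responsivity: eta = R * (h*c/lambda) / e.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
E = 1.602e-19   # elementary charge, C

def quantum_efficiency(responsivity_a_per_w: float, wavelength_m: float) -> float:
    photon_energy = H * C / wavelength_m
    return responsivity_a_per_w * photon_energy / E

eta = quantum_efficiency(0.6, 1550e-9)
print(f"external quantum efficiency ≈ {eta:.2f}")   # ≈ 0.48 at 1550 nm
```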
Procedia PDF Downloads 407
189 Evolution and Merging of Double-Diffusive Layers in a Vertically Stable Compositional Field
Authors: Ila Thakur, Atul Srivastava, Shyamprasad Karagadde
Abstract:
The phenomenon of double-diffusive convection is driven by density gradients created by two different components (e.g., temperature and concentration) having different molecular diffusivities. The evolution of horizontal double-diffusive layers (DDLs) is one of the outcomes of double-diffusive convection occurring in a laterally/vertically cooled rectangular cavity with a pre-existing vertically stable composition field. The present work mainly focuses on different characteristics of the formation and merging of double-diffusive layers under imposed lateral/vertical thermal gradients in a vertically stable compositional field. A CFD-based two-dimensional Fluent model has been developed to investigate the aforesaid phenomena. The configuration with vertical thermal gradients shows the evolution and merging of DDLs, where elements from the same horizontal plane move vertically and mix with their surroundings, creating a horizontal layer. In the configuration with lateral thermal gradients, a specially oriented convective roll was found inside each DDL, each roll driven by the competing density changes due to the pre-existing composition field and the imposed thermal field. When the thermal boundary layer near the vertical wall penetrates the salinity interface, it can disrupt the compositional interface and lead to layer merging. Different analytical scales were quantified and compared for both configurations. Various combinations of solutal and thermal Rayleigh numbers were investigated, yielding three different regimes: a stagnant regime, a layered regime, and a unicellular regime. For a particular solutal Rayleigh number, a layered structure can originate only within a range of thermal Rayleigh numbers. Lower thermal Rayleigh numbers correspond to a diffusion-dominated stagnant regime; very high thermal Rayleigh numbers correspond to a unicellular regime with strong convective mixing. Plots identifying these three regimes and the number, thickness, and lifetime of DDLs have been studied. For a given solutal Rayleigh number, an increase in the thermal Rayleigh number increases the width but decreases both the number and the lifetime of DDLs in the fluid domain. Sudden peaks in the velocity and heat transfer coefficient at the time of merging have also been observed and discussed. The present study is expected to be useful in correlating double-diffusive convection in many large-scale applications, including oceanography, metallurgy, geology, etc. The model has also been developed for three-dimensional geometry, but the results were quite similar to those of the 2-D simulations. Keywords: double diffusive layers, natural convection, Rayleigh number, thermal gradients, compositional gradients
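For reference, the regime map above is organized by the thermal and solutal Rayleigh numbers. The sketch below evaluates their standard definitions with illustrative water-like property values; none of the numbers are taken from the paper.

```python
# Standard definitions: Ra_T = g*beta_T*dT*H^3/(nu*alpha) and
# Ra_S = g*beta_S*dS*H^3/(nu*D), where H is the cavity height, nu the
# kinematic viscosity, alpha the thermal diffusivity, D the solutal
# diffusivity, and beta_T, beta_S the expansion coefficients.
G = 9.81  # gravitational acceleration, m/s^2

def rayleigh_thermal(beta_t, d_t, h, nu, alpha):
    return G * beta_t * d_t * h**3 / (nu * alpha)

def rayleigh_solutal(beta_s, d_s, h, nu, d_diff):
    return G * beta_s * d_s * h**3 / (nu * d_diff)

# Illustrative water-like properties for a 0.1 m cavity (assumed values).
ra_t = rayleigh_thermal(beta_t=2.1e-4, d_t=5.0, h=0.1, nu=1.0e-6, alpha=1.4e-7)
ra_s = rayleigh_solutal(beta_s=7.6e-4, d_s=0.5, h=0.1, nu=1.0e-6, d_diff=1.3e-9)
print(f"Ra_T ≈ {ra_t:.2e}, Ra_S ≈ {ra_s:.2e}")
```

Because the solutal diffusivity is orders of magnitude smaller than the thermal diffusivity, Ra_S typically dwarfs Ra_T for comparable driving differences, which is what makes the layered regime possible.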
Procedia PDF Downloads 84
188 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health
Authors: Minna Pikkarainen, Yueqiang Xu
Abstract:
The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: "correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions", what we call the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. The work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors: 1) the individual (citizen or professional controlling/using the services), i.e. the data subject; 2) services providing personal data (e.g. startups providing data collection apps or data collection devices); 3) health and wellness services utilizing the aforementioned data; and 4) services authorizing access to this data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain: the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach. Keywords: blockchain, health data, platform, action design
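The decentralized consent architecture described above lends itself to a small illustration. The sketch below shows a hypothetical consent record whose hash could be anchored on-chain while the personal data itself stays off-chain with the data-providing service; the schema, field names, and hashing choice are illustrative assumptions, not the platform design from the paper.

```python
# Hypothetical MyData-style consent record. Only the digest would be
# written to the blockchain; the record and the underlying personal data
# remain off-chain under the data subject's control.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ConsentRecord:
    data_subject: str     # individual granting access
    data_provider: str    # service holding the personal data
    data_consumer: str    # health/wellness service requesting it
    scope: str            # granted access, e.g. "sleep_data:read"
    expires: str          # ISO 8601 expiry of the consent
    revoked: bool = False

    def digest(self) -> str:
        """Deterministic hash suitable for on-chain anchoring."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

consent = ConsentRecord("citizen-42", "sleep-tracker-app", "wellness-coach",
                        "sleep_data:read", "2025-12-31T23:59:59Z")
print(consent.digest()[:16], "...")
```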
Procedia PDF Downloads 100
187 Exploring Antifragility Principles in Humanitarian Supply Chain: The Key Role of Information Systems
Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan
Abstract:
The COVID-19 pandemic has been a major global disruption that has affected supply chains worldwide. Consequently, the question posed by this communication is how, in the face of such disruptions, supply chains, including their actors, management tools, and processes, react, survive, adapt, and even improve. To do so, the concepts of resilience and antifragility applied to a supply chain are leveraged. This article proposes to perceive resilience as a step to surpass on the way towards antifragility. The research objective is to propose an analytical framework to measure and compare resilience and antifragility, with antifragility seen as a property of a system that improves when subjected to disruptions rather than merely resisting them, as is the case with resilience. A single case, MSF Logistics (France), was studied using a qualitative methodology. Semi-structured interviews were conducted in person and remotely in multiple phases: during and immediately after the COVID crisis (8 interviews from March 2020 to April 2021), followed by a new round from September to November 2023. A Delphi method was employed. The interviews were analyzed using coding and a thematic framework. One of the theoretical contributions is consolidating the field of supply chain resilience research by precisely characterizing the dimensions of resilience for a humanitarian supply chain (reorganization, collaboration mediated by IS, humanitarian culture). In this regard, a managerial contribution of this study is providing a guide for managers to identify the dimensions and sub-dimensions of supply chain resilience. This enables managers to focus their decisions and actions on dimensions that will enhance resilience. Most importantly, another contribution is comparing the concepts of resilience and antifragility and proposing an analytical framework for antifragility, namely the mechanisms on which MSF Logistics relied to capitalize on uncertainties, contingencies, and shocks rather than simply enduring them. For MSF Logistics, antifragility manifested through the ability to identify opportunities hidden behind the uncertainties and shocks of COVID-19, reducing vulnerability and fostering a culture that encourages innovation and the testing of new ideas. Logistics, particularly in the humanitarian domain, must be able to adapt to environmental disruptions. In this sense, this study identifies and characterizes the dimensions of resilience implemented by humanitarian logistics. Moreover, this research goes beyond the concept of resilience to propose an analytical framework for the concept of antifragility. The organization studied emerged stronger from the COVID-19 crisis thanks to the mechanisms we identified, allowing us to characterize antifragility. Finally, the results show that the information system plays a key role in antifragility. Keywords: antifragility, humanitarian supply chain, information systems, qualitative research, resilience
Procedia PDF Downloads 64
186 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights
Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy
Abstract:
The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches, such as public or private law. On the other hand, law is a national phenomenon. The law of one nation and the legal system applied on the territory of another nation may be completely different. People who are experts in a particular field of law in one country may have insufficient expertise in the law of another country. Today, in addition to the local nature of law, international and even supranational legal rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools may serve to produce very meaningful results in terms of human rights. However, the algorithms to be used should not be developed by computer experts alone; they also need the contribution of people who are familiar with the law, values, judicial decisions, and even the social and political culture of the society for which they will provide solutions. Otherwise, even if the algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field in the discipline of law. More AI systems are already being applied in the field of law, with examples such as predicting judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially in the fields of prediction of judicial decisions and decision support systems, have the capacity to produce automatic decisions in place of judges. When the judge is removed from this equation, artificial-intelligence-made law, created by an intelligent algorithm on its own, emerges, whether the domain is national or international law. In this work, the aim is to make a general analysis of this new topic. Such an analysis needs both a literature survey and perspectives from computer experts' and lawyers' points of view. In some societies, the use of prediction or decision support systems may be useful for integrating international human rights safeguards. In this case, artificial law can serve to produce more comprehensive and human-rights-protective results than written or living law. In non-democratic countries, it may even be thought that direct decisions and artificial-intelligence-made law would be more protective than a decision 'support' system. Since the values of law are directed towards 'human happiness or well-being', AI algorithms should always be capable of serving this purpose and be based on the rule of law, the principles of justice and equality, and the protection of human rights. Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems
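As a toy illustration of one application named above, document classification, the following sketch trains a generic text classifier on invented example sentences. The labels, data, and pipeline are purely illustrative and not a system discussed in the paper.

```python
# Generic legal-document classification sketch with a standard
# TF-IDF + logistic regression pipeline. All documents and labels
# below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "The tenant failed to pay rent for three consecutive months.",
    "The defendant was charged with theft under the criminal code.",
    "The landlord seeks eviction due to breach of the lease agreement.",
    "The prosecution presented evidence of burglary at the premises.",
]
labels = ["civil", "criminal", "civil", "criminal"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["The lease was terminated after unpaid rent."]))
```

Even a toy like this makes the paper's point concrete: the classifier encodes whatever is in its training data, so the values and legal culture embedded in that data directly shape its outputs.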
Procedia PDF Downloads 73
185 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now starting to become reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The main research of the author focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document describes some crucial uses of AI and/or robotics, e.g. autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and it calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content that is worth copyright protection, but the question arises: who is the author and the owner of the copyright? The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the inputs to the AI in order to create something new. Or AI-generated content is so far removed from humans that there is no human author, and such content belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. In this regard, proper license agreements in the multilevel chain from the programmer to the end-user become very important, because AI is intellectual property in itself that creates further intellectual property. This may collide with data-protection and property rules as well. The problems are similar in the field of liability. Different existing forms of liability can be used when AI or AI-led robotics cause damage, but it is uncertain whether the result complies with economic and developmental interests. Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 203
184 The Convention of Culture: A Comprehensive Study on Dispute Resolution Pertaining to Heritage and Related Issues
Authors: Bhargavi G. Iyer, Ojaswi Bhagat
Abstract:
In recent years, there has been much discussion about ethnic imbalance and diversity in the international context. Arbitration is now subject to the hegemony of a small number of people who are constantly reappointed. When a court system becomes exclusionary, the quality of adjudication suffers significantly. In such a framework, there is a misalignment between adjudicators' preconceived views and the interests of the parties, resulting in a biased view of the proceedings. The world is currently witnessing a slew of intellectual property battles around cultural appropriation. The term "cultural appropriation" refers to the industrial west's theft of indigenous culture, usually for fashion, aesthetic, or dramatic purposes. Selena Gomez exemplifies cultural appropriation by commercially using the "bindi", which is sacred to Hinduism, as a fashion symbol. In another case, Victoria's Secret insulted the victims of indigenous peoples' genocide by appropriating Native American headdresses. In the case of yoga, a similar process can be witnessed, with Vedic philosophy being reduced to a type of physical practice. Such a viewpoint is problematic, since indigenous groups have worked hard for generations to ensure the survival of their culture, and its appropriation by the western world for purely aesthetic and theatrical purposes is upsetting to those who practise such cultures. Because such conflicts involve numerous jurisdictions, they must be resolved through international arbitration. However, these conflicts are already being litigated, and the aggrieved parties, namely developing nations, do not believe it prudent to use the World Intellectual Property Organization's (WIPO) already established arbitration procedure. This practice, it is suggested in this study, is the outcome of Europe's exclusionary arbitral system, which fails to recognise the non-legal and non-commercial nature of indigenous culture issues. This research paper proposes a more comprehensive, inclusive approach that recognises the non-legal and non-commercial aspects of IP disputes involving cultural appropriation, which can only be achieved through an ethnically balanced arbitration structure. This paper also aspires to expound upon the benefits of arbitration and other means of alternative dispute resolution (ADR) in the context of disputes pertaining to cultural issues, positing that inclusivity is a solution to the existing discord between international practices and localised cultural points of dispute. This paper also hopes to explicate measures that will help ensure inclusion and ideal practices in the domain of arbitration law, particularly as pertaining to cultural heritage and indigenous expression. Keywords: arbitration law, cultural appropriation, dispute resolution, heritage, intellectual property
Procedia PDF Downloads 144