Search results for: elliptic curve digital signature algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7376

1016 Recent Advances in Pulse Width Modulation Techniques and Multilevel Inverters

Authors: Satish Kumar Peddapelli

Abstract:

This paper presents advances in pulse width modulation techniques, which refer to methods of carrying information on a train of pulses, with the information encoded in the width of the pulses. Pulse width modulation is used to control the inverter output voltage. This is done by exercising the control within the inverter itself, by adjusting the ON and OFF periods of the inverter. With a fixed DC input voltage, the desired AC output voltage is obtained. In variable-speed AC motor drives, the AC output voltage is obtained from a constant DC voltage by using an inverter. Recent developments in power electronics and semiconductor technology have led to improvements in power electronic systems. Hence, different circuit configurations, namely multilevel inverters, have become popular and have attracted considerable research interest. A fast Space-Vector Pulse Width Modulation (SVPWM) method for a five-level inverter is also discussed. In this method, the space vector diagram of the five-level inverter is decomposed into six space vector diagrams of three-level inverters. In turn, each of these six three-level space vector diagrams is decomposed into six space vector diagrams of two-level inverters. After decomposition, all the remaining procedures for the three-level SVPWM are carried out as in a conventional two-level inverter. The proposed method reduces the algorithm complexity and the execution time, and it can also be applied to multilevel inverters with more than five levels. An experimental setup for a three-level diode-clamped inverter is developed using a TMS320LF2407 DSP controller, and the experimental results are analysed.
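
A minimal sketch (assumed, not taken from the paper) of the conventional two-level SVPWM dwell-time calculation that each decomposed sub-diagram in the described method ultimately reduces to; the variable names and example values are illustrative.

```python
import math

def svpwm_dwell_times(v_alpha, v_beta, v_dc, t_s):
    """Sector and dwell times for one two-level SVPWM switching period (illustrative)."""
    v_ref = math.hypot(v_alpha, v_beta)
    theta = math.atan2(v_beta, v_alpha) % (2 * math.pi)
    sector = int(theta // (math.pi / 3)) + 1            # sectors 1..6
    theta_in_sector = theta - (sector - 1) * math.pi / 3
    m = math.sqrt(3) * v_ref / v_dc                      # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - theta_in_sector)
    t2 = t_s * m * math.sin(theta_in_sector)
    t0 = t_s - t1 - t2                                   # zero-vector time
    return sector, t1, t2, t0

if __name__ == "__main__":
    # Assumed example: 100+j50 V reference, 400 V DC link, 100 us period
    print(svpwm_dwell_times(100.0, 50.0, 400.0, 1e-4))
```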

Keywords: five-level inverter, space vector pulse width modulation, diode clamped inverter, electrical engineering

Procedia PDF Downloads 388
1015 Remote Sensing and Geographic Information Systems for Identifying Water Catchments Areas in the Northwest Coast of Egypt for Sustainable Agricultural Development

Authors: Mohamed Aboelghar, Ayman Abou Hadid, Usama Albehairy, Asmaa Khater

Abstract:

Sustainable agricultural development of the desert areas of Egypt under the pressure of irrigation water scarcity is a significant national challenge. Existing water harvesting techniques on the northwest coast of Egypt do not ensure the optimal use of rainfall for agricultural purposes. Basin-scale hydrology potentialities were studied to investigate how the available annual rainfall could be used to increase agricultural production. All data related to agricultural production were included in the form of geospatial layers. Thematic classification of Sentinel-2 imagery was carried out to produce the land cover and crop maps following the FAO system of land cover classification. Contour lines and spot height points were used to create a digital elevation model (DEM). The DEM was then used to delineate basins, sub-basins, and water outlet points using the Soil and Water Assessment Tool (Arc SWAT). The main soil units of the study area were identified from Land Master Plan maps. Climatic data were collected from existing official sources. The amounts of precipitation, surface water runoff, and potential and actual evapotranspiration for the years 2004 to 2017 are shown as Arc SWAT outputs. The land cover map showed that the two tree crops (olive and fig) cover 195.8 km2, while herbaceous crops (barley and wheat) cover 154 km2. The maximum elevation is 250 meters above sea level, while the lowest point is 3 meters below sea level. The study area receives a highly variable amount of precipitation; however, current water harvesting methods are inadequate for storing water for agricultural purposes.
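
Arc SWAT performs basin delineation from the DEM internally; the toy sketch below (not the study's workflow) illustrates the underlying D8 flow-direction step on a small, invented elevation grid.

```python
import numpy as np

def d8_flow_direction(dem):
    """Toy D8 grid: each cell points to its steepest downslope neighbour (0-7, -1 for pits)."""
    rows, cols = dem.shape
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    dists = [1, 2 ** 0.5, 1, 2 ** 0.5, 1, 2 ** 0.5, 1, 2 ** 0.5]
    flow = np.full(dem.shape, -1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            best_slope, best_dir = 0.0, -1
            for k, ((dr, dc), d) in enumerate(zip(offsets, dists)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    slope = (dem[r, c] - dem[rr, cc]) / d
                    if slope > best_slope:
                        best_slope, best_dir = slope, k
            flow[r, c] = best_dir
    return flow

if __name__ == "__main__":
    dem = np.array([[5.0, 4.0, 3.0],
                    [4.0, 3.0, 2.0],
                    [3.0, 2.0, 1.0]])   # invented elevations, metres
    print(d8_flow_direction(dem))
```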

Keywords: water catchments, remote sensing, GIS, sustainable agricultural development

Procedia PDF Downloads 114
1014 Mobile Application Interventions in Positive Psychology: Current Status and Recommendations for Effective App Design

Authors: Gus Salazar, Jeremy Bekker, Lauren Linford, Jared Warren

Abstract:

Positive psychology practices allow its principles to be applied to all people, regardless of their current level of functioning. To increase the dissemination of these practices, interventions are being adapted for use with digital technology, such as mobile apps. However, the research regarding positive psychology mobile app interventions is still in its infancy. To facilitate progress in this important area, we 1) conducted a qualitative review to summarize the current state of the positive psychology mobile app literature and 2) developed research-supported recommendations for positive psychology app development to maximize behavior change. In our literature review, we found that while positive psychology apps varied widely in content and purpose, there was a near-complete lack of research supporting their effectiveness. Most apps provided no rationale for the behavioral change techniques (BCTs) they employed, and most were not developed with specific theoretical frameworks or design models in mind. Given this problem, we recommend several steps for effective positive psychology app design. First, developers must ground their app in a research-supported theory of change. Second, developers must select appropriate behavioral change techniques that are consistent with their app's goals. Third, developers must leverage effective design principles. These steps will help mobile applications use data-driven methods for encouraging behavior change in their users. Lastly, we discuss directions for future research. In particular, researchers must investigate the effectiveness of various BCTs in positive psychology interventions. Although there is some research on this point, we do not yet clearly understand the mechanisms within the apps that lead to behavior change. Additionally, app developers must provide data on the effectiveness of their mobile apps. As developers follow these steps for effective app development and as researchers continue to investigate what makes these apps most effective, we will provide millions of people in need with access to research-based mental health resources.

Keywords: behavioral change techniques, mobile app, mobile intervention, positive psychology

Procedia PDF Downloads 224
1013 The Online Power of Values: Adolescents’ Values as Predicting Factors of Their Online Bystanders’ Behavior While Witnessing Cyberbullying

Authors: Sharon Cayzer-Haller, Shir Ginosar-Yaari, Ariel Knafo-Noam

Abstract:

The 21st century has emerged as the digital century, marked by a wide range of technological developments and changes, followed by potential changes in human communication skills. This technological revolution has changed human means of communication in many different ways: children and adolescents are spending much of their time in front of screens, participating in all sorts of online activities (even more so since the outbreak of COVID-19). The current study focuses on the role of values in adolescents' online bystander behavior. Values are cognitive, abstract representations of desirable goals that motivate behavior, and we hypothesized finding significant associations between specific values and differential online bystander feelings and behavior. Data were collected through online questionnaires that measured the participants' values, using Schwartz's short version of the Portrait Values Questionnaire (Schwartz, 2012). Participants' online behavior was assessed in a questionnaire addressing reactions to situations of cyber shaming and cyberbullying, specifically positive feelings and pro-social behavior (e.g., more supportive reactions) toward the victims, as opposed to offensive behavioral reactions (such as laughing at the victim or ignoring the situation). Participants were recruited through a commercial research panel company, and the values and online behavior of 308 Israeli adolescents (mean age 15.2) were examined. As hypothesized, the results show significant associations between values and online behavior: both self-transcendence values (universalism and benevolence) and conservation values (conformity, tradition, and security) were positively correlated with pro-social bystander feelings and behavior. On the opposite side of the values scale, the value of power was negatively associated with the participants' pro-social behavior and positively associated with offensive behavioral reactions. Further research is needed, but we conclude that values serve as crucial guiding factors in directing adolescents' online feelings and behavior.

Keywords: adolescents, values, cyberbullying, online behavior, power

Procedia PDF Downloads 66
1012 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is well established, yet properly classifying this textual information in a given context remains difficult. We therefore conducted a systematic review of previous literature on sentiment classification and the AI-based techniques used for it, in order to better understand how to design and develop a robust, more accurate sentiment classifier that can correctly classify social media text of a given context between hate speech and inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources such as ScienceDirect, ACM, Google Scholar, and IEEE Xplore and narrowed the selection down to 31 studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources such as Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques such as CNN+LSTM, CNN+GRU, and CNN+BERT outperformed single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development owing to its simplicity and AI-oriented library support. Based on the most important findings from this study, we make recommendations for future research.
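
A minimal Keras sketch (not from the reviewed studies) of the kind of hybrid CNN+LSTM classifier the review found to perform best; the vocabulary size, sequence length, and layer sizes are assumptions for illustration only.

```python
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 20_000, 100   # assumed preprocessing parameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),        # token embeddings
    tf.keras.layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),                           # longer-range context
    tf.keras.layers.Dense(1, activation="sigmoid"),     # e.g. hate speech vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
model.summary()
```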

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 115
1011 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulation results also show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
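
A sketch of the two pilot-based estimators compared above, using the textbook relations H_LS = Y/X and the LMMSE smoothing of the LS estimate; the channel correlation matrix, SNR, and beta constant below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ls_estimate(y_pilot, x_pilot):
    # Least-squares estimate at pilot subcarriers: H_LS = Y / X
    return y_pilot / x_pilot

def lmmse_estimate(h_ls, r_hh, snr_linear, beta=17 / 9):
    # LMMSE smoothing: W = R_hh (R_hh + (beta/SNR) I)^-1, beta depends on the
    # constellation (17/9 shown here for 16-QAM as an assumption).
    n = r_hh.shape[0]
    w = r_hh @ np.linalg.inv(r_hh + (beta / snr_linear) * np.eye(n))
    return w @ h_ls

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_pilots = 8
    h_true = rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)
    x_p = np.ones(n_pilots)                        # known pilot symbols
    y_p = h_true * x_p + 0.1 * rng.normal(size=n_pilots)
    h_ls = ls_estimate(y_p, x_p)
    r_hh = np.eye(n_pilots)                        # assumed channel correlation
    print(lmmse_estimate(h_ls, r_hh, snr_linear=10 ** (25 / 10)))
```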

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 180
1010 Case Study: Optimization of Contractor’s Financing through Allocation of Subcontractors

Authors: Helen S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

In many countries, the construction industry relies heavily on outsourcing models to execute projects and expand business in a diverse market. Such extensive integration of subcontractors is becoming an influential factor in the contractor's cash flow management. Accordingly, subcontractors' financial terms are pivotal to the health of the contractor's cash flow. The aim of this research is to study the contractor's cash flow with respect to the owner's and subcontractors' payment management plans, considering variable advance payment, payment frequency, and lag and retention policies. The model is developed to provide contractors with a decision support tool that can assist in selecting the optimum subcontracting plan to minimize the contractor's financing requirements and optimize profit. The model is built using Microsoft Excel VBA coding, and a genetic algorithm is utilized as the optimization tool. Three objective functions are investigated: minimizing the highest negative overdraft value, minimizing the net present worth of the overdraft, and maximizing the project net profit. The model is validated on a full-scale project that includes both self-performed and subcontracted work packages. The results show the model's potential in optimizing the contractor's negative cash flow values while assisting contractors in selecting suitable subcontractors to achieve each objective function.
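
The paper's model is built in Excel VBA; the sketch below is a hypothetical, much-simplified Python analogue of one objective (minimising the peak negative overdraft) showing how a genetic algorithm can search over subcontractor assignments. All cost profiles, offers, and GA settings are invented for illustration.

```python
import random

# Hypothetical data: monthly cost profile per work package and subcontractor
# offers given as (advance fraction paid at month 0, payment lag in months).
PACKAGES = [[40, 60, 20], [0, 50, 50], [30, 30, 30]]
OFFERS = [(0.10, 1), (0.20, 2), (0.00, 0)]
OWNER_INFLOW = [0, 90, 90, 90, 30]               # contractor receipts per month

def peak_overdraft(assignment):
    """Largest negative cash balance produced by a subcontractor assignment."""
    horizon = len(OWNER_INFLOW) + 3
    outflow = [0.0] * horizon
    for costs, offer_idx in zip(PACKAGES, assignment):
        advance, lag = OFFERS[offer_idx]
        outflow[0] += advance * sum(costs)                   # advance payment
        for month, cost in enumerate(costs):
            outflow[month + lag] += (1 - advance) * cost     # lagged progress payments
    balance, worst = 0.0, 0.0
    for month in range(horizon):
        inflow = OWNER_INFLOW[month] if month < len(OWNER_INFLOW) else 0.0
        balance += inflow - outflow[month]
        worst = min(worst, balance)
    return -worst                                            # peak overdraft (>= 0)

def genetic_search(pop_size=30, generations=60, mutation_rate=0.1):
    n_pkg, n_off = len(PACKAGES), len(OFFERS)
    pop = [[random.randrange(n_off) for _ in range(n_pkg)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=peak_overdraft)                         # fitness = lower overdraft
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_pkg)
            child = a[:cut] + b[cut:]                        # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(n_pkg)] = random.randrange(n_off)
            children.append(child)
        pop = parents + children
    best = min(pop, key=peak_overdraft)
    return best, peak_overdraft(best)

if __name__ == "__main__":
    print(genetic_search())
```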

Keywords: cash flow optimization, payment plan, procurement management, subcontracting plan

Procedia PDF Downloads 131
1009 Enhanced Model for Risk-Based Assessment of Employee Security with Bring Your Own Device Using Cyber Hygiene

Authors: Saidu I. R., Shittu S. S.

Abstract:

As the trend of personal devices accessing corporate data continues to rise through Bring Your Own Device (BYOD) practices, organizations recognize the potential cost reduction and productivity gains. However, the associated security risks pose a significant threat to these benefits. Often, organizations adopt BYOD environments without fully considering the vulnerabilities introduced by human factors in this context. This study presents an enhanced assessment model that evaluates the security posture of employees in BYOD environments using cyber hygiene principles. The framework assesses users' adherence to best practices and guidelines for maintaining a secure computing environment, employing scales and the Euclidean distance formula. By utilizing this algorithm, the study measures the distance between users' security practices and the organization's optimal security policies. To facilitate user evaluation, a simple and intuitive interface for automated assessment is developed. To validate the effectiveness of the proposed framework, design science research methods are employed, and empirical assessments are conducted using five artifacts to analyze user suitability in BYOD environments. By addressing the human factor vulnerabilities through the assessment of cyber hygiene practices, this study aims to enhance the overall security of BYOD environments and enable organizations to leverage the advantages of this evolving trend while mitigating potential risks.
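
A minimal sketch of the Euclidean-distance scoring idea described above: the gap between a user's self-reported cyber hygiene scores and the organisation's target policy vector. The practice names, scales, and risk threshold are hypothetical, not the study's instrument.

```python
import math

# Hypothetical cyber hygiene practices scored on a 1-5 scale.
POLICY_OPTIMUM = {"password_mgmt": 5, "patching": 5, "mfa_use": 5, "public_wifi": 4}

def hygiene_distance(user_scores):
    """Euclidean distance between a user's practices and the optimal policy."""
    return math.sqrt(sum((POLICY_OPTIMUM[k] - user_scores[k]) ** 2
                         for k in POLICY_OPTIMUM))

if __name__ == "__main__":
    user = {"password_mgmt": 3, "patching": 4, "mfa_use": 5, "public_wifi": 2}
    d = hygiene_distance(user)
    print(f"distance = {d:.2f}", "-> high BYOD risk" if d > 2.5 else "-> acceptable")
```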

Keywords: security, BYOD, vulnerability, risk, cyber hygiene

Procedia PDF Downloads 76
1008 Automation of Savitsky's Method for Power Calculation of High Speed Vessel and Generating Empirical Formula

Authors: M. Towhidur Rahman, Nasim Zaman Piyas, M. Sadiqul Baree, Shahnewaz Ahmed

Abstract:

The design of high-speed craft has recently become one of the most active areas of naval architecture. Increased speed makes these vehicles more efficient and useful for military, economic, or leisure purposes. The planing hull is designed specifically to achieve relatively high speed on the surface of the water, and speed on the water surface is closely related to the size of the vessel and the installed power. The Savitsky method was first presented in 1964 for application to non-monohedric hulls and to stepped hulls, and it is well known as a reliable point of comparison for CFD analysis of hull resistance. A computer program based on Savitsky's method has been developed using MATLAB, and the power of high-speed vessels has been computed in this research. The program first reads principal parameters such as displacement, LCG, speed, deadrise angle, and the inclination of the thrust line with respect to the keel line, and calculates the hull resistance using Savitsky's empirical planing equations. However, some functions used in the empirical equations are available only in graphical form, which is not suitable for automatic computation. We therefore used a digital plotting system to extract data from the nomograms, and the regression equations of those functions were derived using data from the different charts. As a result, the wetted length-beam ratio and trim angle can be determined directly from the input of the initial variables, which makes the power calculation automated without manual plotting of secondary variables such as p/b and other coefficients. Finally, the trim angle, mean wetted length-beam ratio, frictional coefficient, resistance, and power are computed and compared with Savitsky's results, and good agreement is observed.
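
A sketch of the chart-digitisation step described above: points read off a nomogram are fitted with a polynomial so the quantity can be evaluated programmatically. The sample points and polynomial order below are illustrative, not the paper's actual digitised data.

```python
import numpy as np

# Illustrative (x, y) pairs digitised from a chart; real values would come
# from the digital plotting system described in the paper.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([0.12, 0.35, 0.61, 0.83, 0.98, 1.05])

coeffs = np.polyfit(x, y, deg=3)          # regression equation replacing the chart
chart = np.poly1d(coeffs)

print("fitted coefficients:", np.round(coeffs, 4))
print("interpolated value at x = 1.75:", round(chart(1.75), 4))
```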

Keywords: nomogram, planing hull, principal parameters, regression

Procedia PDF Downloads 405
1007 Frontal Oscillatory Activity and Phase–Amplitude Coupling during Chan Meditation

Authors: Arthur C. Tsai, Chii-Shyang Kuo, Vincent S. C. Chien, Michelle Liou, Philip E. Cheng

Abstract:

Meditation enhances mental abilities and is an antidote to anxiety. However, very little is known about the brain mechanisms and cortico-subcortical interactions underlying meditation-induced anxiety relief. In this study, changes in phase-amplitude coupling (PAC), in which the amplitude of the beta frequency band is modulated in phase with the delta rhythm, were investigated after eight weeks of meditation training. The study hypothesized that, through concentrated but relaxed mental training, delta-beta coupling in the frontal regions is attenuated. The delta-beta coupling analysis was applied within and between maximally independent component sources returned by the extended infomax independent component analysis (ICA) algorithm on the continuous EEG data recorded during meditation. A unique meditative concentration task of relaxing body and mind was used with a constant level of moderate mental effort, so as to approach an 'emptiness' meditative state. A pre-test/post-test control group design was used. To evaluate cross-frequency phase-amplitude coupling of the component sources, the modulation index (MI) was estimated together with circular phase statistics. Our findings reveal significant delta-beta decoupling in a set of frontal regions bilaterally. In addition, the beta frequency band of the prefrontal component was amplitude-modulated in phase with the delta rhythm of the medial frontal component.
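
A compact sketch (an assumed implementation, not the authors' code) of one common PAC measure, the mean-vector-length modulation index, applied to a synthetic signal whose beta amplitude is locked to the delta phase; band limits and filter settings are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(1, 4), amp_band=(13, 30)):
    """Mean-vector-length MI: delta phase modulating beta amplitude."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

if __name__ == "__main__":
    fs = 250
    t = np.arange(0, 20, 1 / fs)
    delta = np.sin(2 * np.pi * 2 * t)
    beta = (1 + 0.8 * delta) * np.sin(2 * np.pi * 20 * t)   # beta amplitude locked to delta phase
    print(modulation_index(delta + beta + 0.1 * np.random.randn(t.size), fs))
```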

Keywords: phase-amplitude coupling, ICA, meditation, EEG

Procedia PDF Downloads 427
1006 Optimizing The Residential Design Process Using Automated Technologies

Authors: Martin Georgiev, Milena Nanova, Damyan Damov

Abstract:

Architects, engineers, and developers need to analyse and implement a wide spectrum of data in different formats, if they want to produce viable residential developments. Usually, this data comes from a number of different sources and is not well structured. The main objective of this research project is to provide parametric tools working with real geodesic data that can generate residential solutions. Various codes, regulations and design constraints are described by variables and prioritized. In this way, we establish a common workflow for architects, geodesists, and other professionals involved in the building and investment process. This collaborative medium ensures that the generated design variants conform to various requirements, contributing to a more streamlined and informed decision-making process. The quantification of distinctive characteristics inherent to typical residential structures allows a systematic evaluation of the generated variants, focusing on factors crucial to designers, such as daylight simulation, circulation analysis, space utilization, view orientation, etc. Integrating real geodesic data offers a holistic view of the built environment, enhancing the accuracy and relevance of the design solutions. The use of generative algorithms and parametric models offers high productivity and flexibility of the design variants. It can be implemented in more conventional CAD and BIM workflow. Experts from different specialties can join their efforts, sharing a common digital workspace. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the building investment during its entire lifecycle.

Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization

Procedia PDF Downloads 52
1005 Non-Invasive Pre-Implantation Genetic Assessment Using NGS in IVF Clinical Routine

Authors: Katalin Gombos, Bence Gálik, Krisztina Ildikó Kalács, Krisztina Gödöny, Ákos Várnagy, József Bódis, Attila Gyenesei, Gábor L. Kovács

Abstract:

Although non-invasive pre-implantation genetic testing for aneuploidy (NIPGT-A) is potentially appropriate for assessing the chromosomal ploidy of the embryo, its practical application in routine IVF centers has not begun in the absence of a recommendation. We developed a comprehensive workflow for a clinically applicable NIPGT-A strategy based on next-generation sequencing (NGS) technology. We performed MALBAC whole genome amplification and NGS on spent blastocyst culture media of Day 3 embryos fertilized with intra-cytoplasmic sperm injection (ICSI). Spent embryonic culture media of embryos with morphologically good quality scores were enrolled in further analysis, with blank culture media as background control. Chromosomal abnormalities were identified by an optimized bioinformatics pipeline applying a copy number variation (CNV) detection algorithm. We demonstrate a comprehensive workflow covering both wet- and dry-lab procedures supporting a clinically applicable NIPGT-A strategy. It can be carried out within 48 h, which is critical for same-cycle blastocyst transfer, and is also suitable for "freeze all" and "elective frozen embryo" strategies. The described integrated approach of non-invasive evaluation of the embryonic DNA content of the culture media can potentially supplement existing pre-implantation genetic screening methods.

Keywords: next generation sequencing, in vitro fertilization, embryo assessment, non-invasive pre-implantation genetic testing

Procedia PDF Downloads 156
1004 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs

Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino

Abstract:

Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish, and consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. have excellent resistance to low temperatures, so they can be found in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 species of bacteria have been demonstrated to be capable of entering this state. VBNC cells cannot grow in conventional culture media but are viable and maintain metabolic activity, which may constitute an unrecognized source of food contamination and infection. V. parahaemolyticus may also exist in the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods for detecting V. parahaemolyticus VBNC cells and to investigate their presence in frozen bivalve molluscs regularly placed on the market. Materials and Methods: Propidium monoazide (PMA) was combined with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific to V. parahaemolyticus, with a limit of detection (LOD) of 10-1 log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to qualitative (in alkaline phosphate buffer solution) and quantitative detection of V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl, incubated at 30°C for 24-48 hours. Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, exposed for 45 min under halogen lights (650 W). Total DNA was extracted from the cell suspension of the homogenate samples according to a boiling protocol. The real-time PCR was conducted with species-specific primers for V. parahaemolyticus and performed in a final volume of 20 µL, containing 10 µL of SYBR Green Mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H2O. The qPCR was carried out on a CFX96 Touch (Bio-Rad, USA). Results: All samples were negative for both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR allowed us to identify VBNC V. parahaemolyticus in 20.78% of the samples evaluated, with values between Log 10-1 and Log 10-3 CFU/g. Only clam samples were positive on PMA-qPCR detection. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used is applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR to identify VBNC forms, which are undetectable by classic microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only for bivalve molluscs but also for other seafood.
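
A sketch of the quantification step implied by the standard curve: fit Cq values against known log CFU/mL concentrations, then invert the fit to estimate the load in an unknown sample. The Cq values below are invented for illustration, not the study's calibration data.

```python
import numpy as np

# Illustrative calibration data: known concentrations (log CFU/mL) and the
# quantification cycles (Cq) they produced in a qPCR run.
log_cfu = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
cq = np.array([34.1, 30.8, 27.4, 24.0, 20.7, 17.3, 14.0, 10.6])

slope, intercept = np.polyfit(log_cfu, cq, 1)     # standard curve Cq = m*logN + b
r = np.corrcoef(log_cfu, cq)[0, 1]
efficiency = 10 ** (-1 / slope) - 1               # amplification efficiency

def quantify(sample_cq):
    """Estimate log CFU/mL of an unknown sample from its Cq value."""
    return (sample_cq - intercept) / slope

print(f"r = {r:.4f}, efficiency = {efficiency:.2%}")
print(f"sample at Cq 22.5 -> {quantify(22.5):.2f} log CFU/mL")
```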

Keywords: food safety, frozen bivalve molluscs, PMA dye, Real-time PCR, VBNC state, Vibrio parahaemolyticus

Procedia PDF Downloads 139
1003 Pharmacokinetics and Safety of Pacritinib in Patients with Hepatic Impairment and Healthy Volunteers

Authors: Suliman Al-Fayoumi, Sherri Amberg, Huafeng Zhou, Jack W. Singer, James P. Dean

Abstract:

Pacritinib is an oral kinase inhibitor with specificity for JAK2, FLT3, IRAK1, and CSF1R. In clinical studies, pacritinib was well tolerated with clinical activity in patients with myelofibrosis. The most frequent adverse events (AEs) observed with pacritinib are gastrointestinal (diarrhea, nausea, and vomiting; mostly grade 1-2 in severity) and typically resolve within 2 weeks. A human ADME mass balance study demonstrated that pacritinib is predominantly cleared via hepatic metabolism and biliary excretion (>85% of administered dose). The major hepatic metabolite identified, M1, is not thought to materially contribute to the pharmacological activity of pacritinib. Hepatic diseases are known to impair hepatic blood flow, drug-metabolizing enzymes, and biliary transport systems and may affect drug absorption, disposition, efficacy, and toxicity. This phase 1 study evaluated the pharmacokinetics (PK) and safety of pacritinib and the M1 metabolite in study subjects with mild, moderate, or severe hepatic impairment (HI) and matched healthy subjects with normal liver function to determine if pacritinib dosage adjustments are necessary for patients with varying degrees of hepatic insufficiency. Study participants (aged 18-85 y) were enrolled into 4 groups based on their degree of HI as defined by Child-Pugh Clinical Assessment Score: mild (n=8), moderate (n=8), severe (n=4), and healthy volunteers (n=8) matched for age, BMI, and sex. Individuals with concomitant renal dysfunction or progressive liver disease were excluded. A single 400 mg dose of pacritinib was administered to all participants. Blood samples were obtained for PK evaluation predose and at multiple time points postdose through 168 h. Key PK parameters evaluated included maximum plasma concentration (Cmax), time to Cmax (Tmax), area under the plasma concentration time curve (AUC) from hour zero to last measurable concentration (AUC0-t), AUC extrapolated to infinity (AUC0-∞), and apparent terminal elimination half-life (t1/2). Following treatment, pacritinib was quantifiable for all study participants at 1 h through 168 h postdose. Systemic pacritinib exposure was similar between healthy volunteers and individuals with mild HI. However, there was a significant difference between those with moderate and severe HI and healthy volunteers with respect to peak concentration (Cmax) and plasma exposure (AUC0-t, AUC0-∞). Mean Cmax decreased by 47% and 57% respectively in participants with moderate and severe HI vs matched healthy volunteers. Similarly, mean AUC0-t decreased by 36% and 45% and mean AUC0-∞ decreased by 46% and 48%, respectively in individuals with moderate and severe HI vs healthy volunteers. Mean t1/2 ranged from 51.5 to 74.9 h across all groups. The variability on exposure ranged from 17.8% to 51.8% across all groups. Systemic exposure of M1 was also significantly decreased in study participants with moderate or severe HI vs. healthy participants and individuals with mild HI. These changes were not significantly dissimilar from the inter-patient variability in these parameters observed in healthy volunteers. All AEs were grade 1-2 in severity. Diarrhea and headache were the only AEs reported in >1 participant (n=4 each). Based on these observations, it is unlikely that dosage adjustments would be warranted in patients with mild, moderate, or severe HI treated with pacritinib.
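
A sketch of how the key non-compartmental PK parameters named above (Cmax, Tmax, AUC0-t, AUC0-∞, t1/2) can be derived from a concentration-time profile; the sample data are invented for illustration and are not pacritinib results.

```python
import numpy as np

# Illustrative concentration-time profile (h, ng/mL); not study data.
t = np.array([0, 1, 2, 4, 8, 24, 48, 96, 168], dtype=float)
c = np.array([0, 40, 95, 160, 140, 90, 55, 20, 6], dtype=float)

cmax, tmax = c.max(), t[c.argmax()]
auc_0_t = float(np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2))   # linear trapezoidal rule

# Terminal elimination rate (lambda_z) from the last log-linear points.
term = slice(-4, None)
lambda_z = -np.polyfit(t[term], np.log(c[term]), 1)[0]
t_half = np.log(2) / lambda_z
auc_0_inf = auc_0_t + c[-1] / lambda_z                        # extrapolated tail

print(f"Cmax={cmax:.0f} ng/mL at Tmax={tmax:.0f} h")
print(f"AUC0-t={auc_0_t:.0f}, AUC0-inf={auc_0_inf:.0f}, t1/2={t_half:.1f} h")
```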

Keywords: pacritinib, myelofibrosis, hepatic impairment, pharmacokinetics

Procedia PDF Downloads 298
1002 Singular Perturbed Vector Field Method Applied to the Problem of Thermal Explosion of Polydisperse Fuel Spray

Authors: Ophir Nave

Abstract:

In our research, we present the concept of the singularly perturbed vector field (SPVF) method and its application to the thermal explosion of diesel spray combustion. Given a system of governing equations containing hidden multi-scale variables, the SPVF method transforms and decomposes the system into fast and slow singularly perturbed subsystems (SPS). The SPVF method enables us to understand the complex system and simplify the calculations. Powerful analytical, numerical, and asymptotic methods (e.g., the method of integral (invariant) manifold (MIM) and the homotopy analysis method (HAM)) can then be applied to each subsystem. We compare the results obtained by the method of integral invariant manifold and by SPVF applied to a spray droplet combustion model. The research deals with the development of an innovative method for extracting fast and slow variables in physical-mathematical models. The method we developed, called the singularly perturbed vector field method, is based on a numerical algorithm for global quasi-linearization applied to a given physical model. The SPVF method has been applied successfully to combustion processes, and our results were compared with experimental results. SPVF is a general numerical and asymptotic method that reveals the hierarchy (multi-scale structure) of a given system.

Keywords: polydisperse spray, model reduction, asymptotic analysis, multi-scale systems

Procedia PDF Downloads 219
1001 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper aims to expand our understanding of the effects of hypoxia training on our bodies, to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots that allows them to recognize their early hypoxia signs and symptoms, and it was undertaken here by Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a SAC group before, during, and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering, and Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude of up to 22,000 ft (FL220), or 6,705 meters, where heart rate (HR), breathing rate (BR), and core temperature (Tc) were monitored with a chest strap sensor before and after HH exposure. A pulse oximeter registered levels of oxygen saturation (SpO2) and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed us to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, consisting of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed that HR and BR showed no significant differences between pre- and post-HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180, and all participants achieved full recovery of HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task focusing as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a group of heterogeneous, non-pilot individuals showed similar results to previous studies in homogeneous populations of pilots.
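
A sketch of the curve-fitting approach described above, fitting a sixth-order polynomial to SpO2 during exposure and a lower-order polynomial during recovery; the sample readings below are invented, not subject data.

```python
import numpy as np

# Illustrative SpO2 readings (%) sampled once per minute; not study data.
t_exp = np.arange(0, 15)                      # minutes at altitude
spo2_exp = np.array([98, 97, 96, 94, 92, 90, 88, 87, 86, 85, 85, 84, 84, 83, 83])
t_rec = np.arange(0, 6)                       # minutes after donning the O2 mask
spo2_rec = np.array([83, 90, 95, 97, 98, 98])

exposure_fit = np.poly1d(np.polyfit(t_exp, spo2_exp, 6))   # sixth-order, as in the study
recovery_fit = np.poly1d(np.polyfit(t_rec, spo2_rec, 4))   # fourth/fifth-order

print("predicted SpO2 at minute 10 of exposure:", round(exposure_fit(10), 1))
print("predicted SpO2 at minute 2 of recovery:", round(recovery_fit(2), 1))
```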

Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin

Procedia PDF Downloads 116
1000 Interpretations of Disaster: A Comparative Study on Disaster Film Cycles

Authors: Chi-Ying Yu

Abstract:

In real life, the occurrence of disasters is always dreadful and heartbreaking, yet paradoxically, the disaster film is a genre that has been popular at periodic intervals in motion picture history. This study attempts to compare the disaster film cycles of the 1970s, the 1990s, and the early 21st century. Two research questions are addressed: first, how has this genre responded to the existing conditions of society in different periods in terms of the disaster proposition? Second, how does this genre reflect a certain eternal substance of the human mind in light of its lasting appeal? Through cinematic textual analysis and literature review, this study finds that the emergence of disaster films in the 1970s reflected the turmoil in international relations and the domestic political situation in contemporary American society, and cinema screens showed disaster stories such as shipwrecks, air accidents, and skyscraper blazes caused by human negligence. The 1990s saw the fervor of millennial apocalypse legends and the awakening of environmental consciousness, which, together with rapid advances in digital technology, once again gave rise to a frenzy of disaster films, with natural disasters and threats from aliens as the major disaster themes. Since the beginning of the 21st century, the 911 Incident and natural disasters around the world have generated a consciousness of imminent crisis. Cinematic images simulated actual disasters, while aesthetic techniques focused on creating a kind of 'empathetic' experience in their exploration of the essence of the disaster experience. At the same time, post-apocalypse films that focus on post-disaster reconstruction have become an even more popular theme. Taking the approach of Jungian/post-Jungian film studies, this study also reviews and interprets the subliminal feelings commonly exhibited in the disaster films of the three different periods. The imagination of disaster seems to serve as an underlying state of the human mind.

Keywords: disaster film, Jungian/post-Jungian film studies, simulation, sublime

Procedia PDF Downloads 263
999 Legal Personality and Responsibility of Robots

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

The arrival of artificial intelligence and smart robots in the modern world places them in charge of precise and high-risk tasks, so carrying out human activities with robots raises questions of criminal and civil responsibility for their acts or behavior. The practical use of smart robots has placed them in a unique situation in which naturalization occurs and smart robots are identified as members of society. Adopting these new smart citizens would create several legal situations. The first concerns the legal responsibility of robots. Recognizing the naturalization of robots involves some basic rights, so the human rights of employment, property, housing, energy use, and others may be extended to robots. How would these rights be exercised in society, and if problems arise with them, how would civil responsibility and punishment be handled? Should robots be considered part of the population and counted in social programs? The second issue concerns the criminal responsibility of robots performing important activities in place of humans, which is the very aim of building robots with AI technology; the problem arises when accidents are caused by robots in charge of important activities such as the military, surgery, transport, judgement, and so on. Moreover, recognizing an independent identity for robots in the legal world through registered ID cards, naturalization, and civil rights prepares the same rights and obligations as for humans. Civil responsibility is therefore unavoidable, and if a robot commits a crime, it would bear criminal responsibility and would have to be punished. The basic components of criminal responsibility may change in such situations: for example, if criminal responsibility for humans is bound to sanity, maturity, and voluntariness, for robots it would be bound to being intelligent, well programmed, not hacked, and so on. It is therefore irrational to punish robots with imprisonment, execution, or other corporal punishments designed for humans; instead, we may devise digital punishments such as modifying or repairing programs, exchanging parts of the robot's body, or decommissioning it completely. Finally, the responsibility of smart robot creators, programmers, chief executives, the organizations that employ robots, and the governments that permit the use of robots in important facilities and activities is analyzed and investigated in this article.

Keywords: robot, artificial intelligence, personality, responsibility

Procedia PDF Downloads 147
998 An Investigation of the Relationship Between Privacy Crisis, Public Discourse on Privacy, and Key Performance Indicators at Facebook (2004–2021)

Authors: Prajwal Eachempati, Laurent Muzellec, Ashish Kumar Jha

Abstract:

We use Facebook as a case study to investigate the complex relationship between the firm's public discourse (and actions) surrounding data privacy and the performance of a business model based on monetizing users' data. We do so by looking at the evolution of public discourse over time (2004–2021) and relating topics to revenue and stock market evolution, drawing from archival sources such as Zuckerberg's public statements. We use the LDA topic modelling algorithm to reveal 19 topics regrouped into 6 major themes. We first show how, by using persuasive and convincing language that promises better protection of consumer data usage but also emphasizes greater user control over their own data, the privacy issue is being reframed as one of greater user control and responsibility. Second, we aim to understand and put a value on the extent to which privacy disclosures have a potential impact on the financial performance of social media firms. We found a significant relationship between the topics pertaining to privacy and social media/technology, the sentiment score, and stock market prices. Revenue is found to be impacted by topics pertaining to politics and new product and service innovations, while the number of active users is not impacted by the topics unless moderated by external control variables such as Return on Assets and Brand Equity.
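
A minimal sketch of an LDA topic-modelling pass of the kind described, using scikit-learn on a few stand-in documents; the corpus, topic count, and vocabulary settings are illustrative (the study extracted 19 topics from 2004-2021 archival text).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # stand-in snippets; the study used archival Facebook-related text
    "we give people control over their data and privacy settings",
    "new advertising products drive revenue growth this quarter",
    "regulators question how user data is shared with third parties",
    "our community grows as we connect friends and family worldwide",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)  # study used 19 topics
lda.fit(dtm)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```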

Keywords: public discourses, data protection, social media, privacy, topic modeling, business models, financial performance

Procedia PDF Downloads 92
997 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number-average molecular weight, and weight-average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 304
996 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the functional ability of the meniscus and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive methods. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus under normal and injured conditions is carried out using FE analyses. First, an FE model of the human knee joint in the normal, 'intact' state was constructed from magnetic resonance (MR) tomography images using the image construction code Materialise Mimics. Next, two types of meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using stress-strain curves obtained from our compressive and tensile tests. The numerical results under a normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its location varied between the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value for pathological change for diagnostic purposes. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury, and the following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage, and meniscus was constructed based on MR images of the human knee joint, using the image processing code Materialise Mimics and tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model, and the material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other and higher values than the intact one, and both meniscal tears induce stress localization in the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to be a new diagnostic system for evaluating meniscal damage to the articular cartilage through mechanical functional assessment.
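
The viscoelastic law above is formulated with a generalized Kelvin model; below is a small sketch of the corresponding Prony-series relaxation function and a least-squares fit to invented stress-relaxation data, not the authors' identified material constants.

```python
import numpy as np
from scipy.optimize import curve_fit

def prony_relaxation(t, g_inf, g1, tau1, g2, tau2):
    """Two-term generalized Kelvin (Prony series) relaxation modulus."""
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

# Invented stress-relaxation data (time in s, modulus in MPa).
t = np.linspace(0, 100, 40)
g_meas = prony_relaxation(t, 0.5, 0.8, 2.0, 0.4, 30.0) \
    + 0.01 * np.random.default_rng(1).normal(size=t.size)

# Identify the material constants by curve fitting, as described in conclusion 2.
params, _ = curve_fit(prony_relaxation, t, g_meas, p0=[0.4, 1.0, 1.0, 0.5, 20.0])
print("identified (G_inf, G1, tau1, G2, tau2):", np.round(params, 3))
```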

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 246
995 Biodegradable Cross-Linked Composite Hydrogels Enriched with Small Molecule for Osteochondral Regeneration

Authors: Elena I. Oprita, Oana Craciunescu, Rodica Tatia, Teodora Ciucan, Reka Barabas, Orsolya Raduly, Anca Oancea

Abstract:

Healing of osteochondral defects requires repair of the damaged articular cartilage, the underlying subchondral bone and the interface between these tissues (the functional calcified layer). For this purpose, developing a single monophasic scaffold that can regenerate two specific lineages (cartilage and bone) becomes a challenge. The aim of this work was to develop variants of biodegradable cross-linked composite hydrogel based on natural polypeptides (gelatin), polysaccharides components (chondroitin-4-sulphate and hyaluronic acid), in a ratio of 2:0.08:0.02 (w/w/w) and mixed with Si-hydroxyapatite (Si-Hap), in two ratios of 1:1 and 2:1 (w/w). Si-Hap was synthesized and characterized as a better alternative to conventional Hap. Subsequently, both composite hydrogel variants were cross-linked with (N, N-(3-dimethylaminopropyl)-N-ethyl carbodiimide (EDC) and enriched with a small bioactive molecule (icariin). The small molecule icariin (Ica) (C33H40O15) is the main active constituent (flavonoid) of Herba epimedium used in traditional Chinese medicine to cure bone- and cartilage-related disorders. Ica enhances osteogenic and chondrogenic differentiation of bone marrow mesenchymal stem cells (BMSCs), facilitates matrix calcification and increases the specific extracellular matrix (ECM) components synthesis by chondrocytes. Afterward, the composite hydrogels were characterized for their physicochemical properties in terms of the enzymatic biodegradation in the presence of type I collagenase and trypsin, the swelling capacity and the degree of crosslinking (TNBS assay). The cumulative release of Ica and real-time concentration were quantified at predetermined periods of time, according to the standard curve of standard Ica, after hydrogels incubation in saline buffer at physiological parameters. The obtained cross-linked composite hydrogels enriched with small-molecule Ica were also characterized for morphology by scanning electron microscopy (SEM). Their cytocompatibility was evaluated according to EN ISO 10993-5:2009 standard for medical device testing. Thus, analyses regarding cell viability (Live/Dead assay), cell proliferation (Neutral Red assay) and cell adhesion to composite hydrogels (SEM) were performed using NCTC clone L929 cell line. The final results showed that both cross-linked composite hydrogel variants enriched with Ica presented optimal physicochemical, structural and biological properties to be used as a natural scaffold able to repair osteochondral defects. The data did not reveal any toxicity of composite hydrogels in NCTC stabilized cell lines within the tested range of concentrations. Moreover, cells were capable of spreading and proliferating on both composite hydrogel surfaces. In conclusion, the designed biodegradable cross-linked composites enriched with Si and Ica are recommended for further testing as natural temporary scaffolds, which can allow cell migration and synthesis of new extracellular matrix within osteochondral defects.

Keywords: composites, gelatin, osteochondral defect, small molecule

Procedia PDF Downloads 174
994 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorism and extremism activities on the internet are becoming among the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. Achieving these measures requires intelligence gathering via the internet, including real-time monitoring of potential websites used for recruitment and information dissemination, among other operations by extremist groups. However, with billions of active webpages, real-time monitoring of all webpages becomes almost impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7500 webpages obtained through the TENE-webcrawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, 'pro-extremist', 'anti-extremist' and 'neutral', with 2500 webpages in each category. A supervised learning algorithm was then applied to the classified dataset in order to build the model. The results obtained were compared with an existing classification method using prediction accuracy and runtime. It was observed that our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 278
993 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: our findings include prediction models and show some non-intuitive results, such as the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of data input scale on costs, although it notably impacts the execution time.

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 120
992 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, which interestingly extends to their lexical and syntactic organization levels. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of eigendecomposition to non-square matrices, used here to reduce multi-attribute hand gesture data to feature vectors; SVD optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate Type-2 fuzzy operators, which permits us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and the fuzzy uncertainties existing universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals and achieve better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
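
A sketch of the SVD-based feature extraction described above: the leading singular values of a grayscale hand-image matrix form a compact feature vector that can serve as an HMM observable. The image size and feature length are illustrative assumptions.

```python
import numpy as np

def svd_features(image, k=10):
    """Return the k largest singular values of a grayscale image, normalised, as a feature vector."""
    s = np.linalg.svd(image, compute_uv=False)      # singular values, descending
    return s[:k] / s[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hand_image = rng.random((64, 64))               # stand-in for a segmented hand image
    print(np.round(svd_features(hand_image), 4))
```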

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 462
991 Smart Water Cities for a Sustainable Future: Defining, Necessity, and Policy Pathways for Canada's Urban Water Resilience

Authors: Sima Saadi, Carolyn Johns

Abstract:

The concept of a "Smart Water City" is emerging as a framework to address critical urban water challenges, integrating technology, data, and sustainable management practices to enhance water quality, conservation, and accessibility. This paper explores the definition of a Smart Water City, examines the pressing need for such cities in Canada, and proposes policy pathways for their development. Smart Water Cities utilize advanced monitoring systems, data analytics, and integrated water resources management to optimize water usage, anticipate and mitigate environmental impacts, and engage citizens in sustainable practices. Global examples from regions such as Europe, Asia, and Australia illustrate how Smart Water City models can transform urban water systems by enhancing resilience, improving resource efficiency, and driving economic development through job creation in environmental technology sectors. For Canada, adopting Smart Water City principles could address pressing challenges, including climate-induced water stress, aging infrastructure, and the need for equitable water access across diverse urban and rural communities. Building on Canada's existing water policies and technological expertise, this paper proposes strategic investments in digital water infrastructure, data-driven governance, and community partnerships. Through case studies, it offers insights into how Canadian cities could benefit from cross-sector collaboration, policy development, and funding for smart water technology. By aligning national policy with smart urban water solutions, Canada has the potential to lead globally in sustainable water management, ensuring long-term water security and environmental stewardship for its cities and communities.

Keywords: smart water city, urban water resilience, water management technology, sustainable water infrastructure, canada water policy, smart city initiatives

Procedia PDF Downloads 9
990 Rutin C Improves Osseointegration of Dental Implants and Healing of Soft Tissue

Authors: Noha Mohammed Ismael Awad Eladal, Aala Shoukry Emara

Abstract:

Background: Wound healing after dental implant surgery is critical to the procedure's success. The aim of this study was to explore the effects of rutin + vitamin C supplementation on wound healing following the placement of dental implants. Methodology: Twenty participants who needed dental implants to replace missing teeth were enrolled in this randomized controlled clinical trial. Patients were divided into two groups: group A received dental implants only, while group B received dental implants with rutin + vitamin C administration. Follow-up appointments were performed on days 3, 7, and 14 post-surgery, during which soft tissue healing and pain scores were evaluated using the visual analog scale. Postoperative digital panoramas were taken immediately after surgery and at 3 and 6 months postoperatively. Changes in bone density along the bone-implant interface at the mesial, distal, and apical sides were assessed using Digora software. Results: An independent t-test was used to compare the means of variables between the two groups, while repeated-measures ANOVA was used to compare bone density within the same group at different time points. Significant increases were observed at the mesial, distal, and apical sides surrounding the implants of both groups over time; however, the rate of increase was significantly higher in group B. The mean difference at the mesial side after 6 months was 21.99 ± 5.48 in group B and 14.21 ± 4.95 in group A; at the distal side it was 21.74 ± 3.56 in group B and 10.78 ± 3.90 in group A; and at the apical side it was 18.90 ± 5.91 in group B and 10.39 ± 3.49 in group A. Significance was recorded at P = 0.004, P = 0.0001, and P = 0.001 at the mesial, distal, and apical sides, respectively. The mean pain and wound healing scores were significantly higher in group A than in group B. Conclusion: The rutin + vitamin C group showed significantly promoted bone healing, a faster osseointegration process, and improved soft tissue healing.
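
A small illustrative sketch (not part of the study) of the independent-samples t-test comparison described above; the bone-density readings below are hypothetical placeholders, not the reported data.

    # Hedged sketch: compare mean bone-density change between two implant groups
    # with an independent-samples t-test.
    from scipy import stats

    # Hypothetical 6-month mesial bone-density changes (arbitrary units)
    group_a = [14.5, 13.2, 15.8, 12.9, 14.0, 16.1, 13.7, 15.0, 14.8, 12.5]
    group_b = [22.1, 20.8, 23.5, 21.0, 22.9, 20.2, 23.1, 21.7, 22.4, 21.3]

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
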

Keywords: osseointegration, soft tissue, rutin c, dental implant

Procedia PDF Downloads 149
989 The Colouration of Additive-Manufactured Polymer

Authors: Abisuga Oluwayemisi Adebola, Kerri Akiwowo, Deon de Beer, Kobus Van Der Walt

Abstract:

The convergence of additive manufacturing (AM) and traditional textile dyeing techniques has opened innovative possibilities for improving the visual appeal and customization potential of 3D-printed polymer objects. Over centuries, textile dyeing techniques have evolved to transform fabrics with vibrant colours and complex patterns. The layer-by-layer deposition characteristic of AM necessitates adaptations in dye application methods to ensure even colour penetration across complex surfaces. Compatibility between dye formulations and polymer matrices influences colour uptake and stability, demanding careful selection and testing of dyes for optimal results. This study investigates the developing interaction between these two areas, revealing the challenges and opportunities of applying textile dyeing methods to colour 3D-printed polymer materials. The experimental work covers (a) additive manufacturing of a prototype, (b) colouration by the traditional dyebath method, and (c) colouration by the contemporary digital sublimation technique. The results show that the layer lines inherent to AM interact with dyes differently than traditional textile fibers do, affecting the visual outcome. Careful selection of dyeing method and dye type in this research reduced the appearance of these lines and achieved consistent, desirable colour outcomes. In conclusion, integrating textile dyeing techniques into the colouring of 3D-printed polymer materials connects historical craftsmanship with innovative manufacturing. Overcoming challenges of colour distribution, compatibility, and layer line management requires a holistic approach that blends the technical consistency of AM with the artistic sensitivity of textile dyeing. Hence, applying textile dyeing methods to 3D-printed polymers opens new dimensions of aesthetic and functional possibilities.

Keywords: polymer, 3D-printing, sublimation, textile, dyeing, additive manufacturing

Procedia PDF Downloads 67
988 Designing and Prototyping Permanent Magnet Generators for Wind Energy

Authors: T. Asefi, J. Faiz, M. A. Khan

Abstract:

This paper introduces dual-rotor axial-flux machines with surface-mounted and spoke-type ferrite permanent magnets and concentrated windings as alternatives to a generator with surface-mounted Nd-Fe-B magnets. The output power, voltage, speed, and air gap clearance for all the generators are identical. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. Finite element analysis (FEA) is applied to predict the performance, emf, developed torque, cogging torque, no-load losses, leakage flux, and efficiency of both ferrite generators and of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole-arc to pole-pitch ratios are investigated by means of 3D FEA. It was found that the surface-mounted ferrite topology is unable to develop the nominal electromagnetic torque, exhibits higher torque ripple, and is heavier than the spoke-type machine. Furthermore, it was shown that the spoke-type ferrite permanent magnet generator has favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified against experimental results.
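
The abstract does not specify which population-based optimizer is used; as one hedged illustration of mass minimization under an efficiency constraint, the sketch below uses differential evolution with a toy two-variable mass model and a penalty term. The objective, design variables, coefficients, and bounds are invented for illustration only.

    # Hedged sketch: population-based minimization of generator active mass
    # subject to an efficiency constraint, using differential evolution.
    from scipy.optimize import differential_evolution

    def active_mass(x):
        """Toy mass model: x = (magnet_thickness_mm, stator_stack_length_mm)."""
        magnet_t, stack_len = x
        return 0.8 * magnet_t + 0.05 * stack_len  # kg, purely illustrative

    def efficiency(x):
        """Toy efficiency model rising with magnet thickness and stack length."""
        magnet_t, stack_len = x
        return 0.80 + 0.004 * magnet_t + 0.0004 * stack_len

    def objective(x, eta_target=0.88):
        # Penalise designs that fall short of the target efficiency
        penalty = 1e3 * max(0.0, eta_target - efficiency(x)) ** 2
        return active_mass(x) + penalty

    bounds = [(2.0, 12.0), (20.0, 120.0)]  # mm ranges, illustrative only
    result = differential_evolution(objective, bounds, seed=1)
    print(result.x, result.fun)  # lightest design meeting the efficiency target

In the actual study, the objective would be evaluated from the machine design equations (and verified by FEA) rather than from closed-form toy models.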

Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms

Procedia PDF Downloads 151
987 Exploring Affordable Care Practs in Nigeria’s Health Insurance Discourse

Authors: Emmanuel Chinaguh, Kehinde Adeosun

Abstract:

Nigerians die prematurely: life expectancy is 55.75 years, 17.45 years below the world average of 73.2 (Worldometer, 2020). This is due, among other factors, to the country's limited access to high-quality healthcare. To increase access to good and affordable healthcare services, the National Health Insurance Authority (NHIA) Bill 2022 – which repealed the National Health Insurance Scheme Act 2004 – was passed into law. Applying Jacob Mey's (2001) pragmatic act (pract) theory, this study explores how the NHIA seeks to actualise these healthcare goals by characterising the general situational prototypes, or pragmemes, and the pragmatic acts in institutional communications. Data were sourced from the NHIA operational guidelines (147 pages in four sections) and from posters shared on the NHIA Nigeria Twitter handle, which has 14,200 followers. Digital humanities tools such as AntConc and Voyant were used for text encoding and data visualisation. This study identifies the following discourse tokens in the data: advertisement and programmes, standards and accreditation, records and information, and offences and penalties. Advertisement and programmes pract facilitating, propagating, prospecting, advising, and informing; standards and accreditation, and records and information, pract stating, informing, and instructing; and offences and penalties pract stating and sanctioning. These practs combine to advance the goals of affordable care and universal access to quality healthcare services. The pragmatic acts are marked by these pragmatic tools: shared situational knowledge (SSK), relevance (REL), reference (REF), and inference (INF). This paper adds to the understanding of health insurance discourse in Nigeria as a mediated social practice that promotes the health of Nigerians.

Keywords: affordable care, NHIA, Nigeria’s health insurance discourse, pragmatic acts

Procedia PDF Downloads 85