Search results for: frequency oscillation damping (FOD)
324 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine
Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi
Abstract:
Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique that can measure blood flow velocity. Images can be obtained from the common carotid artery, one of the main vessels supplying blood to vital organs. In horses, factors such as susceptibility to depression of the cardiovascular system and their large muscular mass render them vulnerable to changes in blood velocity. One of the most important factors causing blood velocity changes is the administration of anesthetic drugs, including ketamine and acepromazine. Thus, in this study, the PW Doppler technique was used to assess the maximum blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a PW Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz), without administration of any sedative drugs, to serve as the control. The same procedure was repeated after each individual received the following medications: 1.1 and 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 and 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was done five (T5) and fifteen (T15) minutes after injecting each dose intravenously. Statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant. Five minutes after administration of ketamine (1.1 and 2.2 mg/kg), the blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females, in comparison to the control group (39.59 and 40.39 cm/s in males and females, respectively), while administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). Thus, the most drastic change in blood velocity, regardless of sex, was produced by the latter dose/drug. For both drugs and in both sexes, the higher dose produced a lower blood velocity than the lower dose of the same drug. In all experiments in this study, the blood velocity approached its normal value at T15. In another study comparing the blood velocity changes produced by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; in this experiment, however, the maximum blood velocity was observed following administration of acepromazine via the common carotid artery. Therefore, further experiments with the same medications are suggested, using PW Doppler to measure the blood velocity changes in both the femoral and common carotid arteries simultaneously.
Keywords: acepromazine, common carotid artery, horse, ketamine, pulsed-wave Doppler ultrasonography
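As an illustration of the statistical comparison described above, here is a minimal sketch. The abstract names only SPSS and a p < 0.05 threshold, so the choice of a paired t-test and the per-horse values below are assumptions, not the study's stated analysis:

```python
# Illustrative sketch: comparing post-ketamine carotid velocities against
# baseline with a paired t-test. The abstract specifies SPSS and p < 0.05;
# the choice of test and the example values below are assumptions.
import numpy as np
from scipy import stats

# Hypothetical maximum velocities (cm/s) for six male horses
baseline = np.array([39.1, 40.2, 38.8, 40.0, 39.7, 39.8])      # control
ketamine_1_1 = np.array([38.2, 38.9, 37.9, 38.6, 38.5, 38.5])  # T5, 1.1 mg/kg

t_stat, p_value = stats.ttest_rel(baseline, ketamine_1_1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Velocity change is statistically significant at the 0.05 level.")
```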
Procedia PDF Downloads 128
323 Bi-objective Network Optimization in Disaster Relief Logistics
Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann
Abstract:
Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria like response time, demand fulfillment, and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and their magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need for robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies, extending the general form of a covering location problem. The proposed model aims to minimize the underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions that address the risk of disruptions. We provide an empirical case study of the public authorities’ emergency food storage strategy in Germany to illustrate the potential applicability of the model and provide implications for decision-makers in a real-world setting. We also conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory, based on minimizing costs and maximizing demand satisfaction. The strategy has potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities to determine an efficient stockpiling strategy and distribution network, and it provides recommendations for increased resilience. However, certain factors have yet to be considered in this study and should be addressed in future work, such as additional network constraints and heuristic algorithms.
Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks
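A minimal sketch of the kind of bi-objective covering-location model described above, scalarized with a weighted sum. The sets, costs, coverage data, and the PuLP formulation are illustrative assumptions; the study's actual model is scenario-based and considerably richer:

```python
# Minimal weighted-sum sketch of a bi-objective covering location problem:
# minimize depot-opening cost while maximizing covered demand.
# All data below are illustrative assumptions.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

depots = ["D1", "D2", "D3"]
zones = ["Z1", "Z2", "Z3", "Z4"]
open_cost = {"D1": 100, "D2": 80, "D3": 120}
demand = {"Z1": 50, "Z2": 30, "Z3": 40, "Z4": 60}
covers = {  # which demand zones each depot can reach in time
    "D1": ["Z1", "Z2"], "D2": ["Z2", "Z3"], "D3": ["Z3", "Z4"]}
w = 0.5  # weight trading off cost against coverage

x = LpVariable.dicts("open", depots, cat=LpBinary)
y = LpVariable.dicts("covered", zones, cat=LpBinary)

model = LpProblem("relief_network", LpMinimize)
# Weighted-sum scalarization: opening cost minus rewarded coverage
model += w * lpSum(open_cost[d] * x[d] for d in depots) \
       - (1 - w) * lpSum(demand[z] * y[z] for z in zones)
for z in zones:  # a zone counts as covered only if an open depot reaches it
    model += y[z] <= lpSum(x[d] for d in depots if z in covers[d])
model.solve()
print([d for d in depots if x[d].value() == 1])
```

Sweeping the weight w between 0 and 1 traces out an approximation of the Pareto front between the two objectives, which is one common way to present bi-objective trade-offs to decision-makers.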
Procedia PDF Downloads 79
322 Harmful Algal Poisoning Symptoms in Coastal Areas of Nigeria
Authors: Medina Kadiri
Abstract:
Nigeria has an extensive coastline, 853 km long, between latitudes 4°10′ and 6°20′ N and longitudes 2°45′ and 8°35′ E, situated in the Gulf of Guinea within the Guinea Current Large Marine Ecosystem. A substantial coastal community relies on this region for its livelihood of fishing, aquaculture, and mariculture, producing various seafoods for consumption, economic sustenance, or both. A socio-economic study was conducted, using questionnaires and interviews, to investigate the health symptoms of harmful algal poisoning experienced by these communities on consumption of seafood. Eighteen symptoms were recorded. Of the respondents who experienced symptoms after consumption of seafood, overall, more people (33.5%) experienced vomiting, followed by nausea (14.03%) and then diarrhea (13.57%). Others were headache (9.95%), mouth tingling (8.6%), and tiredness (7.24%). The least reported were muscle pain, rashes, confusion, chills, burning sensation, breathing difficulty, and balance difficulty, which represented 0.45% each; the rest (dizziness, digestive tract tumors, itching, memory loss, and stomach pain) were less than 3% each. In terms of frequency, the most frequent symptom was diarrhea with 87.5% occurrence, closely followed by vomiting with 81.3%. Tiredness was 75%, nausea 62.5%, and headache 50%. Others, such as dizziness, itching, memory loss, mouth tingling, and stomach pain, had occurrences of about 40% or less. The least occurring symptoms were muscle pain, rashes, confusion, chills, balance difficulty, and burning sensation, each occurring only once, i.e., 6.3%; breathing difficulty was second to last with 12.5%. Analysis of visible symptoms by the particular seafood consumed shows that 3.5% of all respondents who ate crab experienced symptoms ranging from vomiting (2.4%) to itching (0.5%) and headache (0.4%). For periwinkle, vomiting accounted for 1.7%, diarrhea 1.2%, and nausea 0.8% of all respondents who ate periwinkle. Among respondents who consumed fish, 0.4% had itching. Among respondents who preferred shrimps/crayfish, crab, and periwinkle, the most common illness was tiredness (1.2%), while 0.5% experienced diarrhea, among others. For the majority of respondents (55.7%) who claimed no preference for any particular seafood, vomiting was the highest reported symptom (6.1%), followed closely by mouth tingling/burning sensation (5.8%). Examining the seasonal influence on visible symptoms revealed that vomiting occurred most in January (5.5%), while headache and itching were predominant in October (2.8%). Nausea peaked in January (3.1%), and 2.6% of all respondents reported diarrhea in October, more than at any other time of the year. Regular evaluation of harmful algal poisoning symptoms is recommended for coastal communities.
Keywords: coastal, harmful algae, human poisoning symptoms, Nigeria, phycotoxins
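A small sketch of the descriptive-statistics step underlying the percentages above: tallying symptom reports into counts and shares. The response data here are hypothetical; only the method (frequency and percentage of reported symptoms) is taken from the study:

```python
# Illustrative sketch: symptom frequencies and percentages from survey
# responses. The list of reports below is hypothetical.
from collections import Counter

reports = ["vomiting", "nausea", "vomiting", "diarrhea", "headache",
           "vomiting", "tiredness", "nausea", "diarrhea", "vomiting"]
counts = Counter(reports)
total = sum(counts.values())
for symptom, n in counts.most_common():
    print(f"{symptom}: {n} ({100 * n / total:.1f}%)")
```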
Procedia PDF Downloads 286
321 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine and user preferences, as well as critical diagnostic data for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating range is between -70°C and 850°C, while home appliance applications require only 23°C to 500°C. To ensure operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistance value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography was performed with a direct laser writer, and the lift-off process followed e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point-probe sheet resistance measurements were done at room temperature. Resistivity measurements were done with a probe station before and after annealing at 600°C in a rapid thermal processing machine. Temperature dependence between 25 and 300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data in the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor.
Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
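The reason length, width, trim points, and deposition thickness are the natural design parameters is that a thin-film trace resistance follows R = ρL/(W·t): sheet resistance Rs = ρ/t times the number of squares L/W. A hedged sketch under simplifying assumptions (bulk platinum resistivity and the geometry values below are assumptions; real thin films have a higher effective resistivity and a calibrated temperature coefficient):

```python
# Sketch: trace resistance of a thin-film resistor from geometry,
# R = rho * L / (W * t) = (rho / t) * (L / W) = R_sheet * n_squares.
# Bulk resistivity is a simplifying assumption; real 280 nm films differ.
rho_pt = 1.06e-7       # bulk platinum resistivity, ohm*m (approximate)
t = 280e-9             # film thickness, m (280 nm Pt, as in the study)
width = 20e-6          # trace width, m (assumed design value)
length = 5.0e-3        # total meander length, m (assumed design value)

r_sheet = rho_pt / t                 # sheet resistance, ohm per square
n_squares = length / width           # number of squares in the trace
resistance = r_sheet * n_squares
print(f"R_sheet = {r_sheet:.3f} ohm/sq, R = {resistance:.1f} ohm")
```

With these assumed numbers the trace gives roughly 95 ohms, which illustrates why a meandered layout with trim points is needed to hit a target resistance at a fixed film thickness.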
Procedia PDF Downloads 73
320 Engineers 'Write' Job Description: Development of English for Specific Purposes (ESP)-Based Instructional Materials for Engineering Students
Authors: Marjorie Miguel
Abstract:
Globalization offers better career opportunities and hence demands more competent professionals. With the transformation of world industry from competition to collaboration, coupled with the rapid development in the field of science and technology, engineers need not only to be technically proficient but also multilingual-skilled: two characteristics that a global engineer possesses. English often serves as the global language between people from different cultures, being the medium most used in international business. Ironically, most universities worldwide adopt an engineering curriculum heavily built around the language of mathematics, not realizing that the goal of an engineer is not only to create and design but, more importantly, to promote his creations and designs to the general public through effective communication. This premise has led to developments in the teaching of English subjects at the tertiary level, which include the integration of technical knowledge related to the students' area of specialization into the English subjects that they are taking. This is also known as English for Specific Purposes. This study focused on the development of English for Specific Purposes-based instructional materials for engineering students of Bulacan State University (BulSU). The materials were tailor-made: their contents and structure were designed to meet the specific needs of the students as well as the industry. The study was descriptive in nature; a needs analysis determined the needs of the students and the industry. The major respondents included fifty engineering students and ten professional engineers from selected institutions. The needs analysis was conducted, and the results showed the common writing difficulties of the students and the writing skills needed by engineers in the industry. The topics in the instructional materials were established after the needs analysis was conducted. Simple statistical treatment, including frequency distribution, percentages, mean, standard deviation, and weighted mean, was used. The findings showed that the greatest number of respondents had an average proficiency rating in writing, and that the most-needed skills to be developed by the engineers are directly related to the preparation and presentation of technical reports about their projects, as well as to the different communications they transmit to their colleagues and superiors. The researcher undertook the following phases in the development of the instructional materials: a design phase, a development phase, and an evaluation phase. Evaluations given by college instructors attested to the usefulness and significance of the instructional materials, making the study beneficial not only as a career enhancer for BulSU engineering students but also in positioning the university as an educational institution ready for the new millennium.
Keywords: English for specific purposes, instructional materials, needs analysis, write (right) job description
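Of the statistics listed above, the weighted mean is the one most easily mis-computed, so here is a minimal sketch. The Likert scale and response counts are hypothetical; only the statistic itself comes from the study:

```python
# Sketch: weighted mean of Likert-scale responses, as used in the needs
# analysis. The scale labels and response counts below are hypothetical.
scale = [1, 2, 3, 4, 5]            # 1 = very poor ... 5 = excellent
counts = [2, 8, 21, 14, 5]         # number of respondents per scale point

weighted_mean = sum(s * c for s, c in zip(scale, counts)) / sum(counts)
print(f"Weighted mean proficiency rating: {weighted_mean:.2f}")
```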
Procedia PDF Downloads 239
319 The Social Aspects of Code-Switching in Online Interaction: The Case of Saudi Bilinguals
Authors: Shirin Alabdulqader
Abstract:
This research aims to investigate the concept of code-switching (CS) between English and Arabic and the CS practices of Saudi online users through a Translanguaging (TL) lens, for a more inclusive view of the nature of the data in the study. It employs Digitally Mediated Communication (DMC), specifically the WhatsApp and Twitter platforms, in order to understand how users employ online resources to communicate with others on a daily basis. The project looks beyond language and considers the multimodal affordances (visual and audio means) that interlocutors utilise in their online communicative practices to shape their online social existence. This exploratory study is based on a data-driven interpretivist epistemology, as it aims to understand how meaning (reality) is created by individuals within different contexts. The project used a mixed-method approach combining qualitative and quantitative strands: in the former, data were collected from online chats and interview responses, while in the latter, a questionnaire was employed to understand the frequency of, and relations between, the participants’ linguistic and non-linguistic practices and their social behaviours. The participants were eight bilingual Saudi nationals (both men and women, aged between 20 and 50) who interacted with others online. These participants provided their online interactions, participated in an interview, and responded to a questionnaire. The study data were gathered from 194 WhatsApp chats and 122 Tweets and were analysed and interpreted at three levels: conversational turn-taking and CS; the linguistic description of the data; and CS and persona. The project contributes to the emerging field of systematically analysing online Arabic data and to the fields of multimodality and bilingual sociolinguistics. The findings are reported for each of the three levels. For conversational turn-taking, the CS analysis revealed that CS was used to accomplish negotiation and develop meaning in the conversation. With regard to the linguistic practices in the CS data, the majority of code-switched words were content morphemes. The third level of interpretation concerns CS and its relationship with identity; two types of identity were indexed: absolute identity and contextual identity. The study contributes to the DMC literature and bridges some of the existing gaps. The findings, most if not all, support the notion of TL: that multiliteracy is one’s ability to decode multimodal communication and that this multimodality contributes to meaning. Whether this applies to the online affordances used by monolinguals or multilinguals, and is perceived not only by specific generations but by any online multiliterates, the study provides the linguistic features of CS utilised by Saudi bilinguals and determines the relationship between these features and the contexts in which they appear.
Keywords: social media, code-switching, translanguaging, online interaction, Saudi bilinguals
Procedia PDF Downloads 131
318 Blood Microbiome in Different Metabolic Types of Obesity
Authors: Irina M. Kolesnikova, Andrey M. Gaponov, Sergey A. Roumiantsev, Tatiana V. Grigoryeva, Dilyara R. Khusnutdinova, Dilyara R. Kamaldinova, Alexander V. Shestopalov
Abstract:
Background. Obese patients have unequal risks of metabolic disorders. It is customary to distinguish between metabolically healthy obesity (MHO) and metabolically unhealthy obesity (MUHO). MUHO patients have a high risk of metabolic disorders, insulin resistance, and diabetes mellitus. Among other things, the gut microbiota also contributes to the development of metabolic disorders in obesity. Obesity is accompanied by significant changes in the gut microbial community, and bacterial translocation from the intestine, in turn, is the basis for the formation of the blood microbiome. The aim was to study the features of the blood microbiome in patients with various metabolic types of obesity. Patients, materials, methods. The study included 116 healthy donors and 101 obese patients. Depending on the metabolic type of obesity, the obese patients were divided into subgroups with MHO (n=36) and MUHO (n=53). Quantitative and qualitative assessment of the blood microbiome was based on metagenomic analysis: blood samples were used to isolate DNA and sequence the variable v3-v4 region of the 16S rRNA gene. Alpha-diversity indices (Simpson index, Shannon index, Chao1 index, phylogenetic diversity, and the number of observed operational taxonomic units) were calculated. Moreover, we compared taxa (phyla, classes, orders, and families) between patient groups in terms of isolation frequency and each taxon's share in the total bacterial DNA pool. Results. In patients with MHO, the alpha-diversity characteristics of the blood microbiome were like those of healthy donors. However, MUHO was associated with an increase in all diversity indices. The main phyla of the blood microbiome were Bacteroidetes, Firmicutes, Proteobacteria, and Actinobacteria. Cyanobacteria, TM7, Thermi, Verrucomicrobia, Chloroflexi, Acidobacteria, Planctomycetes, Gemmatimonadetes, and Tenericutes were less significant phyla. The phyla Acidobacteria, TM7, and Verrucomicrobia were more often isolated in blood samples of patients with MUHO compared with healthy donors. Obese patients had a decrease in some taxonomic ranks (Bacilli, Caulobacteraceae, Barnesiellaceae, Rikenellaceae, Williamsiaceae). These changes appear to be related to the increased diversity of the blood microbiome observed in obesity. An increase of Lachnospiraceae, Succinivibrionaceae, Prevotellaceae, and S24-7 was noted for MUHO patients, which is apparently explained by an increase in intestinal permeability. Conclusion. The blood microbiome differs between obese patients and healthy donors at the class, order, and family levels. Moreover, the nature of the changes is determined by the metabolic type of obesity: MUHO is linked to increased diversity of the blood microbiome, which appears to be due to increased microbial translocation from the intestine and non-intestinal sources.
Keywords: blood microbiome, blood bacterial DNA, obesity, metabolically healthy obesity, metabolically unhealthy obesity
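The alpha-diversity indices named above have standard closed forms: Shannon H = −Σ pᵢ ln pᵢ, Gini-Simpson D = 1 − Σ pᵢ², and Chao1 = S_obs + F₁²/(2F₂), where F₁ and F₂ are singleton and doubleton counts. A hedged sketch with hypothetical OTU read counts:

```python
# Sketch: alpha-diversity indices from a vector of OTU read counts.
# The count data are hypothetical; the formulas are standard definitions.
import math

otu_counts = [120, 45, 30, 8, 5, 1, 1, 1, 2, 2]   # reads per OTU (assumed)
n = sum(otu_counts)
p = [c / n for c in otu_counts]

shannon = -sum(pi * math.log(pi) for pi in p)       # Shannon index
simpson = 1 - sum(pi ** 2 for pi in p)              # Gini-Simpson index
f1 = sum(1 for c in otu_counts if c == 1)           # singletons
f2 = sum(1 for c in otu_counts if c == 2)           # doubletons (guard f2 > 0)
chao1 = len(otu_counts) + f1 ** 2 / (2 * f2)        # Chao1 richness estimate
observed = len(otu_counts)                          # observed OTUs

print(f"OTUs={observed}, Shannon={shannon:.3f}, "
      f"Simpson={simpson:.3f}, Chao1={chao1:.2f}")
```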
Procedia PDF Downloads 163
317 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses
Authors: Orit Baruth, Anat Cohen
Abstract:
Online courses have become common in many learning programs and various learning environments, particularly in higher education. Social distancing forced in response to the COVID-19 pandemic has increased the demand for these courses. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners' diversity raises the question: does online learning, as currently offered, meet the needs of each learner? Fortunately, today's technology makes it possible to produce tailored learning platforms, namely, personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big Five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to tailor the online learning process to students' satisfaction. Satisfaction level and personality were measured for 108 students who participated in a fully online course at a large, accredited university. Cluster analysis methods (k-means) were applied to identify learner clusters according to their personality traits, and correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotic' cluster. Learners associated with the 'Conscientious' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extrovert' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable, or alternatively, they may be offered the TPLS they dislike least. The findings clearly indicate that personality plays a significant role in a learner's satisfaction level; consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area. Establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether they like online learning or not.
Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions
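A minimal sketch of the clustering step described above. The trait scores are synthetic; the only elements taken from the abstract are the Big Five inputs, the k-means method, and the four-cluster solution:

```python
# Sketch: k-means clustering of Big Five trait scores into four clusters,
# mirroring the study's approach. The score matrix below is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows = learners; columns = O, C, E, A, N scores (synthetic data)
rng = np.random.default_rng(0)
traits = rng.uniform(1, 5, size=(108, 5))

scaled = StandardScaler().fit_transform(traits)   # put traits on one scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)

labels = kmeans.labels_                # cluster assignment per learner
print(np.bincount(labels))             # learners per cluster
```

In practice, the cluster labels would then be cross-tabulated or correlated with the per-TPLS satisfaction scores, which is the second analysis step the abstract describes.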
Procedia PDF Downloads 103
316 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require location information about these devices to operate or to deliver better services to users. Although there are several ways of acquiring the location of mobile devices, the WLAN fingerprinting approach is considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe its location, such as longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping from such RSSI measurements to the localization parameters to be predicted, and that is capable of upgrading its performance as more experiential knowledge is acquired. The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted involve two different tasks: the longitude and latitude of mobile devices are real values (a regression problem), while the floor and building are integer or categorical values (a classification problem). This work presents artificial neural network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain their performance in terms of the achieved Mean Absolute Error (MAE) for the regression task and error rates for the classification task.
Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
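A hedged sketch of the two-headed prediction setup described above. Scikit-learn MLPs stand in for whatever network architecture the study actually used, and the fingerprint data are synthetic:

```python
# Sketch: RSSI fingerprints -> (longitude, latitude) by regression and
# floor by classification, with MAE and error rate as in the abstract.
# Synthetic data; sklearn MLPs stand in for the study's networks.
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.uniform(-100, -30, size=(500, 20))    # RSSI from 20 access points
coords = rng.uniform(0, 50, size=(500, 2))    # longitude, latitude targets
floors = rng.integers(0, 4, size=500)         # floor labels

reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                   random_state=0).fit(X[:400], coords[:400])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                    random_state=0).fit(X[:400], floors[:400])

mae = mean_absolute_error(coords[400:], reg.predict(X[400:]))
err_rate = 1 - clf.score(X[400:], floors[400:])
print(f"MAE = {mae:.2f}, floor error rate = {err_rate:.2%}")
```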
Procedia PDF Downloads 347
315 Ultrasonic Studies of Polyurea Elastomer Composites with Inorganic Nanoparticles
Authors: V. Samulionis, J. Banys, A. Sánchez-Ferrer
Abstract:
Inorganic nanoparticles are used for the fabrication of various polymer-based composites because they provide good homogeneity and solubility in the composite material. Multifunctional materials based on composites of a polymer containing inorganic nanotubes are expected to have a great impact on industrial applications in the future. An emerging family of such composites is polyurea elastomers with inorganic MoS2 nanotubes or MoSI nanowires. Polyurea elastomers are a new kind of material with higher performance than polyurethanes. The improvement in mechanical, chemical, and thermal properties is due to the presence of hydrogen bonds between the urea motifs, which can be erased at high temperature, softening the elastomeric network. Such materials combine amorphous polymers above the glass transition with crosslinkers that link the chains into a single macromolecule. Polyurea exhibits a phase-separated structure with rigid urea domains (hard domains) embedded in a matrix of flexible polymer chains (soft domains). The elastic properties of polyurea can be tuned over a broad range by varying the molecular weight of the components, the relative amount of hard and soft domains, and the concentration of nanoparticles. Ultrasonic methods, as non-destructive techniques, can be used for the characterization of elastomer composites. Accordingly, we studied the temperature dependencies of the longitudinal ultrasonic velocity and ultrasonic attenuation of these new polyurea elastomers and composites with inorganic nanoparticles. It was shown that in these polyurea elastomers a large ultrasonic attenuation peak and the corresponding velocity dispersion exist at 10 MHz below room temperature, and this behaviour is related to the glass transition Tg of the soft segments in the polymer matrix. The relaxation parameters and Tg depend on the segmental molecular weight of the polymer chains between crosslinking points, the nature of the crosslinkers in the network, and the content of MoS2 nanotubes or MoSI nanowires. An increase of ultrasonic velocity in composites modified by nanoparticles was observed, showing the reinforcement of the elastomer. In semicrystalline polyurea elastomer matrices, above the glass transition, a first-order phase transition from the quasi-crystalline to the amorphous state was observed; in this case, sharp ultrasonic velocity and attenuation anomalies appear near the transition temperature TC. The ultrasonic attenuation maximum related to the glass transition was reduced in quasi-crystalline polyureas, indicating less influence of the soft domains below TC. The first-order phase transition in semicrystalline polyurea elastomer samples has a large temperature hysteresis (> 10 K). The incorporation of inorganic MoS2 nanotubes resulted in a decrease of the first-order phase transition temperature in semicrystalline composites.
Keywords: inorganic nanotubes, polyurea elastomer composites, ultrasonic velocity, ultrasonic attenuation
Procedia PDF Downloads 300
314 Ethical Artificial Intelligence: An Exploratory Study of Guidelines
Authors: Ahmad Haidar
Abstract:
The rapid adoption of Artificial Intelligence (AI) technology holds unforeseen risks like privacy violation, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; in doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building Ethical AI linked to sustainability. This research adopts an explorative, more specifically an inductive, approach to address the theoretical gap. Consequently, the paper tracks the different efforts toward “trustworthy AI” and “ethical AI,” compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit; and second, testing the frequency of each principle, yielding the technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the sustainable development goals); the results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. The study's last contribution is to clarify how the principles evolved. To illustrate, in 2018, the Montreal Declaration mentioned principles including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies, and we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI
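A small sketch of the frequency test described above: tally how many of the reviewed documents mention each principle. The document-to-principle mapping below is hypothetical; only the counting method follows the paper:

```python
# Sketch: frequency of ethical-AI principles across guideline documents.
# The document contents are hypothetical; the method (count how many
# documents mention each principle) follows the paper's frequency test.
from collections import Counter

documents = {
    "doc_2018_montreal": {"well-being", "autonomy", "privacy", "equity"},
    "doc_2019_eu": {"transparency", "privacy", "fairness", "accountability"},
    "doc_2021_ec_proposal": {"risk management", "public participation",
                             "transparency", "accountability"},
}
freq = Counter(p for principles in documents.values() for p in principles)
for principle, n_docs in freq.most_common():
    print(f"{principle}: mentioned in {n_docs} of {len(documents)} documents")
```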
Procedia PDF Downloads 93
313 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study
Authors: Insiya Bhalloo
Abstract:
It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as through enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study seeks to investigate whether mere phonological experience (as indicated by the Arabic readers' experience with Arabic phonology and the sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants' non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., those individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n=40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words commonly found in Modern Standard Arabic (MSA) but either non-existent in or prevalent at a significantly lower frequency within the Quran. During the test phase, participants will be presented with both familiar (n=20; i.e., words presented during the exposure phase) and novel Arabic words (n=20; i.e., words not presented during the exposure phase). Half of the presented words will be common Quranic Arabic words and the other half common MSA words that are not Quranic words. Moreover, half of the Quranic Arabic and MSA words presented will be nouns and half will be verbs, thereby eliminating word-processing effects of lexical category. Participants will then determine whether they saw each word during the exposure phase. This study seeks to investigate whether long-term phonological memory, such as childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we seek to compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researcher's hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented. Moreover, it is anticipated that the non-native Arabic readers will report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby causing false phonological facilitatory effects.
Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition
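One standard way to analyze such old/new recognition data, given the hypothesis about accurate recall of familiar words alongside false alarms to novel Quranic words, is signal-detection sensitivity d′ = z(hit rate) − z(false-alarm rate). This is a hedged sketch: the abstract does not commit to this analysis, and the response counts are hypothetical:

```python
# Sketch: d-prime for an old/new recognition task. The abstract predicts
# more hits for familiar Quranic words and more false alarms to novel
# Quranic words; d' is one standard way to quantify that. The counts are
# hypothetical, and this analysis is an assumption, not the stated plan.
from scipy.stats import norm

hits, misses = 17, 3                 # responses to the 20 exposed words
false_alarms, correct_rej = 6, 14    # responses to the 20 novel words

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"hit rate={hit_rate:.2f}, FA rate={fa_rate:.2f}, d'={d_prime:.2f}")
```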
Procedia PDF Downloads 357
312 Empirical Testing of Hofstede’s Measures of National Culture: A Study in Four Countries
Authors: Nebojša Janićijević
Abstract:
At the end of the 1970s, the Dutch researcher Geert Hofstede conducted an enormous empirical study of the differences between national cultures. In it, he identified four dimensions along which national cultures differ and determined an index for each dimension of national culture for every country that took part in the research. The index shows a country's position on the continuum between the two extreme poles of a cultural dimension. Since more than 40 years have passed since Hofstede's research, there is doubt whether, due to changes in national cultures during that period, his indices are still a good basis for research. The aim of this research is to check the validity of Hofstede's indices of national culture. The empirical study, conducted in the branches of a multinational company in Serbia, France, the Netherlands, and Denmark, aimed to determine whether Hofstede's measures of the national culture dimensions are still valid. The sample consisted of 155 employees of one multinational company: 40 employees from each of three countries and 35 employees from Serbia. The questionnaire that analyzed the positions of national cultures according to Hofstede's four dimensions was formulated on the basis of Hofstede's initial questionnaire but was much shorter and significantly simplified compared to the original. Such an instrument had already been used in earlier research. Statistical analysis of the obtained questionnaire results was done by a simple calculation of the frequency of the provided answers. Due to limitations in methodology, sample size, instrument, and applied statistical methods, the aim of the study was not to explicitly test the accuracy of Hofstede's indices but to illuminate the general position of the four observed countries in the national culture dimensions and their mutual relations. The study results indicate that the position of the four observed national cultures (Serbia, France, the Netherlands, and Denmark) is precisely as Hofstede described in his research in three out of four dimensions. Furthermore, the differences between the national cultures and the relative relations between their positions in three dimensions of national culture correspond to Hofstede's results. The only deviation from Hofstede's results is concentrated around the masculinity–femininity dimension. In addition, the study revealed that the degree of power distance is a determinant in the choice of leadership style: national cultures with high power distance, like Serbia and France, favor one of the two authoritative leadership styles, while countries with low power distance, such as the Netherlands and Denmark, prefer one of the forms of democratic leadership style. This confirms Hofstede's premises about the impact of power distance on leadership style. The key contribution of the study is to show that Hofstede's national culture indices are still a reliable tool for measuring the positions of countries in the national culture dimensions and can be applied in cross-cultural research in management, at least in the case of the four observed countries: Serbia, France, the Netherlands, and Denmark.
Keywords: national culture, leadership styles, power distance, collectivism, masculinity, uncertainty avoidance
Procedia PDF Downloads 74
311 Assessment the Implications of Regional Transport and Local Emission Sources for Mitigating Particulate Matter in Thailand
Authors: Ruchirek Ratchaburi, W. Kevin. Hicks, Christopher S. Malley, Lisa D. Emberson
Abstract:
Air pollution problems in Thailand have improved over the last few decades, but in some areas, concentrations of coarse particulate matter (PM₁₀) are still above health and regulatory guidelines. It is, therefore, useful to investigate how PM₁₀ varies across Thailand, what conditions cause this variation, and how PM₁₀ concentrations could be reduced. This research uses data collected by the Thailand Pollution Control Department (PCD) from 17 monitoring sites located across 12 provinces, obtained between 2011 and 2015, to assess PM₁₀ concentrations and the conditions that lead to different levels of pollution. This is achieved through exploration of air mass pathways using trajectory analysis, used in conjunction with the monitoring data, to understand the contribution of different months, hours of the day, and source regions to annual PM₁₀ concentrations in Thailand. A focus is placed on locations that exceed the national standard for the protection of human health. The analysis shows how this approach can be used to explore the influence of biomass burning on annual average PM₁₀ concentrations and the difference in air pollution conditions between Northern and Southern Thailand. The results demonstrate the substantial contribution that open biomass burning from agriculture and forest fires in Thailand and neighboring countries makes to annual average PM₁₀ concentrations. The analysis of PM₁₀ measurements at monitoring sites in Northern Thailand shows that, in general, high concentrations tend to occur in March and that these particularly high monthly concentrations make a substantial contribution to the overall annual average concentration. In 2011, a > 75% reduction in the extent of biomass burning in Northern Thailand and in neighboring countries resulted in a substantial reduction not only in the magnitude and frequency of peak PM₁₀ concentrations but also in annual average PM₁₀ concentrations at sites across Northern Thailand. In Southern Thailand, the annual average PM₁₀ concentrations for individual years between 2011 and 2015 did not exceed the human health standard at any site, and the highest peak concentrations were much lower than in Northern Thailand at all sites. Peak concentrations at sites in Southern Thailand generally occurred between June and October and were associated with air mass back trajectories that spent a substantial proportion of time over the sea, Indonesia, Malaysia, and Thailand prior to arrival at the monitoring sites. The results show that emission reductions from biomass burning and forest fires require action on national and international scales, in both Thailand and neighboring countries; such action could contribute to ensuring compliance with Thailand's air quality standards.
Keywords: annual average concentration, long-range transport, open biomass burning, particulate matter
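A small sketch of the attribution step described above: the share each month contributes to the annual average concentration. The monthly values below are hypothetical; only the method (monthly contribution to the annual mean) reflects the study:

```python
# Sketch: contribution of each month's mean PM10 to the annual average.
# With the annual average defined as the mean of monthly means, month m's
# share is mean_m / (12 * annual_mean). Values below are hypothetical.
monthly_pm10 = {"Jan": 95, "Feb": 110, "Mar": 160, "Apr": 90, "May": 55,
                "Jun": 40, "Jul": 35, "Aug": 35, "Sep": 45, "Oct": 50,
                "Nov": 60, "Dec": 75}   # ug/m3, hypothetical northern site

annual_mean = sum(monthly_pm10.values()) / 12
for month, value in monthly_pm10.items():
    share = value / (12 * annual_mean)
    print(f"{month}: {value} ug/m3 -> {share:.1%} of the annual average")
print(f"Annual average: {annual_mean:.1f} ug/m3")
```

With these assumed numbers, the March peak alone accounts for nearly a fifth of the annual average, which illustrates how a single burning-season month can dominate compliance with an annual standard.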
Procedia PDF Downloads 182
310 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from a wider diffraction-limited area of the laser's waist that might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. This requires a grating with parameters perfectly matched to the given incident light for effective light coupling. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near-field light beneath the tip apex. The aim is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our model, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in ways that depend on its geometry and material. The phase-modulating grating on the probe is a sort of metasurface that enables manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
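For reference, the phase-matching condition invoked above has the standard grating-coupling form. These are the textbook relations for SPP excitation on a metal grating, consistent with but not quoted from the abstract:

```latex
% Grating-assisted SPP phase matching (standard textbook form).
% \Lambda: grating period, \theta: incidence angle, m: diffraction order,
% \varepsilon_m, \varepsilon_d: metal and dielectric permittivities.
\[
  k_{\mathrm{SPP}} = \frac{\omega}{c}\sin\theta + \frac{2\pi m}{\Lambda},
  \qquad
  k_{\mathrm{SPP}} = \frac{\omega}{c}
    \sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},
\]
with the figure of merit defined in the abstract as
\[
  \mathrm{FOM} = \frac{I_{\mathrm{SPP}}}{I_{\mathrm{inc}}}.
\]
```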
Procedia PDF Downloads 283
309 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing severity and frequency of disasters worldwide, the performance of governance systems for disaster risk reduction and management in many countries is being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121, or RA 10121), as the framework for disaster risk reduction and management, was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered the strongest typhoon on record to make landfall, with winds exceeding 252 km/h. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans, and programs, and key interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis argues that enhancements to RA 10121 are needed in order to meet the challenges of large-scale disasters. The current structure, in which government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and rehabilitation and recovery, proved inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government's tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We argue that this sectoral approach is more effective than the thematic approach to DRRM. The council-type arrangement for coordination was also rendered inoperable by Typhoon Haiyan, because the agency responsible for coordination does not have the decision-making authority to mobilize the action and resources of the other agencies that are members of the council. Resources have been devolved to the agencies responsible for each thematic area, and there is no clear command-and-direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad-hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We argue that this approach should be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, Typhoon Haiyan response and recovery
Procedia PDF Downloads 286
308 Spatial Pattern of Environmental Noise Levels and Auditory Ailments in Abeokuta Metropolis, Southwestern Nigeria
Authors: Olusegun Oguntoke, Aramide Y. Tijani, Olayide R. Adetunji
Abstract:
Environmental noise has become a major threat to the quality of human life, and it is generally more severe in cities. This study assessed the level of environmental noise, mapped its spatial pattern at different times of the day, and examined the association with the morbidity of auditory ailments in the Abeokuta metropolis. The entire metropolis was divided into 80 cells (areas) of 1000 m by 1000 m, out of which 33 were randomly selected for noise level assessment. A portable noise meter (AR824) was used to measure noise levels, and a Global Positioning System receiver (Garmin GPS-72H) was employed to record the coordinates of the sample sites for mapping. A risk map of the noise levels was produced using Kriging interpolation based on the spatial spread of measured noise values across the study area. Data on cases of hearing impairment were collected from four major hospitals in the city. Data collected from field measurements and medical records were subjected to descriptive (frequency and percentage) and inferential (mean, ANOVA, and correlation) statistics using SPSS (version 20.0), while ArcMap 10.1 was employed for spatial analysis and mapping. Results showed mean noise level ranges in the morning (42.4 ± 4.14 – 88.2 ± 15.1 dBA), afternoon (45.0 ± 6.72 – 86.4 ± 12.5 dBA), and evening (51.0 ± 6.55 – 84.4 ± 5.19 dBA) across the study area. The interpolated maps identified Kuto, Okelowo, Isale-Igbein, and Sapon as high-noise-risk areas. These are the central business district and nucleus of the Abeokuta metropolis, where commercial activities, high traffic volume, and clustered buildings exist. The monitored noise levels varied significantly among the sampled areas in the morning, afternoon, and evening (p < 0.05). A significant correlation was found between diagnosed cases of auditory ailments and noise levels measured in the morning (r = 0.39, p < 0.05). Common auditory ailments found across the metropolis included impaired hearing (25.8%), tinnitus (16.4%), and otitis (15.0%). The most affected age groups were between 11 and 30 years, and males had more cases of hearing impairment (51.2%) than females. The study revealed that environmental noise levels exceeded the recommended standards in the morning, afternoon, and evening in 60.6%, 61%, and 72.7% of the sampled areas, respectively. In summary, environmental noise in the study area is high and contributes to the morbidity of auditory ailments. Areas identified as hot spots of noise pollution should be avoided when locating noise-sensitive activities, while environmental noise monitoring should be included as part of the mandate of the regulatory agencies in Nigeria.
Keywords: noise pollution, associative analysis, auditory impairment, urban, human exposure
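A hedged sketch of the Kriging interpolation step: here PyKrige stands in for the GIS tooling actually used in ArcMap, and the coordinates and noise readings are synthetic:

```python
# Sketch: ordinary Kriging of point noise measurements onto a grid,
# analogous to the risk map produced in ArcMap. Synthetic sample data;
# PyKrige stands in for the GIS software actually used in the study.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(2)
x = rng.uniform(0, 10_000, 33)          # easting of 33 sampled cells, m
y = rng.uniform(0, 8_000, 33)           # northing, m
noise_dba = rng.uniform(45, 90, 33)     # measured noise levels, dBA

ok = OrdinaryKriging(x, y, noise_dba, variogram_model="spherical")
grid_x = np.linspace(0, 10_000, 100)
grid_y = np.linspace(0, 8_000, 80)
z_interp, variance = ok.execute("grid", grid_x, grid_y)  # predicted surface
print(z_interp.shape)                    # (80, 100) grid of predicted dBA
```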
Procedia PDF Downloads 144
307 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. We present the experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external heart compression, which may damage the sternotomy. Careful use of adrenaline is emphasized due to the potential recurrence of hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Out of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR for the restoration of cardiac activity was conducted according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of the patients' condition after CPR, and the Glasgow Coma Scale was employed to evaluate consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a rate of 100-120 compressions per minute were initiated upon detection of cardiac arrest. Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). When necessary, infusion of inotropes and vasopressors was used; for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to the improvement of CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions involves interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.
Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 53
306 Measuring the Impact of Implementing an Effective Practice Skills Training Model in Youth Detention
Authors: Phillipa Evans, Christopher Trotter
Abstract:
Aims: This study aims to examine the effectiveness of a practice skills framework implemented in three youth detention centres in Juvenile Justice in New South Wales (NSW), Australia. The study is supported by a grant from the Australian Research Council and NSW Juvenile Justice. Recent years have seen a number of incidents in youth detention centres in Australia and elsewhere. These have led to inquiries and reviews, with some suggesting that detention centres often do not even meet basic human rights standards and do little in terms of providing opportunities for the rehabilitation of residents. While there is an increasing body of research suggesting that community-based supervision can be effective in reducing recidivism if appropriate skills are used by supervisors, there has been less work considering worker skills in youth detention settings. The research that has been done, however, suggests that teaching interpersonal skills to youth officers may be effective in enhancing the rehabilitation culture of centres. Positive outcomes have been seen, for example, in a UK detention centre after teaching staff to deliver five-minute problem-solving interventions. The aim of this project is to examine the effectiveness of training and coaching youth detention staff in three NSW detention centres in interpersonal practice skills. Effectiveness is defined in terms of reductions in the frequency of critical incidents and improvements in the well-being of staff and young people. The research is important as the results may lead to the development of more humane and rehabilitative experiences for young people. Method: The study involves training staff in core effective practice skills and supporting staff in the use of those skills through supervision and de-briefing. The core effective practice skills include role clarification, pro-social modelling, brief problem solving, and relationship skills. The training also addresses some of the background to criminal behaviour, including trauma. Data regarding critical incidents and well-being before and after the program implementation are being collected. This involves interviews with staff and young people, the completion of well-being scales, and examination of departmental records regarding critical incidents. In addition to the before-and-after comparison, a matched control group which is not offered the intervention is also being used. The study includes more than 400 young people and 100 youth officers across 6 centres, including the control sites. Data collection includes interviews with workers and young people and critical incident data such as assaults, use of lock-ups and confinement, and school attendance. Data collection also includes analysing video-tapes of centre activities for changes in the use of staff skills. Results: The project is currently underway with ongoing training and supervision. Early results will be available for the conference.
Keywords: custody, practice skills, training, youth workers
Procedia PDF Downloads 103
305 Curcumin Nanomedicine: A Breakthrough Approach for Enhanced Lung Cancer Therapy
Authors: Shiva Shakori Poshteh
Abstract:
Lung cancer is a highly prevalent and devastating disease, representing a significant global health concern with profound implications for healthcare systems and society. Its high incidence, mortality rates, and late-stage diagnosis contribute to its formidable nature. To address these challenges, nanoparticle-based drug delivery has emerged as a promising therapeutic strategy. Curcumin (CUR), a natural compound derived from turmeric, has garnered attention as a potential nanomedicine for lung cancer treatment. Nanoparticle formulations of CUR offer several advantages, including improved drug delivery efficiency, enhanced stability, controlled release kinetics, and targeted delivery to lung cancer cells. CUR exhibits a diverse array of effects on cancer cells. It induces apoptosis by upregulating pro-apoptotic proteins, such as Bax and Bak, and downregulating anti-apoptotic proteins, such as Bcl-2. Additionally, CUR inhibits cell proliferation by modulating key signaling pathways involved in cancer progression. It suppresses the PI3K/Akt pathway, crucial for cell survival and growth, and attenuates the mTOR pathway, which regulates protein synthesis and cell proliferation. CUR also interferes with the MAPK pathway, which controls cell proliferation and survival, and modulates the Wnt/β-catenin pathway, which plays a role in cell proliferation and tumor development. Moreover, CUR exhibits potent antioxidant activity, reducing oxidative stress and protecting cells from DNA damage. The use of CUR as a standalone treatment, however, is limited by poor bioavailability, lack of targeting, and susceptibility to degradation. Nanoparticle-based delivery systems can overcome these challenges. They enhance CUR's bioavailability, protect it from degradation, and improve absorption. Further, nanoparticles enable targeted delivery to lung cancer cells through surface modifications or ligand-based targeting, ensuring sustained release of CUR to prolong therapeutic effects, reduce administration frequency, and facilitate penetration through the tumor microenvironment, thereby enhancing CUR's access to cancer cells. Thus, nanoparticle-based CUR delivery systems promise to improve lung cancer treatment outcomes. This article provides an overview of lung cancer, explores CUR nanoparticles as a treatment approach, discusses the benefits and challenges of nanoparticle-based drug delivery, and highlights prospects for CUR nanoparticles in lung cancer treatment. Future research aims to optimize these delivery systems for improved efficacy and patient prognosis in lung cancer.
Keywords: lung cancer, curcumin, nanomedicine, nanoparticle-based drug delivery
Procedia PDF Downloads 72
304 Facies Sedimentology and Astronomic Calibration of the Reinech Member (Lutetian)
Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich
Abstract:
The Upper Lutetian alternating marl–limestone succession of the Reineche Member was deposited over a warm shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30-metre-thick Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), have been undertaken on account of its pronounced cyclical sedimentary sequences, in order to investigate the periodicity of the cycles and the orbital-scale oceanic and climatic changes to which they relate. Palaeoenvironmental and palaeoclimatic signals are preserved in several proxies obtained through high-resolution sampling and laboratory measurement and analysis, namely magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time-series analysis of these proxies establishes the orders of cyclicity present in the studied interval, which can be linked to orbital cycles. MS records provide a high-resolution proxy for relative sea-level change in the Late Lutetian strata. Spectral analysis of the MS fluctuations confirmed orbital forcing through the presence of the complete suite of orbital frequencies: precession (23 ka), obliquity (41 ka) and, notably, the two modes of eccentricity (100 and 405 ka). Based on the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spans about 0.8 Myr. Wireline logs such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and to contribute to identifying and correlating units. They constrain the highest-frequency cyclicity, which is modulated by a longer-wavelength cycle apparently controlled by clay content. Interpreted as the result of variations in carbonate productivity, the marl–limestone couplets are suggested to represent the sedimentary response to orbital forcing. The calculation of cycle durations through the Reineche Member serves as a geochronometer and permits astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate content and fossil occurrences provides strong evidence for combined detrital input and marine surface carbonate productivity cycles. These two synchronous processes were driven by the precession index and ‘fingerprinted’ in the basic marl–limestone couplets, modulated by orbital eccentricity.
Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian
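The spectral step lends itself to a short illustration. The following is a minimal sketch of a Lomb-Scargle periodogram applied to a synthetic MS series, assuming the reported ~0.8 Myr span over 30 m to convert depth to time; the series itself is fabricated and the sedimentation rate is an assumption for the example.

```python
# A minimal sketch of Milankovitch-band detection in a synthetic MS series.
import numpy as np
from scipy.signal import lombscargle

# Synthetic MS series sampled every 0.05 m over 30 m of section
depth = np.arange(0, 30, 0.05)                    # metres
sed_rate = 30.0 / 800.0                           # m/kyr, assuming ~0.8 Myr span
time = depth / sed_rate                           # kyr
ms = (np.sin(2 * np.pi * time / 405)              # long eccentricity
      + 0.8 * np.sin(2 * np.pi * time / 100)      # short eccentricity
      + 0.6 * np.sin(2 * np.pi * time / 41)       # obliquity
      + 0.5 * np.sin(2 * np.pi * time / 23)       # precession
      + 0.3 * np.random.default_rng(0).normal(size=time.size))

# Lomb-Scargle periodogram over periods of 10-500 kyr
periods = np.linspace(10, 500, 2000)
ang_freqs = 2 * np.pi / periods
power = lombscargle(time, ms - ms.mean(), ang_freqs, normalize=True)

for p in (405, 100, 41, 23):                      # the orbital suite named above
    idx = np.argmin(np.abs(periods - p))
    print(f"power near {p} kyr: {power[idx]:.2f}")
```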
Procedia PDF Downloads 294
303 Lipid-Coated Magnetic Nanoparticles for Frequency Triggered Drug Delivery
Authors: Yogita Patil-Sen
Abstract:
Superparamagnetic iron oxide nanoparticles (SPIONs) have become increasingly important materials for the separation of specific biomolecules, as drug delivery vehicles, as contrast agents for MRI, and for magnetic hyperthermia in cancer therapy. Hyperthermia is emerging as an alternative cancer treatment to conventional radio- and chemotherapy, which have harmful side effects. When subjected to an alternating magnetic field, the magnetic energy of SPIONs is converted into thermal energy through particle movement. The ability of SPIONs to generate heat and potentially kill cancerous cells, which are more susceptible than normal cells to temperatures higher than 41 °C, forms the basis of hyperthermia treatment. The amount of heat generated depends upon the magnetic properties of SPIONs, which are in turn affected by particle characteristics such as size and shape. One of the main problems associated with SPIONs is particle aggregation, which limits their employability in in vivo drug delivery applications and hyperthermia cancer treatments. Coating the iron oxide core with thermally responsive lipid-based nanostructures tends to overcome the issue of aggregation, improves biocompatibility, and can enhance drug loading efficiency. Herein we report the suitability of SPIONs, and of silica-coated core-shell SPIONs further coated with various lipids, for drug delivery and magnetic hyperthermia applications. The synthesis of the nanoparticles was carried out using established methods reported in the literature, with some modifications. The nanoparticles are characterised using infrared spectroscopy (IR), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and vibrating sample magnetometry (VSM). The heating ability of the nanoparticles is tested under an alternating magnetic field. The efficacy of the nanoparticles as drug carriers is also investigated: the loading of an anticancer drug, doxorubicin, at 18 °C is measured over 48 hours using a UV-visible spectrophotometer. The drug release profile is obtained under thermal incubation at 37 °C and compared with that under the influence of an alternating magnetic field. The results suggest that the nanoparticles exhibit superparamagnetic behaviour, although coating reduces the magnetic properties of the particles. Both the uncoated and coated particles show good heating ability; again, coating decreases the heating behaviour of the particles. However, the coated particles show higher drug loading efficiency than the uncoated particles, and drug release is much more controlled under the alternating magnetic field. Thus, the results demonstrate that lipid-coated SPIONs exhibit potential as drug delivery vehicles for magnetic hyperthermia-based cancer therapy.
Keywords: drug delivery, hyperthermia, lipids, superparamagnetic iron oxide nanoparticles (SPIONs)
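The drug-loading measurement reduces to a simple depletion calculation. The following is a minimal sketch, assuming Beer-Lambert linearity and a supernatant (depletion) method; all absorbance and concentration values are illustrative, as the abstract reports no raw numbers.

```python
# A minimal sketch of estimating doxorubicin loading efficiency from UV-vis
# absorbance; all numbers below are fabricated for illustration.
import numpy as np

# Calibration: absorbance of doxorubicin standards (e.g., near 480 nm)
conc_std = np.array([5, 10, 20, 40, 80])            # ug/mL
abs_std = np.array([0.06, 0.12, 0.25, 0.49, 0.98])
slope, intercept = np.polyfit(conc_std, abs_std, 1)  # linear Beer-Lambert fit

def concentration(absorbance: float) -> float:
    """Invert the linear calibration to recover concentration in ug/mL."""
    return (absorbance - intercept) / slope

c_initial = 100.0                     # ug/mL of drug offered to the particles
c_free = concentration(0.31)          # drug left in the supernatant after loading

loading_efficiency = (c_initial - c_free) / c_initial * 100
print(f"drug loading efficiency: {loading_efficiency:.1f}%")
```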
Procedia PDF Downloads 232
302 Flexible Programmable Circuit Board Electromagnetic 1-D Scanning Micro-Mirror Laser Rangefinder by Active Triangulation
Authors: Vixen Joshua Tan, Siyuan He
Abstract:
Scanners have been implemented within single-point laser rangefinders to determine ranges within an environment by sweeping the laser spot across the surface of interest. The research motivation is to exploit a smaller and cheaper alternative scanning component for the emitting portion of current laser rangefinder designs. This research implements an FPCB (Flexible Programmable Circuit Board) electromagnetic 1-dimensional scanning micro-mirror as a scanning component for laser rangefinding by means of triangulation. The prototype uses a laser module, a micro-mirror, and a receiver. The laser module is infrared (850 nm) with a power output of 4.5 mW. The receiver consists of a 50 mm convex lens and a 45 mm 1-dimensional PSD (Position Sensitive Detector) placed at the focal length of the lens, at 50 mm. The scanning component is an elliptical micro-mirror attached onto an FPCB structure. The FPCB structure has two miniature magnets placed symmetrically underneath it on either side, which are electromagnetically actuated by small solenoids, causing the FPCB to rotate mechanically about its torsion beams. The laser module projects a laser spot onto the micro-mirror surface, producing a scanning motion of the laser spot during the rotational actuation of the FPCB. The receiver is placed at a fixed distance from the micro-mirror scanner and is oriented to capture the scanning motion of the laser spot during operation. The elliptical aperture dimensions of the micro-mirror are 8 mm by 5.5 mm. The micro-mirror is supported by an FPCB with two torsion beams with dimensions of 4 mm by 0.5 mm. The overall length of the FPCB is 23 mm. The voltage supplied to the solenoids is sinusoidal, with amplitudes of 3.5 V and 4.5 V to achieve optical scanning angles of ±10 and ±17 degrees respectively. The operating scanning frequency during experiments was 5 Hz. For an optical angle of ±10 degrees, the prototype is capable of detecting objects within ranges from 0.3-1.2 m with an error of less than 15%. For an optical angle of ±17 degrees, the measuring range was 0.3-0.7 m with an error of 16% or less. The discrepancy between the experimental and actual data is possibly caused by misalignment of the components during experiments. Furthermore, the power of the laser spot collected by the receiver gradually decreased as the object was placed further from the sensor. A higher-powered laser will be tested to potentially measure further distances more accurately. Moreover, a wide-angle lens will be used in future experiments when higher scanning angles are used. Modulation of the current and future higher-powered lasers will be implemented to enable operation of the laser rangefinder prototype without the use of safety goggles.
Keywords: FPCB electromagnetic 1-D scanning micro-mirror, laser rangefinder, position sensitive detector, PSD, triangulation
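For readers unfamiliar with active triangulation, the following is a minimal sketch of the underlying range computation, assuming the idealized similar-triangles relation z = f·b/x. Only the 50 mm focal length comes from the prototype description; the baseline and spot displacement are hypothetical, and the real device must additionally fold in the mirror scan angle.

```python
# A minimal sketch of range recovery by active triangulation under the
# simple pinhole geometry z = f * b / x; the baseline value is hypothetical.
def range_from_psd(x_psd_mm: float, focal_mm: float = 50.0,
                   baseline_mm: float = 120.0) -> float:
    """Distance to the target (mm) from the laser-spot position on the PSD.

    x_psd_mm    -- spot displacement from the optical axis on the 1-D PSD
    focal_mm    -- lens focal length (the PSD sits at the focal plane)
    baseline_mm -- scanner-to-receiver separation (assumed for illustration)
    """
    if x_psd_mm <= 0:
        raise ValueError("spot must be displaced from the optical axis")
    return focal_mm * baseline_mm / x_psd_mm

# Example: a 10 mm spot displacement maps to a 0.6 m range
print(range_from_psd(10.0))   # 600.0 mm
```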
Procedia PDF Downloads 135
301 Molecular Detection of mRNA bcr-abl and Circulating Leukemic Stem Cells CD34+ in Patients with Acute Lymphoblastic Leukemia and Chronic Myeloid Leukemia and Its Association with Clinical Parameters
Authors: B. Gonzalez-Yebra, H. Barajas, P. Palomares, M. Hernandez, O. Torres, M. Ayala, A. L. González, G. Vazquez-Ortiz, M. L. Guzman
Abstract:
Leukemia arises from molecular alterations of the normal hematopoietic stem cell (HSC), transforming it into a leukemic stem cell (LSC) with high cell proliferation, self-renewal, and cell differentiation. Chronic myeloid leukemia (CML) originates from an LSC, leading to elevated proliferation of myeloid cells, and acute lymphoblastic leukemia (ALL) originates from an LSC, leading to elevated proliferation of lymphoid cells. In both cases, LSCs can be identified by multicolor flow cytometry using several antibodies. However, to date, LSC levels in peripheral blood (PB) are not well established in ALL and CML patients. On the other hand, the detection of minimal residual disease (MRD) in leukemia is mainly based on the identification of the mRNA bcr-abl gene in CML patients and some other genes in ALL patients; there is no proper biomarker to detect MRD in both types of leukemia. The objective of this study was to determine mRNA bcr-abl and the percentage of LSCs in the peripheral blood of patients with CML and ALL, and to identify a possible association between the amount of LSCs in PB and clinical data. We included 19 patients with leukemia in this study. A PB sample was collected per patient, and leukocytes were obtained by Ficoll gradient. The immunophenotyping for LSC CD34+ was done by flow cytometry analysis with CD33, CD2, CD14, CD16, CD64, HLA-DR, CD13, CD15, CD19, CD10, CD20, CD34, CD38, CD71, CD90, CD117 and CD123 monoclonal antibodies. In addition, to identify the presence of mRNA bcr-abl by RT-PCR, RNA was isolated using TRIZOL reagent. Molecular (presence of mRNA bcr-abl and LSC CD34+) and clinical results were analyzed with descriptive statistics, and a multiple regression analysis was performed to determine statistically significant associations. In total, 19 patients (8 with ALL and 11 with CML) were analyzed: 9 patients with de novo leukemia (ALL = 6 and CML = 3) and 10 under treatment (ALL = 5 and CML = 5). The overall frequency of mRNA bcr-abl was 31% (6/19); it was negative in ALL patients and positive in 80% of CML patients. On the other hand, LSCs were detected in 16/19 leukemia patients (%LSC = 0.02-17.3). The de novo patients had a higher percentage of LSCs (0.26 to 17.3%) than patients under treatment (0 to 5.93%). The variables significantly associated with the amount of LSCs were the absence of treatment, the absence of splenomegaly, and a lower number of leukocytes; a negative association was found for the clinical variables age, sex, blasts, and mRNA bcr-abl. In conclusion, patients with de novo leukemia had a higher percentage of circulating LSCs than patients under treatment, and this was associated with clinical parameters such as lack of treatment, absence of splenomegaly and a lower number of leukocytes. mRNA bcr-abl detection was only possible in the series of patients with CML, while molecular detection of LSCs could be achieved in the peripheral blood of all leukemia patients; we believe the identification of circulating LSCs may be used as a biomarker for the detection of MRD in leukemia patients.
Keywords: stem cells, leukemia, biomarkers, flow cytometry
Procedia PDF Downloads 356
300 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes
Authors: David S. Byrne
Abstract:
The purpose of this paper is to understand how phone record analysis can enable the identification of subjects in communication with the target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and thereby enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (not of listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and planned activities. Through temporal and frequency analysis, conclusions were drawn to offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in accurately identifying the users of the phones in contact with the target. Investigators often rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. Thus, this paper poses two research questions: first, how effective are freely available web sources of information at determining the actual identity of callers? Secondly, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author’s personal phone activity spanning the period of a year, combined with the hypothetical scenario that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and understand how his personal attributes can further paint a picture of the target’s intentions. The results of the study were striking: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources. The suspected terrorist’s inner circle was recognized, including relatives and potential collaborators, as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research showed the benefits of cellphone analysis without more intrusive and time-consuming methodologies, though it may be instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building the skills of future intelligence analysts through phone record analysis via simulations; the hands-on learning in this case study emphasizes the development of the competencies necessary to improve investigations overall.
Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations
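The frequency and temporal analysis described above maps onto a few lines of data wrangling. The following is a minimal sketch, assuming a CSV export of call detail records; the file name and column names are hypothetical.

```python
# A minimal sketch of frequency and temporal analysis over call detail records.
# Assumed columns: timestamp, number, direction, duration_s (all hypothetical).
import pandas as pd

cdr = pd.read_csv("call_detail_records.csv", parse_dates=["timestamp"])

# Frequency analysis: who does the target talk to most, and for how long?
top_contacts = (cdr.groupby("number")
                   .agg(calls=("number", "size"),
                        total_minutes=("duration_s", lambda s: s.sum() / 60))
                   .sort_values("calls", ascending=False)
                   .head(10))
print(top_contacts)

# Temporal analysis: when is the target active?
by_hour = cdr["timestamp"].dt.hour.value_counts().sort_index()
print(by_hour)   # peaks may hint at work, family, or meeting routines
```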
Procedia PDF Downloads 14
299 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy
Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos
Abstract:
Financial literacy and numeracy have been regarded as paramount for rational household decision making amid the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining a list of five random letters in working memory. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy and financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing mnesic task and level of financial literacy (but not numeracy) was found for the frequency of choosing the gambling option. Overall, in the control condition, participants with high financial literacy and participants with high numeracy were both more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less literate counterparts to choose the gambling option. This outcome is interpreted as evidence that financial literacy prevents intuitive risk-aversion reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in the light of the opposition between classical dual-process theories and fuzzy-trace theories of intuitive decision making, suggesting that while some instances of expertise (such as numeracy) are prone to support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, applied implications of the present study are discussed, with a focus on how it informs financial regulators and on the importance and limits of promoting financial literacy and general numeracy.
Keywords: decision making, cognitive load, financial literacy, numeracy
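The key result is the kind of effect a logistic regression with an interaction term would capture. The following is a minimal sketch, assuming hypothetical per-trial data; it is an illustration of the analysis pattern, not the authors' actual model.

```python
# A minimal sketch of testing a literacy-by-load interaction on gamble choice.
# The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("choices.csv")
# expected columns: gamble (0/1), load (0 = control, 1 = digit task),
#                   literacy and numeracy (standardized scores)

model = smf.logit("gamble ~ literacy * load + numeracy * load", data=df).fit()
print(model.summary())   # a negative literacy:load term would mirror the
                         # reported loss of the literacy advantage under load
```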
Procedia PDF Downloads 182
298 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria
Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale
Abstract:
This empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central, Nigeria. Data used for the study were obtained from primary sources using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers in the study area. Descriptive statistics, data envelopment analysis (DEA) and a Tobit regression model were used to analyze the data. The DEA classification of the farmers into efficient and inefficient showed that 17.67% of the sampled tuber crop farmers were operating at the frontier, the optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as farmers acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while their counterparts without cooperative membership had an index value of 0.21. The results also showed an allocative efficiency index of 0.43 for farmers with formal education, decreasing to 0.16 for farmers with non-formal education. The efficiency level in the allocation of resources increased with more contact with extension services, the allocative efficiency index rising from 0.16 to 0.31 as the frequency of extension contact increased from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, reducing to an average of 15% as the farmer grows older. It is therefore recommended that enhanced research, extension delivery and farm advisory services be put in place for farmers who did not attain the optimum frontier level, so that they can learn how to attain the remaining 74.39% of allocative efficiency through better production practices from the robustly efficient farms. This will go a long way towards increasing the efficiency level of the farmers in the study area.
Keywords: allocative efficiency, DEA, Tobit regression, tuber crop
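For readers unfamiliar with DEA, the following is a minimal sketch of its linear-programming core, using the input-oriented CCR (technical-efficiency) variant for brevity; allocative efficiency additionally requires input price data. The toy farm inputs and outputs are illustrative, not the study's.

```python
# A minimal sketch of input-oriented CCR DEA efficiency scores via linprog.
import numpy as np
from scipy.optimize import linprog

# rows = farms (DMUs); X = inputs (land ha, labour days), Y = outputs (tuber t)
X = np.array([[2.0, 60], [3.0, 80], [1.5, 50], [4.0, 120], [2.5, 90]])
Y = np.array([[8.0], [9.0], [7.5], [10.0], [8.5]])
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o: int) -> float:
    """Efficiency of DMU o: min theta subject to
    sum_j lam_j * x_j <= theta * x_o (inputs),
    sum_j lam_j * y_j >= y_o (outputs), lam >= 0."""
    c = np.zeros(1 + n)            # decision vector: [theta, lam_1..lam_n]
    c[0] = 1.0
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]            # -theta*x_o + sum lam_j x_j <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T            # -sum lam_j y_j <= -y_o
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

for o in range(n):                 # frontier farms score 1.0
    print(f"farm {o}: efficiency = {ccr_efficiency(o):.3f}")
```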
Procedia PDF Downloads 289
297 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention involving brief exposures to extremely cold air in order to induce therapeutic effects, and it is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, or if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals. One of the four articles revealed that WBC had a harmful effect, compared to cold-water immersion (CWI) and passive recovery, on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC's effectiveness at treating exercise-induced muscle damage from running compared to other interventions, WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.
Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
Procedia PDF Downloads 240
296 Application of Response Surface Methodology to Assess the Impact of Aqueous and Particulate Phosphorous on Diazotrophic and Non-Diazotrophic Cyanobacteria Associated with Harmful Algal Blooms
Authors: Elizabeth Crafton, Donald Ott, Teresa Cutright
Abstract:
Harmful algal blooms (HABs), most notably cyanobacteria-dominated HABs, compromise water quality, jeopardize access to drinking water, and pose a risk to public health and safety. HABs are representative of ecosystem imbalance, largely caused by environmental changes, such as eutrophication, that are associated with the globally expanding human population. Cyanobacteria-dominated HABs are anticipated to increase in frequency and magnitude and are predicted to plague a larger geographical area as a result of climate change. The weather pattern is important, as storm-driven pulse inputs of nutrients have been correlated with cyanobacteria-dominated HABs. The mobilization of aqueous and particulate nutrients and the response of the phytoplankton community form an important relationship in this complex phenomenon. This relationship is most apparent in high-impact areas of adequate sunlight, temperatures above 20 °C, excessive nutrients and quiescent water, conditions corresponding to ideal HAB growth. The objective of this study was to assess the impact of a simulated storm-driven pulse input of reactive phosphorus on three different cyanobacteria assemblages (~5,000 cells/mL). The aqueous and particulate sources of phosphorus and changes in the HAB were tracked weekly for 4 weeks. The first cyanobacteria composition consisted of Planktothrix sp., Microcystis sp., Aphanizomenon sp., and Anabaena sp., with 70% of the total population being non-diazotrophic and 30% being diazotrophic. The second comprised Anabaena sp., Planktothrix sp., and Microcystis sp., with 87% diazotrophic and 13% non-diazotrophic. The third composition has yet to be determined, as these experiments are ongoing. Preliminary results suggest that both aqueous and particulate sources are contributors of total reactive phosphorus in high-impact areas. The results further highlight shifts in the cyanobacteria assemblage after the simulated pulse input. In the controls, the reactors dosed with aqueous reactive phosphorus maintained a constant concentration for the duration of the experiment, whereas the reactors that were dosed with aqueous reactive phosphorus and contained soil decreased from 1.73 mg/L to 0.25 mg/L of reactive phosphorus between time zero and 7 days; this was higher than the blank (0.11 mg/L). This suggests binding of aqueous reactive phosphorus to sediment, which is further supported by the positive correlation observed between total reactive phosphorus concentration and turbidity. The experiments are nearly complete, and a full statistical analysis of the results will be carried out prior to the conference.
Keywords: Anabaena, cyanobacteria, harmful algal blooms, Microcystis, phosphorous, response surface methodology
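As a pointer to how the response surface methodology named in the title is typically applied once dosing data are in hand, the following is a minimal sketch of a second-order surface fit; the design points and response values are fabricated and do not come from the study.

```python
# A minimal sketch of a two-factor (aqueous vs particulate P) quadratic
# response-surface fit; all data below are fabricated for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "p_aq":   [0.0, 0.5, 1.0, 1.5, 2.0, 0.0, 0.5, 1.0, 1.5, 2.0],   # mg/L
    "p_part": [0.0, 0.0, 0.5, 0.5, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0],   # mg/L
    "cells":  [5.0, 6.2, 8.1, 9.5, 11.0, 6.8, 8.9, 10.4, 11.9, 13.2],  # 10^3 cells/mL
})

# Full quadratic model: main effects, interaction, and curvature terms
rsm = smf.ols("cells ~ p_aq * p_part + I(p_aq**2) + I(p_part**2)", data=df).fit()
print(rsm.params)   # the fitted surface locates dose regions of maximal response
```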
Procedia PDF Downloads 167
295 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, efforts that are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this, the present study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database of the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of the crystals by area size and perimeter. This methodological process resulted in a high segmentation capacity for graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a significant time saver with high measurement value, considering that it can measure hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
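The measurement stage described above maps onto standard image-processing primitives. The following is a minimal sketch of the post-segmentation step, assuming a binary mask already produced by a trained U-net; the file name and pixel calibration are hypothetical.

```python
# A minimal sketch of measuring segmented crystals with scikit-image.
# The mask file and the nm-per-pixel calibration are assumptions.
import numpy as np
from skimage import io, measure

mask = io.imread("unet_mask.png") > 0            # binary segmentation from U-net
labels = measure.label(mask)                     # delimit individual crystals

nm_per_px = 2.5                                  # assumed SEM pixel calibration
rows = []
for region in measure.regionprops(labels):
    rows.append({
        "area_nm2": region.area * nm_per_px ** 2,
        "perimeter_nm": region.perimeter * nm_per_px,
        "major_axis_nm": region.major_axis_length * nm_per_px,
    })

areas = np.array([r["area_nm2"] for r in rows])
hist, edges = np.histogram(areas, bins=20)       # size-frequency distribution
print(f"{len(rows)} crystals measured; median area {np.median(areas):.0f} nm^2")
```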
Procedia PDF Downloads 160