Search results for: research directions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25301

1361 Semiotics of the New Commercial Music Paradigm

Authors: Mladen Milicevic

Abstract:

This presentation will address how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it will deal with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the “older” music is still being digitized. Once music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, which also exist as digital text files to which any kind of NCapture-style analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of a hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from their audio features. Thus, it has been established that a successful pop song must: run at 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill it in the title.
For a country song, the formula is: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation that is created and fulfilled within 60 seconds. This approach to commercial popular music minimizes the human influence on which “artist” a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analyzing programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste of the narrow bottleneck of the above-mentioned music selection by the record labels has created some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of music artists. The meaning of this semiological shift is the main focus of this research and paper presentation.
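The thresholds above amount to a simple rule-based checklist. A minimal Python sketch of such a screen follows; the feature dictionary, function name, and example values are illustrative assumptions, not part of Polyphonic HMI's actual system.

```python
# Hypothetical rule-based screen for the pop "hit formula" described above.
# Thresholds come from the abstract; the track representation is an assumption.

def matches_pop_hit_formula(track):
    """Return True if a track satisfies every heuristic listed for a pop hit."""
    checks = [
        track["bpm"] >= 100,                           # 100 bpm or more
        track["intro_seconds"] <= 8,                   # 8-second intro
        track["first_you_seconds"] <= 20,              # 'you' within 20 s of the start
        120 <= track["bridge_start_seconds"] <= 150,   # bridge between 2:00 and 2:30
        track["title_repetitions"] >= 7,               # ~7 repetitions of the title
    ]
    return all(checks)

example = {
    "bpm": 104,
    "intro_seconds": 8,
    "first_you_seconds": 12,
    "bridge_start_seconds": 135,
    "title_repetitions": 7,
}
print(matches_pop_hit_formula(example))  # → True
```

A real system would of course weigh such features statistically rather than apply hard cutoffs; the sketch only makes the checklist structure explicit.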

Keywords: music, semiology, commercial, taste

Procedia PDF Downloads 392
1360 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections

Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor

Abstract:

Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. A further aim was to determine the resistance ratios of H. pylori strains, isolated from gastric biopsy material cultures, to clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods together: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France); histology (H) (Giemsa, Hematoxylin and Eosin staining); rapid urease test (RUT) (CLOtest, Kimberly-Clark, USA); and two different molecular tests, GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of H. pylori isolates to clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. Sensitivity and specificity rates of the two molecular methods used in the study were calculated against the two gold standards mentioned above.
Results: A total of 266 patients between 16 and 83 years old, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were culture positive, and 157 were positive by both H and RUT. 179 patients were positive by both GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection. Sensitivity and specificity rates of the five methods studied were as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR ACE Detection, 100% and 71.3%. A strong correlation was found between C and H + RUT, C and GenoType® HelicoDR, and C and Seeplex® H. pylori-ClaR ACE Detection (r: 0.644 and p: 0.000; r: 0.757 and p: 0.000; r: 0.757 and p: 0.000, respectively). Of all 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin, and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and in 6 cases with Seeplex® H. pylori-ClaR ACE Detection. Conclusion: In our study, it was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection were the most sensitive of all the diagnostic methods investigated (C, H, and RUT).
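The sensitivity and specificity figures above follow from standard 2x2 confusion-table arithmetic. A minimal sketch, with a per-cell breakdown chosen to reproduce the molecular tests' reported 100% / 71.3% against the 144-positive culture gold standard (the exact cell counts are an assumption for illustration, not the study's raw data):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# 266 patients: 144 gold-standard positives and 122 negatives. A test that
# detects all 144 positives but also flags 35 of the negatives gives:
sens = sensitivity(tp=144, fn=0)
spec = specificity(tn=87, fp=35)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # → sensitivity 100.0%, specificity 71.3%
```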

Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex ® H. pylori -ClaR- ACE Detection, antimicrobial resistance

Procedia PDF Downloads 167
1359 Combined Treatment of Estrogen-Receptor Positive Breast Microtumors with 4-Hydroxytamoxifen and Novel Non-Steroidal Diethyl Stilbestrol-Like Analog Produces Enhanced Preclinical Treatment Response and Decreased Drug Resistance

Authors: Sarah Crawford, Gerry Lesley

Abstract:

This research is a pre-clinical assessment of the anti-cancer effects of novel non-steroidal diethylstilbestrol-like estrogen analogs in estrogen-receptor-positive/progesterone-receptor-positive human breast cancer microtumors of the MCF-7 cell line. A tamoxifen analog formulation (Tam A1) was used as a single agent or in combination with therapeutic concentrations of 4-hydroxytamoxifen, currently used as a long-term treatment for the prevention of breast cancer recurrence in women with estrogen-receptor-positive/progesterone-receptor-positive malignancies. At concentrations ranging from 30-50 microM, Tam A1 induced microtumor disaggregation and cell death. Incremental cytotoxic effects correlated with increasing concentrations of Tam A1. Live tumor microscopy showed that microtumors displayed diffuse borders and that substrate-attached cells were rounded up and poorly adherent. A complete cytotoxic effect was observed using 40-50 microM Tam A1, with time-course kinetics similar to 4-hydroxytamoxifen. Combined treatment with Tam A1 (30-50 microM) and 4-hydroxytamoxifen (10-15 microM) induced a highly cytotoxic, synergistic combined treatment response that was more rapid and complete than 4-hydroxytamoxifen as a single-agent therapeutic. Microtumors completely dispersed or formed necrotic foci, indicating a highly cytotoxic combined treatment response. Moreover, breast cancer microtumors treated with both 4-hydroxytamoxifen and Tam A1 displayed lower levels of long-term post-treatment regrowth, a critical parameter of primary drug resistance, than observed for 4-hydroxytamoxifen used as a single-agent therapeutic. Tumor regrowth at 6 weeks post-treatment with either single-agent 4-hydroxytamoxifen, Tam A1, or the combined treatment was assessed for the development of drug resistance.
Breast cancer cells treated with both 4-hydroxytamoxifen and Tam A1 displayed significantly lower levels of post-treatment regrowth, indicative of decreased drug resistance, than observed for either single treatment modality. The preclinical data suggest that combined treatment involving tamoxifen analogs may be a novel clinical approach for long-term maintenance therapy in patients with estrogen-receptor-positive/progesterone-receptor-positive breast cancer receiving hormonal therapy to prevent disease recurrence. Detailed data on time-course, IC50, and tumor regrowth assays post-treatment, as well as a proposed mechanism of action to account for the observed synergistic drug effects, will be presented.

Keywords: 4-hydroxytamoxifen, tamoxifen analog, drug-resistance, microtumors

Procedia PDF Downloads 67
1358 Perception of Nurses and Caregivers on Fall Preventive Management for Hospitalized Children Based on Ecological Model

Authors: Mirim Kim, Won-Oak Oh

Abstract:

Purpose: The purpose of this study was to identify hospitalized children's fall risk factors, the current status of fall prevention, and fall prevention strategies as recognized by nurses and caregivers of hospitalized children, and to present an ecological model for fall preventive management in hospitalized children. Method: The participants were 14 nurses working in medical institutions with more than one year of child care experience, and 14 adult caregivers of children under 6 years of age receiving inpatient treatment at a medical institution. One-to-one interviews were conducted to identify their perceptions of fall preventive management. Transcribed data were analyzed through the latent content analysis method. Results: Fall risk factors in hospitalized children were 'unpredictable behavior', 'instability', 'lack of awareness about danger', 'lack of awareness about falls', 'lack of child control ability', 'lack of awareness about the importance of fall prevention', 'lack of sensitivity to children', 'untidy environment around children', 'lack of personalized facilities for children', 'unsafe facility', 'lack of partnership between healthcare provider and caregiver', 'lack of human resources', 'inadequate fall prevention policy', 'lack of promotion about fall prevention', and 'a performance-oriented culture'. The fall preventive management status of hospitalized children comprised 'absence of fall prevention capability', 'efforts not to fall', 'blocking fall risk situations', 'limiting the scope of children's activity when there is no caregiver', 'encouraging caregivers' fall prevention activities', 'creating a safe environment surrounding hospitalized children', 'special management for fall high-risk children', 'mutual cooperation between healthcare providers and caregivers', 'implementation of fall prevention policy', and 'providing guide signs about fall risk'.
Fall preventive management strategies for hospitalized children were 'restraining dangerous behavior', 'inspiring awareness about falls', 'providing fall preventive education considering the child's eye level', 'efforts to become an active subject of fall prevention activities', 'providing customized fall prevention education', 'open communication between healthcare providers and caregivers', 'infrastructure and personnel management to create a safe hospital environment', 'expansion of fall prevention campaigns', 'development and application of a valid fall assessment instrument', and 'conversion of awareness about safety'. Conclusion: In this study, the ecological model of fall preventive management for hospitalized children reflects various factors that directly or indirectly affect the fall prevention of hospitalized children. Therefore, these results can be considered useful baseline data for developing systematic fall prevention programs and hospital policies to prevent fall accidents in hospitalized children. Funding: This study was funded by the National Research Foundation of South Korea (grant number NRF-2016R1A2B1015455).

Keywords: fall down, safety culture, hospitalized children, risk factors

Procedia PDF Downloads 163
1357 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial industry, with global non-cash transactions estimated at 708.5 billion in recent years. And with Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data, and businesses looking to consume data, are largely excluded from the process. Therefore, there is a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure, decentralised marketplace for (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transactional-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers.
This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is being generated. The framework was demonstrated in a proof-of-concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
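The core rule of the market layer described above is that a consumer may access a listed dataset only with the data subject's consent. A minimal in-memory sketch of that rule follows; the class and method names are assumptions for illustration, and the actual platform implements this logic as an Ethereum smart contract rather than Python objects.

```python
# Illustrative consent-gated marketplace model (not the authors' implementation).

class DataMarketplace:
    def __init__(self):
        self.listings = {}     # listing_id -> (provider, subject, price)
        self.consents = set()  # (subject, listing_id) pairs granted by subjects

    def list_data(self, listing_id, provider, subject, price):
        """A data provider lists a dataset generated by a data subject."""
        self.listings[listing_id] = (provider, subject, price)

    def grant_consent(self, subject, listing_id):
        """The data subject consents to the sale of data relating to them."""
        self.consents.add((subject, listing_id))

    def purchase(self, consumer, listing_id):
        """A consumer may buy access only if the subject has consented."""
        provider, subject, price = self.listings[listing_id]
        if (subject, listing_id) not in self.consents:
            raise PermissionError("data subject has not consented")
        return {"consumer": consumer, "provider": provider, "price": price}

market = DataMarketplace()
market.list_data("tx-feed-1", provider="bank-a", subject="alice", price=10)
market.grant_consent("alice", "tx-feed-1")
print(market.purchase("fintech-b", "tx-feed-1")["price"])  # → 10
```

On-chain, the consent set and listings would live in contract storage and the purchase would transfer payment; the sketch only captures the access-control invariant.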

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 73
1356 Enhancing Mental Health Services Through Strategic Planning: The East Tennessee State University Counseling Center’s 2024-2028 Plan

Authors: R. M. Kilonzo, S. Bedingfield, K. Smith, K. Hudgins Smith, K. Couper, R. Ratley, Z. Taylor, A. Engelman, M. Renne

Abstract:

Introduction: The mental health needs of university students continue to evolve, necessitating a strategic approach to service delivery. The East Tennessee State University (ETSU) Counseling Center developed its inaugural Strategic Plan (2024-2028) to enhance student mental health services. The plan focuses on improving access, quality of care, and service visibility, aligning with the university’s mission to support academic success and student well-being. Aim: The strategic plan aims to establish a comprehensive framework for delivering high-quality, evidence-based mental health services to ETSU students, addressing current challenges and anticipating future needs. Methods: The development of the strategic plan was a collaborative effort involving the Counseling Center’s leadership and staff, with technical support from a Doctor of Public Health (community and behavioral health) intern. Multiple workshops, online/offline reviews, and stakeholder consultations were held to ensure a robust and inclusive process. A SWOT analysis and stakeholder mapping were conducted to identify strengths, weaknesses, opportunities, and challenges. Key performance indicators (KPIs) were set to measure service utilization, satisfaction, and outcomes. Results: The plan resulted in four strategic priorities: service application, visibility/accessibility, safety and satisfaction, and training programs. Key objectives include expanding counseling services, improving service access through outreach, reducing stigma, and increasing peer support programs. The plan also focuses on continuous quality improvement through data-driven assessments and research initiatives. Immediate outcomes include expanded group therapy, enhanced staff training, and increased mental health literacy across campus. Conclusion and Recommendation: The strategic plan provides a roadmap for addressing the mental health needs of ETSU students, with a clear focus on accessibility, inclusivity, and evidence-based practices.
Implementing the plan will strengthen the Counseling Center’s capacity to meet the diverse needs of the student population. To ensure sustainability, it is recommended that the center continuously assess student needs, foster partnerships with university and external stakeholders, and advocate for increased funding to expand services and staff capacity.

Keywords: strategic plan, university counseling center, mental health, students

Procedia PDF Downloads 16
1355 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves

Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada

Abstract:

Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the mainstay of development in most countries of the world. Regrettably, the rate of fossil fuel consumption is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many problems of environmental pollution resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources, such as wind, solar, and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and its use guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems such as oscillating-body systems, pendulum gate systems, the Wave Dragon system, and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. The paper provides a broad overview of the different design alternatives for sea wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore, completely closed except at its base, which has an open area for gathering moving sea waves.
The motion of the sea waves pushes air up and down through a suitable Wells turbine, generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings of some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, this paper compares the theoretical and experimental results of the built experimental prototype.
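The AHP evaluation referred to above rests on deriving priority weights from pairwise-comparison matrices of the design alternatives. A minimal sketch of that step, using the common normalized column-average approximation of the principal eigenvector, follows; the matrix values below are illustrative, not the study's actual judgments.

```python
# Approximate AHP priority weights via the normalized column-average method.

def ahp_weights(matrix):
    """Return priority weights for a square pairwise-comparison matrix."""
    n = len(matrix)
    # Normalize each column so it sums to 1, then average across each row.
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Hypothetical comparisons of three alternatives (e.g. OWC vs. oscillating
# bodies vs. pendulum gate) on a single criterion, on Saaty's 1-9 scale.
m = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
w = ahp_weights(m)
print([round(x, 3) for x in w])
```

The weights sum to 1, and the alternative judged strongly preferable in the matrix receives the largest weight; a full AHP study repeats this per criterion and aggregates, with a consistency-ratio check on each matrix.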

Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine

Procedia PDF Downloads 161
1354 The Epidemiology of Dengue in Taiwan during 2014-15: A Descriptive Analysis of the Severe Outbreaks of Central Surveillance System Data

Authors: Chu-Tzu Chen, Angela S. Huang, Yu-Min Chou, Chin-Hui Yang

Abstract:

Dengue is a major public health concern throughout tropical and sub-tropical regions. Taiwan is located in the Pacific Ocean, overlying the tropical and subtropical zones. The island remains humid throughout the year and receives abundant rainfall, and summer temperatures are very hot in southern Taiwan. These conditions are ideal for the growth of dengue vectors and increase the risk of dengue outbreaks. During the first half of the 20th century, there were three island-wide dengue outbreaks (1915, 1931, and 1942). After almost forty years of dormancy, a DEN-2 outbreak occurred in Liuchiu Township, Pingtung County in 1981. Thereafter, more dengue outbreaks of different scales occurred in southern Taiwan. However, there were more than ten thousand dengue cases in each of 2014 and 2015, which not only affected human health but also caused widespread social disruption and economic losses. This study describes the epidemiology of dengue in Taiwan, especially the severe outbreak in 2015, and seeks effective interventions for dengue control, including dengue vaccine development for the elderly. Methods: The study used the Notifiable Diseases Surveillance System database of the Taiwan Centers for Disease Control as its data source. All cases were reported under a uniform case definition and confirmed by NS1 rapid diagnosis or laboratory diagnosis. Results: In 2014, Taiwan experienced a serious DEN-1 outbreak with 15,492 locally acquired cases, including 136 cases of dengue hemorrhagic fever (DHF), which caused 21 deaths. An even more serious DEN-2 outbreak occurred with 43,419 locally acquired cases in 2015. The epidemic occurred mainly in Tainan City (22,760 cases) and Kaohsiung City (19,723 cases) in southern Taiwan. Most cases were adults. There were 228 deaths due to dengue infection, and the case fatality rate was 5.25‰.
The average age of the deceased was 73.66 years (range 29-96), and 86.84% were older than 60 years. Most of them had comorbidities. Reviewing the clinical manifestations of the 228 death cases, 38.16% (N=87) were reported with warning signs, while 51.75% (N=118) were reported without warning signs. Among the 87 death cases reported as dengue with warning signs, 89.53% were diagnosed with severe dengue and 84% needed intensive care. Conclusion: The year 2015 was characterized by large dengue outbreaks worldwide. The risk of serious dengue outbreaks may increase significantly in the future, and the elderly are the vulnerable group in Taiwan. However, the dengue vaccine licensed at the end of 2015 is only for use in people 9-45 years of age living in endemic settings. In addition to carrying out research to find new interventions in dengue control, developing a dengue vaccine for the elderly is very important to prevent severe dengue and deaths.
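The reported case fatality rate can be checked directly from the figures given (228 deaths among the 43,419 locally acquired cases of 2015), expressed per mille:

```python
# Arithmetic check of the reported 2015 case fatality rate.

deaths, cases = 228, 43_419
cfr_per_mille = deaths / cases * 1000
print(f"{cfr_per_mille:.2f} per mille")  # → 5.25 per mille
```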

Keywords: case fatality rate, dengue, dengue vaccine, the elderly

Procedia PDF Downloads 280
1353 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data

Authors: Sara Bonetti

Abstract:

The use of quantitative data to support policies and justify investments has become imperative in many fields, including education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed to analyze these experiences in accessing, using, and producing quantitative data. The study utilized semi-structured interviews to capture differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, included to allow for a broader, systems-level perspective. The study followed Núñez’s multilevel model of intersectionality. The key to Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of influences at the individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that background and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table.
For example, those with a background in child development were aware of how their formal education had failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within their organization, their organization’s position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and made available. These differences were reflected in the interviewees’ perceptions of, and expectations for, the ECE workforce. For example, one interviewee pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, it was clear that these professionals are not being provided with the necessary support and that data literacy skills are not being built for them intentionally, despite what is asked of them in their work.

Keywords: data literacy, early childhood professionals, intersectionality, quantitative data

Procedia PDF Downloads 252
1352 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

The urge to apply Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, risks are associated with such systems, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open environment that may be unattended, in addition to resource constraints in terms of processing, storage, and power, places such networks under stringent limitations on lifetime (i.e., period of operation) and security. The importance of WSN applications, found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solution systems: software-based network Intrusion Detection Systems (IDSs). However, it has been observed that the developed IDSs are neither secure enough nor accurate enough to detect all malicious attack behaviours. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that IDSs cause in WSNs. In other words, not all requirements are implemented and then traced.
Moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing an IDS. Consequently, this has resulted in inadequate requirement management and inadequate validation and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as the technology evolves and spreads into industrial applications. Therefore, this paper studies the importance of Requirement Engineering in developing IDSs. It also examines a set of existing IDSs and illustrates the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying requirement engineering to systems so as to deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 322
1351 Biotechnological Methods for the Grouting of the Tunneling Space

Authors: V. Ivanov, J. Chu, V. Stabnikov

Abstract:

Different biotechnological methods for producing construction materials and for performing construction processes in situ are being developed within the new scientific discipline of Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rock and porous soil. This is essential for minimizing groundwater flow into construction sites, into the tunneling space before and after excavation, and inside levees, as well as for stopping water seepage from aquaculture ponds, agricultural channels, storage sites for radioactive waste or toxic chemicals, landfills, and polluted soils. Conventional fine or ultrafine cement grouts and chemical grouts have such restrictions as high cost, high viscosity, and sometimes toxicity, whereas biogrouts, which are based on microbial or enzymatic activity and some inexpensive inorganic reagents, can be more suitable in many cases because of their lower cost and low or zero toxicity. Owing to these advantages, the development of biotechnologies for biogrouting is growing rapidly. However, the currently most popular biogrout, based on the activity of urease-producing bacteria initiating the crystallization of calcium carbonate from a calcium salt, has the disadvantages of producing toxic ammonium/ammonia and developing a high pH. Therefore, the aim of our studies was the development and testing of new biogrouts that are environmentally friendly and cheap enough for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies were studied and tested in sand columns, in fissured rock samples, in a 1 m³ tank of sand, and in a pack of stone sheets serving as models of porous soil and fractured rock.
Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea, and a calcium salt, decreased the hydraulic conductivity of sand to 2×10⁻⁷ m/s after 17 days of treatment and consumed almost three times less reagent than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10⁻⁶ m/s after 15 days of treatment; 3) biogrouting of rocks with fissure widths of 65×10⁻⁶ m, using a calcium bicarbonate solution produced from CaCO₃ and CO₂ under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10⁻⁷ m/s after 5 days of treatment. These bioclogging technologies have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and environmental protection.
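As a rough illustration of what these conductivity reductions mean for seepage, Darcy's law Q = k·i·A can be applied with the treated conductivity reported above. The hydraulic gradient, cross-section, and untreated conductivity are assumptions chosen for illustration, not values from the study.

```python
# Darcy's law, Q = k * i * A: seepage rate through a sand layer before and
# after biogrouting. k_biogrouted is the value reported above; the gradient i,
# cross-section A, and untreated conductivity are assumed for illustration.

def seepage_rate(k, i, A):
    """Flow rate in m^3/s for conductivity k (m/s), gradient i, area A (m^2)."""
    return k * i * A

i, A = 1.0, 10.0                 # assumed unit gradient, 10 m^2 cross-section
k_untreated = 1e-4               # order of magnitude for clean sand (assumption)
k_biogrouted = 2e-7              # after 17 days of treatment (reported)

q_before = seepage_rate(k_untreated, i, A)
q_after = seepage_rate(k_biogrouted, i, A)
print(f"reduction factor: {q_before / q_after:.0f}x")  # 500x
```

Under these assumed conditions, bringing the conductivity down to 2×10⁻⁷ m/s cuts the seepage rate by a factor of several hundred, which is the practical point of the bioclogging treatments.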

Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space

Procedia PDF Downloads 207
1350 Effect of Multi-Walled Carbon Nanotubes on Fuel Cell Membrane Performance

Authors: Rabindranath Jana, Biswajit Maity, Keka Rana

Abstract:

The most promising clean energy source is the fuel cell, since it does not generate toxic gases or other hazardous compounds. The direct methanol fuel cell (DMFC) is especially user-friendly, as it is easy to miniaturize and is suited as an energy source for automobiles as well as domestic applications and portable devices. And unlike the hydrogen used for some fuel cells, methanol is a liquid that is easy to store and transport in conventional tanks. The most important part of a fuel cell is its membrane. To date, the overall efficiency of a methanol fuel cell is reported to be about 20-25%. The lower efficiency of the cell may be due to critical factors such as slow reaction kinetics at the anode and methanol crossover. The oxidation of methanol comprises a series of successive reactions creating formaldehyde and formic acid as intermediates, which contribute to slow reaction rates and decreased cell voltage. Currently, the investigation of new anode catalysts to improve oxidation reaction rates is an active area of research as it applies to the methanol fuel cell. Surprisingly, there are very limited reports on nanostructured membranes, which are rather simple to manufacture with different tunable compositions and are expected to allow only proton permeation, not methanol, owing to molecular sizing effects and affinity to the membrane surface. We have developed a nanostructured fuel cell membrane from polydimethylsiloxane rubber (PDMS), ethylene methyl co-acrylate (EMA), and multi-walled carbon nanotubes (MWNTs). The effect of incorporating different proportions of f-MWNTs in the polymer membrane has been studied. The introduction of f-MWNTs into the polymer matrix modified the polymer structure and therefore the properties of the device. The proton conductivity, measured by an AC impedance technique using an open-frame, two-electrode cell, and the methanol permeability of the membranes were both found to depend on the f-MWNTs loading.
The proton conductivity of the membranes increases with f-MWNTs concentration, due to the increased content of conductive material. Methanol permeabilities measured at 60 °C likewise depend on the f-MWNTs loading: the permeability decreased from 1.5×10⁻⁶ cm²/s for the pure film to 0.8×10⁻⁷ cm²/s for a membrane containing 0.5 wt% f-MWNTs, because with an increasing proportion of f-MWNTs the matrix becomes more compact. DSC melting curves show that the polymer matrix with f-MWNTs is thermally stable, FT-IR studies show good interaction between EMA and f-MWNTs, and XRD analysis shows good crystalline behavior of the prepared membranes. Significant cost savings can be achieved with the blended films, which contain less expensive polymers.
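A quick calculation with the permeabilities reported above shows the size of the reduction. The selectivity comparison at the end uses placeholder conductivity values, since the abstract reports only the trend in proton conductivity, not numbers; this is a hedged sketch, not the authors' analysis.

```python
# Methanol-permeability reduction reported above: from 1.5e-6 cm^2/s (pure
# film) to 0.8e-7 cm^2/s (0.5 wt% f-MWNTs). A common figure of merit for a
# DMFC membrane is the ratio of proton conductivity to methanol permeability.

P_pure = 1.5e-6        # cm^2/s, pure PDMS/EMA film (reported)
P_mwnt = 0.8e-7        # cm^2/s, 0.5 wt% f-MWNTs (reported)
print(f"permeability reduced {P_pure / P_mwnt:.1f}-fold")  # 18.8-fold

def selectivity(sigma, P):
    """Selectivity: proton conductivity (S/cm) per unit methanol permeability."""
    return sigma / P

# Hypothetical conductivities, only to show how the figure of merit responds
# when conductivity rises while permeability falls:
print(selectivity(0.012, P_mwnt) > selectivity(0.010, P_pure))  # True
```

Because the loading raises conductivity while cutting permeability nearly 19-fold, any selectivity-type figure of merit improves strongly with f-MWNTs content.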

Keywords: fuel cell membrane, polydimethyl siloxane rubber, carbon nanotubes, proton conductivity, methanol permeability

Procedia PDF Downloads 411
1349 Structural Balance and Creative Tensions in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New Product Development (NPD) involves team members coming together and working in teams to devise innovative solutions to problems, resulting in new products. A core attribute of a successful NPD team is therefore its creativity and innovation. The team needs to be creative as a group, generating a breadth of ideas and innovative solutions that address the problem being targeted and meet users' needs. It also needs to work efficiently as a team through the various stages of developing these ideas, resulting in a proof-of-concept (POC) implementation or a prototype of the product. These are two distinctive traits: ideational creativity, and effective and efficient teamwork. Each trait causes its own kinds of tension in the team, and these tensions are reflected in the team dynamics. Ideational conflicts arising from debate and deliberation increase collective knowledge and affect team creativity positively. However, the same habit of challenging each other's viewpoints may lead team members to be disruptive, resulting in interpersonal tensions, which in turn lead to less efficient teamwork. Teams that foster and effectively manage these creative tensions are successful, while teams unable to manage them show poor performance. In this paper, we explore these tensions as they appear in the team communication social network and propose a Creative Tension Balance (CTB) index, along the lines of the degree of balance in social networks, that has the potential to distinguish successful from unsuccessful NPD teams. Team communication reflects the dynamics among team members and is the dataset for analysis.
Emails between the members of the NPD teams are processed with a latent semantic analysis (LSA) algorithm, to analyze the content of communication, and a semantic similarity analysis, to derive a social network graph that depicts communication among team members based on its content. This social network is then subjected to traditional social network analysis methods to compute established metrics and structural balance metrics. Traditional structural balance is extended with team interaction pattern metrics to arrive at a creative tension balance metric that captures the creative tensions and tension balance in teams. This CTB metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this study comprises 23 NPD teams spread over multiple semesters; the CTB metric is computed for each team and used to classify the teams into low-, medium-, and high-performing groups. The results are correlated with team reflections (for team dynamics and interaction patterns), team self-evaluation feedback surveys (for teamwork metrics), and team performance as measured by a comprehensive team grade (for high- and low-performing team signatures).
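The degree-of-balance idea underlying the CTB index can be sketched on a toy signed network: a triangle is balanced when the product of its edge signs is positive, and the degree of balance is the fraction of closed triangles that are balanced. The edges below are invented for illustration, not drawn from the study's email data, and the sketch omits the paper's interaction-pattern extensions.

```python
from itertools import combinations

# Structural balance on a signed communication graph. +1 marks a constructive
# tie, -1 an interpersonal tension. Toy data, not from the study.

signs = {
    ("A", "B"): +1, ("B", "C"): +1, ("A", "C"): +1,   # A-B-C: balanced
    ("A", "D"): +1, ("B", "D"): -1,                   # A-B-D: (+)(+)(-) < 0
}

def degree_of_balance(signs):
    """Fraction of closed triangles whose edge-sign product is positive."""
    nodes = {n for edge in signs for n in edge}
    def sign(u, v):
        return signs.get((u, v)) or signs.get((v, u))
    balanced = total = 0
    for tri in combinations(sorted(nodes), 3):
        s = [sign(u, v) for u, v in combinations(tri, 2)]
        if None in s:
            continue                   # open triad, not a closed triangle
        total += 1
        balanced += (s[0] * s[1] * s[2] > 0)
    return balanced / total

print(degree_of_balance(signs))  # 0.5: one balanced, one unbalanced triangle
```

A fully balanced network (degree 1.0) would indicate no unresolved tensions; the paper's CTB metric extends this with interaction-pattern terms to separate productive from disruptive tension.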

Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams

Procedia PDF Downloads 79
1348 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador

Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria

Abstract:

There are several tools for assessing the nutritional status of a population, and a main instrument commonly used to build them is the food composition table (FCT). Despite the importance of FCTs, many error sources and variability factors can arise in building these tables and can lead to under- or overestimation of a population's nutrient ingest. This work identified the food composition tables used as instruments to estimate nutrient ingest in Ecuador. Data for choosing the FCTs were collected through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCTs; these variables were defined on the basis of an extensive literature review, and a descriptive content analysis was performed. Ten printed tables and three databases were reported, all of which were indistinctly treated as food composition tables. We obtained information on 69% of the references; several informants referred to printed documents that were not accessible, and internet searches for them were unsuccessful. Of the 9 final tables, 8 are from Latin America, and 5 of these were constructed by the indirect method (compilation of already published data), with a database from the United States Department of Agriculture (USDA) as the main source of information. One FCT was constructed by the direct method (bromatological analysis) and originates in Ecuador. All of the tables make a clear distinction between a food and its method of cooking, 88% express nutrient values per 100 g of edible portion, 77% give precise additional information about the use of the table, and 55% present all the macro- and micronutrients in detail. The most complete FCTs were those of INCAP (Central America) and the Composition of Foods (Mexico).
The most frequently cited table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most of the tables in this study. However, this method has the disadvantage of generating less reliable food composition tables, because foods vary in composition, so a database cannot accurately predict the composition of any isolated sample of a food product. In conclusion, weighing the pros and cons, and despite its being elaborated by the indirect method, we consider it appropriate to work with the INCAP (Central America) FCT, given its proximity to our country and a food item list very similar to ours. It is also imperative to keep as a reference the Ecuadorian food composition table, which, although not updated, was constructed by the direct method with Ecuadorian foods. Both tables will therefore be used to elaborate a questionnaire for assessing the food consumption of the Ecuadorian population; where the two tables give disparate values, only the INCAP values will be taken, as that table is up to date.
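Since most of the tables express nutrient values per 100 g of edible portion, estimating ingest from consumption data reduces to a simple proportion. The foods and nutrient values below are illustrative placeholders, not entries from the INCAP or Ecuadorian tables.

```python
# Estimating nutrient ingest from an FCT: values are given per 100 g of
# edible portion, so ingest = consumed grams / 100 * tabulated value.
# Foods and nutrient values below are hypothetical.

fct = {  # nutrient content per 100 g edible portion
    "rice, cooked":   {"energy_kcal": 130, "protein_g": 2.7},
    "plantain, ripe": {"energy_kcal": 122, "protein_g": 1.3},
}

def ingest(food, grams, nutrient):
    """Nutrient intake for a consumed amount, from per-100 g FCT values."""
    return fct[food][nutrient] * grams / 100.0

meal = [("rice, cooked", 150), ("plantain, ripe", 80)]
total_kcal = sum(ingest(f, g, "energy_kcal") for f, g in meal)
print(total_kcal)  # 130*1.5 + 122*0.8 = 292.6
```

The calculation is trivial, which is exactly why the choice of table matters: every estimate inherits the error and variability of the tabulated per-100 g values.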

Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, ingest of nutrients of Ecuadorians, Latin America food composition tables

Procedia PDF Downloads 430
1347 Environmental Threats and Great Barrier Reef: A Vulnerability Assessment of World’s Best Tropical Marine Ecosystems

Authors: Ravi Kant Anand, Nikkey Keshri

Abstract:

The Great Barrier Reef of Australia is known for its beautiful landscapes and seascapes and its ecological importance. The site was inscribed as a World Heritage site in 1981 and popularized internationally for tourism, recreational activities, and fishing. However, major environmental hazards, such as climate change, pollution, overfishing, and shipping, are degrading this marine ecosystem. Climate change hits the Great Barrier Reef directly through rising sea levels, ocean acidification, rising temperatures, uneven precipitation, changes in El Niño, and increasing numbers of cyclones and storms. Pollution is the second biggest factor destroying the coral reef ecosystem, including excessive use of pesticides and chemicals, eutrophication, pollution from mining, sediment runoff, loss of coastal wetland, and oil spills. Coral bleaching is the biggest problem caused by these environmental threats. Acidification of ocean water reduces the formation of the calcium carbonate skeleton. The floral ecosystem of the ocean (including seagrasses and mangroves) is the key food source for fish and other fauna, but powerful waves, extreme temperatures, destructive storms, and river run-off threaten it. If one natural system is under threat, the whole marine food web is affected, from algae to whales. Poisoning of marine water by different pollutants affects coral production and fish breeding, weakens marine health, and increases the death of fish and corals. As the reef is a World Heritage site, the tourism sector is directly affected, causing rising unemployment, and the fishing sector suffers as well. Fluctuation in ocean water temperature affects coral production, because coral needs an undisturbed location, proper sunlight, and temperatures of up to 21 °C. Storms, El Niño, and rises in temperature and sea level continue to reduce coral production. If the environmental problems of the Great Barrier Reef are not restrained, this well-known ecological treasure, with its coral reefs, pelagic environments, algal meadows, coasts and estuaries, mangrove forests and seagrasses, fish species, coral gardens, and one of the world's best tourist spots, will be lost in the coming years. This research focuses on the different environmental threats, their socio-economic impacts, and different conservation measures.

Keywords: climate change, overfishing, acidification, eutrophication

Procedia PDF Downloads 373
1346 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended on developing intelligent structures for active vibration reduction and structural health monitoring. Such structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying them by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for monitoring and controlling structural vibration. The size and location of the sensors and actuators are important research topics, because they affect the level of vibration detection and reduction and the amount of energy demanded by the controller. Several methodologies have been presented for determining the optimal location of a limited number of sensors and actuators on small-scale structures. However, these studies have tackled the problem directly, evaluating a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method for determining optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle.
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six vibration modes. The resulting optimal sensor locations agree well with published optimal locations but are obtained with much less computational effort and higher effectiveness. Furthermore, collocated sensor/actuator pairs placed at these locations are shown to give very effective active vibration reduction under an optimal linear quadratic control scheme.
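The sensor-effectiveness ranking described above can be sketched as follows. The voltage matrix here is synthetic; in the paper, the voltages come from a finite element model of the plate with sensors at all candidate locations.

```python
# Sensor-placement ranking: for each candidate location, divide its output
# voltage by the per-mode maximum, average the percentages across modes, and
# keep the highest-scoring locations. Synthetic data, for illustration only.

voltages = [  # rows: candidate locations, columns: modes 1..3 (volts)
    [0.9, 0.2, 0.8],
    [0.3, 1.0, 0.4],
    [1.0, 0.5, 1.0],
    [0.1, 0.1, 0.2],
]

def effectiveness(voltages):
    """Average percentage effectiveness of each location over all modes."""
    n_modes = len(voltages[0])
    mode_max = [max(row[j] for row in voltages) for j in range(n_modes)]
    return [100.0 * sum(row[j] / mode_max[j] for j in range(n_modes)) / n_modes
            for row in voltages]

scores = effectiveness(voltages)
best = sorted(range(len(scores)), key=lambda i: -scores[i])[:2]
print(best)  # [2, 0]: the two most effective locations
```

Because every candidate location is scored in a single forced-response analysis, this avoids the combinatorial search over location sets that makes the genetic-algorithm approaches expensive.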

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 230
1345 Innovative Technologies Functional Methods of Dental Research

Authors: Sergey N. Ermoliev, Margarita A. Belousova, Aida D. Goncharenko

Abstract:

Application of a diagnostic complex of highly informative functional methods (electromyography, reodentography, laser Doppler flowmetry, reoperiodontography, vital computer capillaroscopy, optical tissue oximetry, laser fluorescence diagnosis) makes it possible to perform a multifactorial analysis of dental status and to prescribe complex etiopathogenetic treatment. Introduction. A complex of innovative, highly informative, and safe functional diagnostic methods is needed to improve the quality of patient treatment through the early detection of stomatologic diseases. The purpose of the present study was to investigate the etiology and pathogenesis of the functional disorders identified in pathology of the hard tissues, dental pulp, periodontium, oral mucosa, and chewing function, and to create new approaches to the diagnosis of dental diseases. Material and methods. 172 patients were examined. The density of the hard tissues of the teeth and jaw bone was studied by intraoral ultrasonic densitometry (USD). The electromyographic activity of the masticatory muscles was assessed by electromyography (EMG). The functional state of the dental pulp vessels was assessed by reodentography (RDG) and laser Doppler flowmetry (LDF). The reoperiodontography method (RPG) was used to study regional blood flow in the periodontal tissues. The periodontal microcirculatory vasculature was studied by vital computer capillaroscopy (VCC) and laser Doppler flowmetry (LDF). The metabolic level of the mucous membrane was determined by optical tissue oximetry (OTO) and laser fluorescence diagnosis (LFD). Results and discussion. The results revealed changes in the mineral density of the hard tissues of the teeth and jaw bone, in the bioelectric activity of the masticatory muscles, and in regional blood flow and microcirculation in the dental pulp and periodontal tissues. The LDF and OTO methods estimated fluctuations in the saturation level and oxygen transport in the periodontal microvasculature. LFD identified changes in the concentrations of compounds (nicotinamide, flavins, lipofuscin, porphyrins) involved in metabolic processes. Our preliminary results confirmed the feasibility and safety of the intraoral ultrasonic densitometry technique for assessing the density of periodontal bone tissue. Conclusion. Application of the diagnostic complex of the above-mentioned highly informative functional methods makes it possible to perform a multifactorial analysis of the dental status and to prescribe complex etiopathogenetic treatment.

Keywords: electromyography (EMG), reodentography (RDG), laser Doppler flowmetry (LDF), reoperiodontography method (RPG), vital computer capillaroscopy (VCC), optical tissue oximetry (OTO), laser fluorescence diagnosis (LFD)

Procedia PDF Downloads 279
1344 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes

Authors: Mohsen Hababalahi, Morteza Bastami

Abstract:

Buried pipeline damage correlations are a critical part of the loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed in previous earthquakes, and there are many comprehensive reports on it. One of the main reasons for the impairment of buried pipelines during an earthquake is liquefaction, whose necessary conditions are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipeline structures are very different from other structures (being long and light in mass), comparison of the results of previous earthquakes with other structures shows that the danger of liquefaction to buried pipelines is not high unless the governing factors, such as earthquake intensity and loose soil, are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical work as well as damage investigations of actual earthquakes. Damage statistics from past severe earthquakes reveal that the damage ratio of pipelines (number/km) is much larger in liquefied ground than in ground shaken without liquefaction, and that damage to joints and to pipelines connected with manholes is considerable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, with the 2013 Dashti (Iran) earthquake as a case study. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and the water transmission pipelines were severely damaged due to liquefaction. The model consists of a polyethylene pipeline, 100 m long and 0.8 m in diameter, covered by light sandy soil at a burial depth of 2.5 m from the surface.
Since the finite element method is used relatively successfully for geotechnical problems, we use it for the numerical analysis. Evaluating this case requires geotechnical information, classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modelling of the interaction between soil and pipeline. The results indicate that the effect of liquefaction is a function of pipe diameter, soil type, and peak ground acceleration, and that there is a clear increase in the percentage of damage with increasing liquefaction severity. Although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined "failures" are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given for decreasing the liquefaction risk to buried pipelines.
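The reported trend, damage rising with peak ground acceleration and far higher in liquefied ground, can be sketched with a toy damage-ratio model. The power-law form, coefficients, and liquefaction multiplier below are hypothetical, chosen only to illustrate the trend, and are not fitted values from this study.

```python
# Toy damage-ratio model: repairs per km grow with peak ground acceleration
# (PGA), and liquefied ground shows a much larger ratio than shaking alone.
# All coefficients are hypothetical illustrations, not from this paper.

def damage_ratio(pga_g, liquefied, k=0.5, liquefaction_factor=10.0):
    """Repairs per km as a simple power law in PGA, scaled if liquefied."""
    base = k * pga_g ** 1.5
    return base * (liquefaction_factor if liquefied else 1.0)

for pga in (0.2, 0.4):
    print(pga, damage_ratio(pga, False), damage_ratio(pga, True))
```

In a real loss-estimation workflow, the form and coefficients would be calibrated against repair statistics of the kind cited above, separately for each pipe material and soil type.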

Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method

Procedia PDF Downloads 510
1343 Assessing Children’s Probabilistic and Creative Thinking in a Non-formal Learning Context

Authors: Ana Breda, Catarina Cruz

Abstract:

Daily, we face unpredictable events that are often attributed to chance, since no justification for their occurrence is apparent. Chance, understood as a source of uncertainty, is present in several aspects of human life, such as weather forecasts, dice rolling, and lotteries. Surprisingly, humans and some animals can quickly adjust their behavior to handle doubly stochastic processes efficiently (random events with two layers of randomness, like unpredictable weather affecting dice rolling). This ability suggests that the human brain has built-in mechanisms for perceiving, understanding, and responding to simple probabilities. It also explains why current trends in mathematics education include probability concepts in official curricula from the third year of primary education onwards. In the first years of schooling, children learn to use specific vocabulary, such as never, always, rarely, perhaps, likely, and unlikely; these keywords are of crucial importance for their perception and understanding of probabilities. The development of probabilistic concepts stems from facts and cause-effect sequences resulting from the subject's actions, as well as from the notion of chance and intuitive estimates based on everyday experience. As part of a junior summer school program at a Portuguese university, a non-formal learning experiment was carried out with 18 children in the 5th and 6th grades. The experiment was designed as a serious ice-breaking game, in order to assess the children's levels of probabilistic, critical, and creative thinking in understanding impossible, certain, equally probable, likely, and unlikely events, and to gain insight into how the non-formal learning context influenced their achievements.
The criteria used to evaluate probabilistic thinking included the creative ability to conceive events in the specified categories, the ability to properly justify the categorization, the ability to critically assess the events classified by other children, and the ability to make predictions based on a given probability. The data analysis employs a qualitative, descriptive, and interpretative-methods approach based on students' written productions, audio recordings, and researchers' field notes. This methodology allowed us to conclude that such an approach is an appropriate and helpful formative assessment tool. The promising results of this initial exploratory study call for a future study with children at these levels of education, from different regions, attending public or private schools, to validate and expand our findings.
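The event categories used in the activity can be expressed as a small classifier. The cut-offs separating "unlikely" from "likely" follow everyday usage and are an assumption of this sketch, not the study's scoring rubric.

```python
from fractions import Fraction

# Classifying events by probability with the vocabulary used in the activity:
# impossible, unlikely, equally probable, likely, certain.

def classify(p):
    """Map a probability to the activity's event category (assumed cut-offs)."""
    p = Fraction(p).limit_denominator()
    if p == 0:
        return "impossible"
    if p == 1:
        return "certain"
    if p == Fraction(1, 2):
        return "equally probable"
    return "unlikely" if p < Fraction(1, 2) else "likely"

print(classify(Fraction(1, 6)))   # rolling a 3 with one die -> unlikely
print(classify(Fraction(1, 2)))   # heads on a fair coin -> equally probable
print(classify(1))                # "tomorrow comes after today" -> certain
```

Children in the activity essentially perform this mapping intuitively; the assessment asks them to justify the category, which the rule above leaves implicit.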

Keywords: critical and creative thinking, non-formal mathematics learning, probabilistic thinking, serious game

Procedia PDF Downloads 25
1342 Biocompatibility Tests for Chronic Application of Sieve-Type Neural Electrodes in Rats

Authors: Jeong-Hyun Hong, Wonsuk Choi, Hyungdal Park, Jinseok Kim, Junesun Kim

Abstract:

Identifying the chronic performance of an implanted neural electrode is important for acquiring neural signals through the electrode or restoring nerve function after peripheral nerve injury. The purpose of this study was to investigate the biocompatibility of a neural electrode chronically implanted into the sciatic nerve. A sieve-type neural electrode was implanted between the proximal and distal ends of a transected sciatic nerve in the experimental group (sieve group, n=6), while end-to-end epineural repair of the cut sciatic nerve was performed in the control group (reconstruction group, n=6). All surgeries were performed on the sciatic nerve of the right leg of Sprague Dawley rats. Behavioral tests were performed before surgery, at 1, 4, 7, 10, and 14 days after surgery, and then weekly until 5 months after surgery. Changes in sensory function were assessed by measuring paw withdrawal responses to mechanical and cold stimuli. Motor function was assessed by motion analysis using the Qualisys program, which yields the range of motion (ROM) of the joints. Neurofilament heavy chain and fibronectin expression were examined 5 months after surgery. In both groups, the paw withdrawal response to mechanical stimuli was slightly decreased from 3 weeks after surgery and significantly decreased at 6 weeks. The paw withdrawal response to cold stimuli increased from 4 days after surgery in both groups and began to decrease from 6 weeks. The ROM of the ankle joint showed a similar pattern in both groups: it significantly increased from 1 day after surgery and then decreased from 4 days after surgery. Neurofilament heavy chain expression was observed throughout the sciatic nerve tissue in both groups; notably, in the sieve group, several neurofilaments passed through the channels of the sieve-type neural electrode. In the reconstruction group, however, a suture line remained visible in neurofilament heavy chain staining up to 5 months after surgery. In the reconstruction group, fibronectin was detected throughout the sciatic nerve, whereas in the sieve group it was observed only in the nervous tissue surrounding the implanted electrode. These results demonstrate that the implanted sieve-type neural electrode induced a focal inflammatory response, but that chronic implantation did not cause any further inflammatory response following peripheral nerve injury, suggesting the possibility of chronic application of sieve-type neural electrodes. This work was supported by the Basic Science Research Program funded by the Ministry of Science (2016R1D1A1B03933986) and by the convergence technology development program for bionic arm (2017M3C1B2085303).

Keywords: biocompatibility, motor functions, neural electrodes, peripheral nerve injury, sensory functions

Procedia PDF Downloads 147
1341 The Role of High Schools in Saudi Arabia in Supporting Young Adults with Intellectual Disabilities with Their Transition to Post-secondary Education

Authors: Sohil I. Alqazlan

Abstract:

Introduction and Objectives: There is limited research focusing on young adults with intellectual disabilities (ID) and their experiences after finishing compulsory education, especially in the Middle Eastern/Arab countries. This paper aims to further understand the lives of young adults with ID in Riyadh [the capital city of Saudi Arabia], particularly as they go on to access Post-Secondary Education [PSE]. As part of this study, it is important to understand the roles of high schools in Riyadh in terms of preparing their students for post-school life. To achieve this, the researcher has asked Saudi Arabia’s Ministry of Education to provide student transition plans (TPs) for post-school opportunities. However, and unfortunately, high schools in Riyadh do not use transition plans for their students. Therefore, the researcher has requested individual education plans (IEPs) for students with ID in their final year at high school to find the type of support the students had regarding both their long- and short-term goals that might help them access PSE or the labour market. Methods: The researcher analysed 10 IEPs of students in their final year at high school. To achieve the aim of the study, the researcher compared these IEPs with expectations set out in the official IEP framework of the MoE in Saudi Arabia, such as collaboration on the IEP sample and the focus on adult life. By analysing the students’ IEPs in terms of various goals, this study attempts to highlight skills that might offer students more independence after finishing compulsory education and going on to PSE. Results: Unfortunately, communication between IEP team members proved persistently absent in the sample. This was clear from the fact that none of the team members, apart from the SEN teachers, had signed any of the IEPs. Thus, none of the daily or weekly goals outlined were sent to parents to review at home. As a result of this, there were no goals in the IEPs that clearly referred to PSE. 
However, some long-term goals were set which might help those with ID become more independent in their adult life. For example, in the IEPs that dealt with computer skills, students had goals related to using Microsoft Word. Only one goal across these IEPs set out an important independence skill for young adults with ID: “the student will learn how to use public transportation”. Conclusions: From analysing the ten IEPs, it was clear that SEN teachers in Riyadh schools were working without any help from other professionals. The students with ID, as well as their families, were not consulted for their views on important goals. Therefore, more work needs to be done with the students regarding their transition to PSE, perhaps by building partnerships between high schools and potential PSE institutions. Finally, more PSE programmes and a higher level of employer awareness could help create a bridge for students transferring from high school to PSE. Schools could also direct their IEP goals towards the specific PSE programmes a student might attend, which could increase their chances of success.

Keywords: high school, post-secondary education, PSE, students with intellectual disabilities

Procedia PDF Downloads 169
1340 Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate

Authors: Tomas Makaras, Gintaras Svecevičius

Abstract:

Landfill waste is a common problem, with economic and environmental impacts that persist even after a landfill is closed. Landfill waste contains a high density of various persistent compounds such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of Kairiai landfill (WGS: 55°55‘46.74“, 23°23‘28.4“) leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using an original system package developed in our laboratory for automated monitoring, recording and analysis of aquatic organisms’ activity, and to determine patterns of the behavioral response of fish to sublethal levels of leachate. Four concentrations of leachate were chosen: 0.125, 0.25, 0.5 and 1.0 mL/L (0.0025, 0.005, 0.01 and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10 and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold-effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50). This concentration is 2.8-fold lower than the concentration generally assumed to be “safe” for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response in the test fish to sublethal levels of pollutants. The ability of the rainbow trout to detect and avoid contaminants appeared after 5 minutes of exposure. The intensity of locomotor activity peaked within 10 minutes and evidently decreased after 30 minutes, which could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It was established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration.
Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that can be used, in combination with the system for fish behavior monitoring, registration and analysis, to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvements to include more parameters of aquatic organisms’ behavior and to investigate the most rapid and appropriate behavioral responses in different species. In practice, this study could form the basis for the development of biological early-warning systems (BEWS).
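The concentration-to-LC50 conversion above is simple arithmetic; a minimal sketch follows, assuming the 96-hour LC50 of 50 mL/L implied by the reported ratios (it is not stated directly in the abstract).

```python
# Express tested leachate concentrations as fractions of the 96-hour
# LC50. The LC50 of 50 mL/L is inferred from the reported ratios
# (0.125 mL/L -> 0.0025 of LC50), not stated directly in the abstract.
LC50_96H = 50.0  # mL/L

def lc50_fraction(conc_ml_per_l, lc50=LC50_96H):
    return conc_ml_per_l / lc50

for c in (0.125, 0.25, 0.5, 1.0):
    print(f"{c} mL/L -> {lc50_fraction(c)} of LC50")
```

The same conversion gives the reported threshold of 0.18 mL/L as 0.0036 of the LC50.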

Keywords: fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects

Procedia PDF Downloads 270
1339 Energy Harvesting and Storage System for Marine Applications

Authors: Sayem Zafar, Mahmood Rahi

Abstract:

Rigorous international maritime regulations are in place to limit hydrocarbon emissions from boats and ships, and global sustainability goals call for reducing fuel consumption and minimizing emissions from them. These maritime sustainability goals have attracted considerable research interest. In this study, an energy harvesting and storage system is designed based on hybrid renewable and conventional energy systems. It is designed for marine applications such as boats and small ships, and can be used for mobile applications or off-grid remote electrification. This study analyzed the use of micro power generation for boats and small ships. The energy harvesting and storage system has two distinct parts: a dockside shore-based system and an on-board system. The shore-based system consists of a small wind turbine, photovoltaic (PV) panels, a small gas turbine, a hydrogen generator (electrolyzer) and a high-pressure hydrogen storage tank. This dockside system gives boats and small ships easy access to a supply of hydrogen. The on-board system consists of hydrogen storage tanks and fuel cells. The wind turbine and PV panels generate electricity to operate the electrolyzer, while the small gas turbine serves as a supplementary power source in case the hybrid renewable energy system does not provide the required energy. The electrolyzer performs electrolysis on distilled water to produce hydrogen, which is stored in high-pressure tanks. Hydrogen from the high-pressure tank is transferred to low-pressure tanks on board seagoing vessels to operate the fuel cell. The boats and small ships use the hydrogen fuel cell to power electric propulsion motors and for on-board auxiliary use. For the shore-based system, a small wind turbine with a total length of 4.5 m and a rotor disk diameter of 1.8 m is used.
These dimensions make the turbine large enough to charge batteries yet small enough to be installed on the rooftop of a dockside facility, and also easily transportable. In this paper, PV sizing and solar flux are studied parametrically. System performance is evaluated under different operating and environmental conditions, and the parametric study evaluates the energy output and storage capacity of the energy storage system. Results are generated for a wide range of conditions to analyze the usability of the hybrid energy harvesting and storage system. This energy harvesting method significantly improves the usability and output of the renewable energy sources, and shows that small hybrid energy systems have promising practical applications.
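A rough sense of the dockside turbine's scale can be sketched from the rotor disk diameter given above; the air density, wind speed and power coefficient below are illustrative assumptions, not values from the study.

```python
import math

# Rough sizing sketch for the dockside wind turbine: power available from
# the 1.8 m rotor disk named in the text. Air density, wind speed and power
# coefficient are illustrative assumptions, not values from the study.
RHO_AIR = 1.225   # kg/m^3, sea-level air density
DIAMETER = 1.8    # m, rotor disk diameter (from the text)
CP = 0.35         # assumed power coefficient (Betz limit is ~0.593)

def turbine_power_w(wind_speed_ms, diameter=DIAMETER, cp=CP):
    area = math.pi * (diameter / 2) ** 2          # swept area, m^2
    return 0.5 * RHO_AIR * area * wind_speed_ms ** 3 * cp

print(round(turbine_power_w(8.0)))  # a few hundred watts at 8 m/s
```

An output of a few hundred watts at a brisk 8 m/s wind is consistent with the text's framing of the turbine as a battery-charging unit rather than a primary power source.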

Keywords: energy harvesting, fuel cell, hybrid energy system, hydrogen, wind turbine

Procedia PDF Downloads 136
1338 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective

Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou

Abstract:

The analysis of geographic inequality relies heavily on location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual records and link them to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected level of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the analyzed results, a problem well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that seek homogeneity in population and household counts. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of the outcomes. For the Taitung County mortality data from January 1st, 2008 to December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination area level (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment).
The management and analysis of the statistical data referenced to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow was developed that consists of the processing of death certificates, the geocoding of street addresses, the quality assurance of geocoded results, the automatic calculation of statistical measures, the standardized encoding of measures and the geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways, helping to avoid flawed decision-making.
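For readers unfamiliar with the measure being mapped, a minimal sketch of direct age standardization follows; the age groups, counts and standard-population weights are illustrative, not Taitung County data.

```python
# Sketch of direct age standardization, the measure behind the ASDR maps
# described above. The age groups, counts and standard-population weights
# are illustrative, not Taitung County data.
age_groups = [
    # (deaths, local population, standard population weight)
    (5, 10_000, 0.30),   # ages 0-39
    (20, 8_000, 0.40),   # ages 40-64
    (60, 4_000, 0.30),   # ages 65+
]

def asdr_per_100k(groups):
    # Weight each age-specific death rate by the standard population share.
    return sum(d / pop * w for d, pop, w in groups) * 100_000

print(asdr_per_100k(age_groups))  # ~565 per 100,000 for these inputs
```

Because the same standard weights are applied to every spatial unit, ASDRs computed this way are comparable across townships or dissemination areas despite differing age structures.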

Keywords: mortality map, spatial patterns, statistical area, variation

Procedia PDF Downloads 258
1337 A Study on Unplanned Settlement in Kabul City

Authors: Samir Ranjbar, Nasrullah Istanekzai

Abstract:

According to a report published in The Guardian, Kabul, the capital city of Afghanistan, is the fifth fastest growing city in the world, its population having increased fourfold since 2001, from 1.2 million to 4.8 million people. The main reason for this increase is identified as the return of Afghans who migrated during the civil war. In addition, steep economic growth driven by foreign assistance over the last decade created many job opportunities in Kabul, attracting individuals from the neighboring provinces as well. However, the development of urban facilities such as water supply, housing, transportation and waste management systems has yet to catch up with this rapid increase in population. Kabul city has developed informally, and municipal governance has had very limited capacity to enforce municipal bylaws; as an unwanted consequence of this growth, 70% of Kabul's citizens have settled informally, meaning around three million people live in informally settled areas lacking the vital social and physical infrastructure of livelihood. This research focuses on a 30 ha area with 2,100 residents in the center of Kabul city. A comprehensive land readjustment concept plan has been formulated for this area, through which the physical and social infrastructure has been demonstrated and analyzed. The findings of this paper propose a solution for the problems of this unplanned area in Kabul: readjusting the unplanned area through a self-supporting process. This process does not require a governmental budget and can be applied by government, private sectors and landowner associations.
Furthermore, by implementing the land readjustment process, conceptual plans can be built for unplanned areas that bring maximum facilities to residents' urban life, improve the environment for the users' benefit, promote a culture and sense of cooperation, participation and coexistence among people, improve the transport system, and improve economic status, since the value of land increases with infrastructure availability and land legalization. In addition to these public benefits, government revenue can be raised by collecting taxes from landowners. This process has been implemented in many countries of the world: it was first implemented in Germany and later in most cities of Japan, and it is known as one of the effective processes for infrastructural development. To sum up, the notable characteristic of the land readjustment process is that it works on the concept of mutual interest, in which both landowners and the government benefit. However, community engagement is very important in this process, and without public cooperation it can fail.

Keywords: land readjustment, informal settlement, Kabul, Afghanistan

Procedia PDF Downloads 252
1336 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of three Python scripts, all easily accessible through a Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with manually labeled data for neuronal cell locations and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using grayscale image conversion and binary thresholding, allowing computer vision to better distinguish between cells and non-cells.
Its results were also comparable to manually analyzed results, but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer-vision analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 82
1335 The Effect of Disseminating Basic Knowledge on Radiation in Emergency Distance Learning of COVID-19

Authors: Satoko Yamasaki, Hiromi Kawasaki, Kotomi Yamashita, Susumu Fukita, Kei Sounai

Abstract:

People are susceptible to rumors when the cause of their health problems is unknown or invisible. For individuals to be unaffected by rumors, they need basic knowledge and correct information. Community health nursing classes use cases where basic knowledge of radiation can be applied on a regular basis, thereby teaching that basic knowledge is important in preventing anxiety caused by rumors. Nursing students need to learn that preventive activities are essential for public health nursing care. This is the same methodology used to reduce COVID-19 anxiety among individuals. This study verifies the learning effect, via emergency distance learning, of the basic knowledge of radiation necessary for case consultation. Sixty third-year nursing college students agreed to participate in this research. Knowledge tests conducted before and after the classes, comprising five questions on the lesson content, were compared using the chi-square test, with significance set at 5%. The students’ reports, which described the results of responding to health consultations, were analyzed qualitatively and descriptively. In this case study, a person living in an area not affected by radiation was anxious about drinking water and thus consulted a student. The lecture content was selected to cover the minimum knowledge needed to answer the consultation: specifically, hot spots, internal exposure risk, food safety, the characteristics of cesium-137, and precautions for counselors. Before taking the class, the question students most often answered correctly concerned daily behavior carrying a risk of internal exposure (52.2%), while the question with the fewest correct answers concerned the selection of places likely to be hot spots (3.4%). Correct responses to all questions increased significantly after taking the class (p < 0.001).
The answers written by the students as counselors included 'Cesium binds strongly to the soil, so it is difficult for it to transfer to water' and 'Water quality test results for tap water are posted on the city's website.' These were concrete answers grounded in specialized knowledge. Even in emergency distance learning, the students gained basic knowledge regarding radiation and created documents applying that knowledge to a concretely imagined situation. The flipped classroom method, even when conducted remotely, appeared to maintain students' learning, and setting out the specific knowledge and the scenes in which it would be used appeared to enhance the learning effect. By adapting the case to the anxiety caused by infectious diseases, students may be able to effectively gain the basic knowledge needed to reduce residents' anxiety about infectious diseases.
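The before/after comparison described above can be sketched as a 2x2 chi-square test; the counts below are illustrative, loosely based on the reported 52.2% pre-class correct rate for sixty students, not the study's raw data.

```python
# Sketch of the pre/post knowledge comparison as a 2x2 chi-square test
# (correct vs. incorrect answers, before vs. after the class). The counts
# are illustrative, loosely based on the reported 52.2% pre-class correct
# rate for ~60 students, not the study's raw data.
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

pre_correct, pre_wrong = 31, 29    # before the class
post_correct, post_wrong = 58, 2   # after the class
chi2 = chi_square_2x2(pre_correct, pre_wrong, post_correct, post_wrong)
print(chi2 > 3.84)  # exceeds the df=1 critical value at the 5% level
```

Any statistic above the 3.84 critical value (df=1) rejects the null hypothesis of no change at the 5% level quoted in the abstract.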

Keywords: effect of class, emergency distance learning, nursing student, radiation

Procedia PDF Downloads 113
1334 The Use of Rotigotine to Improve Hemispatial Neglect in Stroke Patients at the Charing Cross Neurorehabilitation Unit

Authors: Malab Sana Balouch, Meenakshi Nayar

Abstract:

Hemispatial neglect is a common disorder primarily associated with right hemispheric stroke, occurring in up to 82% of cases in the acute phase. Such individuals fail to acknowledge or respond to people and objects in their left field of vision due to deficits in attention and awareness. Persistent hemispatial neglect significantly impedes post-stroke recovery, leading to longer hospital stays, increased functional dependency, longer-term disability in activities of daily living (ADLs) and an increased risk of falls. Recently, evidence has emerged for the use of the dopamine agonist rotigotine in neglect. The aim of our Quality Improvement Project (QIP) is to evaluate and improve the current protocols and practice in the assessment, documentation and management of neglect and rotigotine use at the Charing Cross Hospital Neurorehabilitation Unit (CNRU). In addition, it sheds light on rotigotine use in the management of hemispatial neglect and paves the way for future research in the field. Our QIP was based in the CNRU. All patients admitted to the CNRU with a right-sided stroke from 2 February 2018 to 2 February 2021 were included in the project. Each patient’s multidisciplinary team report and hospital notes were searched for information, including biodata, fulfilment of the inclusion criterion (having hemispatial neglect) and data related to rotigotine use: whether or not the drug was administered, any contraindications in patients who did not receive it, and any therapeutic benefits (subjective or objective improvement in neglect) in those who did. Data were entered into an Excel sheet, and further statistical analysis was done in SPSS 20.0. Of the 80 patients with right-sided strokes, 72.5% were infarcts and 27.5% were hemorrhagic strokes, with the vast majority of both types occurring in the middle cerebral artery (MCA) territory.
A total of 31 (38.8%) of our patients were noted to have hemispatial neglect, with the highest number of cases associated with MCA strokes; almost half of our patients with MCA strokes suffered from neglect, which was more common in male patients. Of the 31 patients with visuospatial neglect, only 16% actually received rotigotine; of these, 80% showed an objective improvement on their neglect tests and 20% reported subjective improvement. After thoroughly reviewing the neglect-associated documentation, the following recommendations were put in place for the future. We plan to liaise with the occupational therapy team at our rehabilitation unit to set a battery of tests to be performed on all patients presenting with neglect, and we recommend clear documentation of the outcome of each neglect screen. We also plan to create two proformas: one for the therapy team, to aid systematic documentation of neglect screens done before and after rotigotine administration, and a second for the medical team, with clear documentation of rotigotine use, its benefits and any contraindications if it was not administered.

Keywords: hemispatial Neglect, right hemispheric stroke, rotigotine, neglect, dopamine agonist

Procedia PDF Downloads 72
1333 Role of Psychological Capital in Organizational and Personal Outcomes: An Exploratory Study of Medical Professionals in Pakistan

Authors: Shazia Almas, Jaffar Iqbal, Nazia Almas

Abstract:

In South Asian countries like Pakistan, the medical profession is one of the most valued and respected professions, yet being a medical professional entails an enormous amount of responsibility and work overload, which can conflict with a doctor's family role. Job and family are the two primary spheres of a person's life, whatever profession one adopts and whatever type of family one runs, and there is a bi-directional relationship between them. The type and nature of work, time schedules and working shifts in the medical profession are very demanding in countries like Pakistan, where the number of patients is far higher than the number of doctors available. Work life also has a significant impact on family life and vice versa. Because of the sensitivity and interdependency of these relations, today’s overarching and competing demands often remain unsatisfied. The main objective of the current research is to investigate how interpersonal relationships affect work and how work affects the interpersonal relationships of medical professionals. In line with this, the current study examined the predictive role of psychological capital (self-efficacy, hope, optimism, and resilience) in an organizational outcome (job satisfaction) and a personal outcome (family satisfaction) amongst male and female medical professionals. A total of 350 participants from public and private sector hospitals in Pakistan were recruited through simple random and stratified sampling techniques, with ages ranging from 26 to 50 years. Established and validated self-report measures, namely the Psychological Capital Questionnaire and scales for job satisfaction and family satisfaction, were adopted to collect the data. The reliability and validity of these instruments were established through Cronbach’s alpha and factor analyses (exploratory and confirmatory), respectively, using Structural Equation Modeling (SEM) in AMOS.
The proposed hypotheses were tested using Pearson’s correlation and regression analyses for predictive effects, while a t-test was used to examine differences between male and female health professionals. The results revealed that self-efficacy and optimism predicted job satisfaction, while self-efficacy, hope, and resilience predicted family satisfaction. Moreover, the results showed significant gender differences in job satisfaction, with females scoring higher than male medical professionals, but no significant differences were observed in levels of family satisfaction between the genders. The study has implications for social, organizational and work policy designers, and it paves the way for further research taking a positive psychological approach to promoting work-family harmony.
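The correlation step described above can be sketched in a few lines; the facet name and scores below are illustrative, not the study's data.

```python
import math

# Sketch of the correlation step: Pearson's r between one psychological
# capital facet (self-efficacy) and job satisfaction. The scores are
# illustrative, not the study's data.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

self_efficacy = [3.0, 3.5, 4.0, 4.5, 5.0]
job_satisfaction = [2.8, 3.4, 3.9, 4.6, 4.8]
print(round(pearson_r(self_efficacy, job_satisfaction), 2))
```

A strong positive r between a facet and an outcome motivates the follow-up regression used to test the predictive hypotheses.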

Keywords: family satisfaction, job satisfaction, medical professionals, psychological capital

Procedia PDF Downloads 249
1332 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol or design, or a combination of these, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors; because perception influences purchase decisions, building that perception is critical. The FMCG industry is a low-margin business in which volumes hold the key to success, so the industry places a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing them; brand loyalty is fickle, and companies know this, which is why they work relentlessly towards brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It is hypothesized that after liberalization, Indian companies have taken up the challenge of globalization and some are giving stiff competition to MNCs; that MNCs have a stronger brand image than Indian companies; and that the advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time-series data are available from 1996 to 2014 for the eight selected companies, chosen on the basis of their large market share, brand equity and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth and brand value through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Brand value is estimated as the excess of a company's market capitalization over its net worth, and brand value indices are calculated.
Correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies, such as ITC, are outstanding leaders in terms of market capitalization and brand value; Dabur and Tata Global Beverages Ltd are competing equally well on these measures. Advertisement expenditure is highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors also affect it. Brand values have been decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they must put consistent, proactive and relentless effort into their brand-building process: brands need focus and consistency, and brand longevity without innovation leads to brand respect but does not create brand value.
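The two computations described above (brand value as the excess of market capitalization over net worth, and a trend growth rate fitted by log-linear regression) can be sketched as follows; all figures are illustrative, not CMIE Prowess data.

```python
import math

# Sketch of the paper's two computations: brand value as the excess of
# market capitalization over net worth, and a trend growth rate fitted by
# log-linear regression. All figures are illustrative, not Prowess data.
def brand_value(market_cap, net_worth):
    return market_cap - net_worth

def trend_growth_rate(series):
    """Fit ln(y) = a + b*t by least squares; exp(b) - 1 is the trend
    (compound) growth rate per period."""
    n = len(series)
    t = range(n)
    y = [math.log(v) for v in series]
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y)) \
        / sum((ti - t_mean) ** 2 for ti in t)
    return math.exp(b) - 1

caps = [100.0, 110.0, 121.0, 133.1]   # market capitalization by year
worths = [40.0, 42.0, 44.0, 46.0]     # net worth by year
values = [brand_value(c, w) for c, w in zip(caps, worths)]
print(values, round(trend_growth_rate(values), 3))
```

Fitting the trend on log values, as here, gives a compound growth rate that is comparable across companies of very different sizes.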

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 356