Search results for: blood flow restriction training
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10686

906 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education

Authors: Ray Bryant

Abstract:

3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study highlighting the uses and benefits of artificial intelligence in the fight against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating that a student is being victimized. The program further illustrates how to recognize and respond to trauma, teaches the steps to take to report human trafficking, and explains how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code artificial intelligence (AI) automation, to create a research study using natural language processing (NLP), a branch of AI, to measure the effectiveness of the prevention education program. Applying the logic created for the study, the platform analyzed and categorized each story. If a story, taken directly from the educator, demonstrated one or more of the desired outcomes (increased awareness, increased knowledge, or intended behavior change), a label was applied, and the system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results for the 30,000 stories show that a significant majority of the participants now have increased awareness of the issue, demonstrated better knowledge of how to help prevent the crime, and expressed an intention to change how they approach their daily work. In addition, approximately 30% of the stories contained comments in which educators expressed that they wished they had had this knowledge sooner, as they could think of many students they would have been able to help. Research objective: to analyze and accurately categorize more than 30,000 data points of participant feedback, using AI and natural language processing, in order to evaluate the success of a human trafficking prevention program. Methodology: in conjunction with the strategic partner, Levity, an NLP analysis engine specific to this problem was created. Contribution: the intersection of AI and human rights, and how to utilize technology to combat human trafficking.
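The labeling logic described above (outcome labels applied to free-text stories, with a per-label confidence score) can be sketched as follows. This is a hypothetical illustration only: the study used Levity's no-code NLP platform, and the cue phrases and scoring below are invented stand-ins rather than the study's actual model.

```python
# Invented cue phrases per outcome label; the real study used a trained
# NLP model, not keyword matching.
CUES = {
    "Increased Awareness": ["now aware", "did not realize", "opened my eyes"],
    "Increased Knowledge": ["learned how", "know the signs", "can identify"],
    "Intended Behavior Change": ["will now", "going to report", "change how i"],
}

def label_story(story):
    """Return {label: confidence} for every label with at least one cue hit."""
    text = story.lower()
    out = {}
    for label, cues in CUES.items():
        hits = sum(cue in text for cue in cues)
        if hits:
            out[label] = hits / len(cues)  # crude stand-in for model confidence
    return out
```

Each story would then be tallied per label, with the confidence score available for thresholding.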

Keywords: AI, technology, human trafficking, prevention

Procedia PDF Downloads 47
905 Hypothalamic Para-Ventricular and Supra-Optic Nucleus Histo-Morphological Alterations in the Streptozotocin-Diabetic Gerbils (Gerbillus Gerbillus)

Authors: Soumia Hammadi, Imane Nouacer, Lamine Hamida, Younes A. Hammadi, Rachid Chaibi

Abstract:

Aims and objectives: In the present work, we investigate the impact of both acute and chronic diabetes mellitus induced by streptozotocin (STZ) on the hypothalamus of the small gerbil (Gerbillus gerbillus). To this end, we studied the histologic structure of the gerbil's hypothalamic supraoptic (NSO) and paraventricular (NPV) nuclei at two distinct time points: two days and 30 days after diabetes onset. Methods: We used 19 adult male gerbils weighing 25 to 28 g, divided into groups as follows. Group I: control gerbils (n=6) received an intraperitoneal injection of citrate buffer. Group II: STZ-diabetic gerbils (n=8) received a single intraperitoneal injection of STZ at a dose of 165 mg/kg of body weight; diabetes onset (D0) was defined as the first hyperglycemia level exceeding 2.5 g/L. This group was further divided into two subgroups: Group II-1, gerbils at the acute state of diabetes (n=8), sacrificed two days after diabetes onset, and Group II-2, gerbils at the chronic state of diabetes (n=7), sacrificed 30 days after diabetes onset. Two and 30 days after diabetes onset, gerbils had blood drawn from the retro-orbital sinus into EDTA tubes. After centrifugation at 4°C, plasma was frozen at -80°C for later measurement of cortisol, ACTH, and insulin. Afterward, the animals were decapitated; their brains were removed, weighed, fixed in aqueous Bouin's fixative, and processed and stained with toluidine blue for histo-stereological analysis. A comparison was made with control gerbils treated with citrate buffer. Results: Compared to control gerbils, at two days post diabetes onset the neuronal somata of the paraventricular (NPV) and supraoptic (NSO) nuclei expressed numerous vacuoles of various sizes; we also noted neuronal juxtaposition, and several unidentifiable vacuolated profiles were seen in the neuropil. At the same time, we observed shrunken and condensed nuclei, which appear to affect the parvocellular neurons (NPV); this leads us to suggest the presence of an apoptotic process in the early stage of diabetes. At 30 days of diabetes mellitus, the NPV showed a few neurons with a distant appearance; in addition, the magnocellular neurons in both the NPV and NSO were hypertrophied, with a rich euchromatic nucleus, a well-defined nucleolus, and a granular cytoplasm. Despite the neuronal degeneration at this stage, ACTH unexpectedly remained at a significantly high level compared to the early stage of diabetes mellitus and to control gerbils. Conclusion: The results suggest that the induction of diabetes mellitus using STZ in the small gerbil leads to alterations in the structure and morphology of the hypothalamus and to hyper-secretion of ACTH and cortisol, possibly indicating hyperactivity of the hypothalamo-pituitary-adrenal (HPA) axis during both the early and later stages of the disease. Subsequent quantitative evaluation of CRH, immunohistochemical evaluation of apoptosis, and oxidative stress assessment could corroborate these results.

Keywords: diabetes type 1, streptozotocin, small gerbil, hypothalamus, paraventricular nucleus, supraoptic nucleus

Procedia PDF Downloads 56
904 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods, of which the two most commonly used are C18 solid phase extraction (SPE) and weak cation exchanger cartridge elution. The C18 SPE method yields a high-purity final product, while elution from the weak cation exchanger cartridge is pH dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds for cases in which the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of pH problems or colloidal impurities. Materials and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for one day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL; total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge. The cartridge was washed with 10 mL of saline to carry impurities to the waste vial. The product trapped on the cartridge was eluted with 2 mL of 50% ethanol and collected into the final product vial by passing through a 0.22 μm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analyzed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: UV and radioactivity detector results from the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide syntheses in which the weak cation exchange purification technique is used as the last step. The purity of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are its two major advantages.
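Radiochemical purity from an HPLC run is conventionally computed as the product peak's share of the total detected peak area. A minimal sketch follows; the peak areas (and the resulting 96% figure) are hypothetical, not values from this study.

```python
def radiochemical_purity(peak_areas, product_peak):
    """Radiochemical purity (%) = product peak area / total peak area * 100."""
    total = sum(peak_areas.values())
    return 100.0 * peak_areas[product_peak] / total

# Hypothetical radio-detector peak areas (arbitrary units) before purification
peaks = {"177Lu-DOTATATE": 960.0, "free 177Lu": 25.0, "colloid": 15.0}
purity = radiochemical_purity(peaks, "177Lu-DOTATATE")  # 96.0 %
```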

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 146
903 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Because people are increasingly busy, the consumption of fast food is growing; the diagnosis and treatment of this disease are therefore of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist must know the stage of the tumor. The most common way to determine the tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Determining all three of these parameters requires an imaging method, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays means that the cancer risk and the absorbed dose for the patient are high, while access to PET/CT is limited by its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1,300 MR images were collected from the TCIA portal, and in the first (pre-processing) step, histogram equalization was applied to improve image quality and the images were resized to a common size. Two expert radiologists, who have worked on colon cancer cases for more than 21 years, segmented the images and extracted the tumor regions. The next steps were feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of these tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with a softmax activation function for classification. To validate the proposed method, 10-fold cross-validation was used: the data was randomly divided into three parts, training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features to determine the stage of colon cancer patients are among the study's advantages.
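The validation metrics reported above (accuracy, sensitivity, and specificity averaged over ten repetitions) can be computed from per-fold confusion counts. A minimal sketch, assuming binary (one-vs-rest) counts per fold; the fold values below are illustrative, not the study's data.

```python
def fold_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (TPR), and specificity (TNR) for one fold."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

def cross_val_summary(folds):
    """Average the three metrics over a list of (tp, fp, tn, fn) fold counts."""
    per_fold = [fold_metrics(*f) for f in folds]
    n = len(per_fold)
    return tuple(sum(m[i] for m in per_fold) / n for i in range(3))
```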

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 43
902 Assessing Solid Waste Management Practices in Port Harcourt City, Nigeria

Authors: Perpetual Onyejelem, Kenichi Matsui

Abstract:

Solid waste management is one essential area in which urban administrations can work toward environmental sustainability. Proper solid waste management (SWM) improves the environment by reducing disease and improving public health; conversely, improper SWM practices negatively impact public health and environmental sustainability. This article evaluates SWM in Port Harcourt, Nigeria, with the goal of determining current solid waste management practices and their health implications. The study used secondary data, relying on existing published literature and official documents. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and its four-stage inclusion/exclusion criteria were used as part of a systematic literature review to locate literature, in specific databases (Scopus and Google Scholar), concerning SWM practices and the implementation of solid waste management policies in Port Harcourt between 2014 and 2023, and their health effects. The results show that, despite the existence and implementation of the Rivers State Waste Management Policy and the formulation of the National Policy on Solid Waste Management, residents of Port Harcourt continued to dump waste in drainages; they were unaware of waste sorting and dumped waste haphazardly. This trend has persisted due to a lack of political commitment to the effective implementation and monitoring of policies and strategies, and a lack of training for waste collectors in SWM approaches involving sorting and separating waste. In addition, inadequate remuneration for waste collectors, the absence of community participation in policy formulation, and insufficient awareness among residents of the 3R approach are contributing factors. This has caused the emergence of diseases such as malaria, Lassa fever, and cholera in Port Harcourt, increasing healthcare expenses for locals, particularly low-income households. The study urges the government to prioritize protecting the health of its citizens by studying the methods other nations have used to address solid waste management and adopting those that work best for the region. A bottom-up strategy should be used to include locals in developing solutions; at the same time, citizens, who are always the most affected by this issue, should launch initiatives to address it and press the government to assist them where they face limitations.
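The PRISMA-style four-stage selection used above (identification, deduplication, title/abstract screening, eligibility and inclusion) can be sketched as a simple record filter. The records and predicates below are hypothetical, for illustration only.

```python
def prisma_flow(records, screen, eligible):
    """Apply PRISMA-style stages and return record counts at each stage.

    records:  list of dicts from database searches (may contain duplicates)
    screen:   predicate applied at title/abstract screening
    eligible: predicate applied at full-text eligibility
    """
    identified = len(records)
    deduped = {r["title"]: r for r in records}.values()  # drop duplicate titles
    screened = [r for r in deduped if screen(r)]
    included = [r for r in screened if eligible(r)]
    return {"identified": identified, "after_dedup": len(deduped),
            "screened_in": len(screened), "included": len(included)}
```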

Keywords: health effects, solid waste management practices, environmental pollution, Port-Harcourt

Procedia PDF Downloads 45
901 Comparative Morphometric Analysis of Ambardi and Mangari Watersheds of Kadvi and Kasari River Sub-Basins in Kolhapur District, Maharashtra, India: Using Geographical Information System (GIS)

Authors: Chandrakant Gurav, Md. Babar

Abstract:

In the present study, an attempt is made to carry out a comparative morphometric analysis of the Ambardi and Mangari watersheds of the Kadvi and Kasari river sub-basins, Kolhapur District, Maharashtra, India, using Geographical Information System (GIS) techniques. GIS is a computer-assisted information method for storing, analyzing, and displaying spatial data. Both watersheds originate from the Masai plateau of the Jotiba-Panhala hill range in Panhala Taluka of Kolhapur district. The Ambardi watershed covers an area of 42.31 sq. km on the northern hill slope, whereas the Mangari watershed covers 54.63 sq. km on the southern hill slope. Geologically, the entire study area is covered by the Deccan Basaltic Province (DBP) of late Cretaceous to early Eocene age; laterites of late Pleistocene age also occur on the hilltops. The objective of the present study is to determine the morphometric parameters of watersheds occurring on differing slopes of the same hill range. The morphometric analysis indicates that the Ambardi watershed is a 4th-order stream basin and the Mangari watershed a 5th-order one. The average bifurcation ratios of the two watersheds are 5.4 and 4.0, showing that in both watersheds the streams flow over lithology of a homogeneous nature and there is no structural control on the development of the watersheds. The drainage densities of the Ambardi and Mangari watersheds are 3.45 km/km² and 3.81 km/km², respectively, and the stream frequencies are 4.51 streams/km² and 5.97 streams/km²; the high drainage density and high stream frequency are governed by the steep slope and the low infiltration rate of the area for groundwater recharge. The texture ratios of the two watersheds are 6.6 km⁻¹ and 9.6 km⁻¹, indicating that the drainage texture is fine to very fine. The form factors, circularity ratios, and elongation ratios of the Ambardi and Mangari watersheds show that both watersheds are elongated in shape. The basin relief of the Ambardi watershed is 447 m, while that of Mangari is 456 m. The relief ratio of Ambardi is 0.0428 and that of Mangari is 0.040. The ruggedness numbers of the Ambardi and Mangari watersheds are 1.542 and 1.737, respectively; these high values indicate that both the relief and the drainage density are high.
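The quantities above follow standard morphometric formulas (drainage density and stream frequency after Horton; ruggedness number as basin relief in km multiplied by drainage density, after Strahler). A short sketch of these formulas reproduces the reported ruggedness numbers from the reported relief and drainage density; the inputs to the other two functions are illustrative.

```python
def drainage_density(total_stream_length_km, area_km2):
    """Drainage density (km per km^2): total stream length / basin area."""
    return total_stream_length_km / area_km2

def stream_frequency(num_streams, area_km2):
    """Stream frequency (streams per km^2): stream count / basin area."""
    return num_streams / area_km2

def ruggedness_number(basin_relief_m, drainage_density_km_per_km2):
    """Ruggedness number: basin relief (km) x drainage density."""
    return (basin_relief_m / 1000.0) * drainage_density_km_per_km2

# Reproduce the reported values: Ambardi (relief 447 m, Dd 3.45 km/km^2)
# and Mangari (relief 456 m, Dd 3.81 km/km^2)
ambardi = ruggedness_number(447, 3.45)   # ~1.542
mangari = ruggedness_number(456, 3.81)   # ~1.737
```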

Keywords: Ambardi, Deccan basalt, GIS, morphometry, Mangari, watershed

Procedia PDF Downloads 287
900 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis

Authors: Tefera Kebede Leyu

Abstract:

The purpose of this study was to estimate the impact of LIVES project participation on the technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by ILRI between February and April 2014 for the year 2013 (retrospective). Data on 1,905 sample households (754 intervention and 1,151 control) were analyzed using the STATA software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology, using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not fully efficient and operate below their potential frontiers, i.e., there is potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of the selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while the scores remained more or less the same for the control group. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, than the intervention counterparts. Among the explanatory variables, the study found that farmer's age (a proxy for farm experience), land certification, frequency of visits to improved seed centers, farmer's education, and row planting are important contributing factors to participation decisions and hence to the technical efficiency of farmers in the study areas. We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers' training centers, and adoption of modern farm technologies. Finally, we recommend further research within this methodological framework using a panel data set, to test whether technical efficiency increases or decreases with the length of time that farmers participate in development programs.
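Heckman's (1979) two-step correction, as referenced above, fits a probit participation equation and then adds the resulting inverse Mills ratio as a regressor in the outcome equation. A minimal numpy/scipy sketch on simulated data follows; it omits the stochastic frontier component that the study combines it with, and the data-generating values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_fit(Z, d):
    """First stage: probit MLE for the participation decision d ~ Phi(Z g)."""
    def nll(g):
        p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
        return -np.sum(d * np.log(p) + (1 - d) * np.log(1 - p))
    return minimize(nll, np.zeros(Z.shape[1]), method="BFGS").x

def heckman_two_step(y, X, Z, d):
    """Second stage: OLS of y on X plus the inverse Mills ratio, participants only."""
    g = probit_fit(Z, d)
    index = Z @ g
    mills = norm.pdf(index) / norm.cdf(index)      # inverse Mills ratio
    sel = d == 1
    Xc = np.column_stack([X[sel], mills[sel]])
    beta, *_ = np.linalg.lstsq(Xc, y[sel], rcond=None)
    return beta                                     # last coefficient: selection term

# Simulated illustration (not the study's data): errors u (selection) and
# e (outcome) are correlated, so naive OLS on participants would be biased.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                              # variable driving participation
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n).T
d = (0.5 * z + u > 0).astype(float)                 # participation indicator
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + e                               # outcome; true slope is 2.0
Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
beta = heckman_two_step(y, X, Z, d)                 # approx. [1.0, 2.0, 0.7]
```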

Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman two-step

Procedia PDF Downloads 52
899 The Effect of Physical Guidance on Learning a Tracking Task in Children with Cerebral Palsy

Authors: Elham Azimzadeh, Hamidollah Hassanlouei, Hadi Nobari, Georgian Badicu, Jorge Pérez-Gómez, Luca Paolo Ardigò

Abstract:

Children with cerebral palsy (CP) have weak physical abilities, and their limitations may affect the performance of everyday motor activities. One of the most important and common debilitating factors in CP is malfunction of the upper extremities in performing motor skills, and there is strong evidence that task-specific training may improve general upper limb function in this population. Augmented feedback enhances the acquisition and learning of a motor task, and practice conditions may alter task difficulty; e.g., a reduced frequency of physical guidance (PG) could make learning a motor task more challenging for this population. The purpose of this study was therefore to investigate the effect of PG on learning a tracking task in children with CP. Twenty-five independently ambulant children with spastic hemiplegic CP, aged 7-15 years, were randomly assigned to five groups. After the pre-test, the experimental groups participated in an intervention for eight sessions of 12 trials each. The 0% PG group received no PG; the 25% PG group received PG on three trials; the 50% PG group on six trials; the 75% PG group on nine trials; and the 100% PG group on all 12 trials. PG consisted of placing the experimenter's hand around the child's hand, guiding them to stay on track and complete the task. Learning was inferred from acquisition and delayed retention tests. The tests involved two blocks of 12 trials of the tracking task performed by all participants without any PG. Participants were asked to make the movement as accurate as possible (i.e., with fewer errors), and the total number of touches (errors) across the 24 trials was taken as the test score. The results showed that a higher frequency of PG led to more accurate performance during the practice phase; however, the group that received 75% PG performed significantly better than the other groups in the retention phase. It is concluded that the frequency of PG plays a critical role in learning a tracking task in children with CP, and that this population may benefit from an optimal level of PG providing an appropriate amount of information, consistent with the challenge point framework (CPF), which states that too much or too little information will retard the learning of a motor skill. An optimal level of PG may therefore help these children identify appropriate motor skill patterns using the extrinsic information they receive through PG, and improve learning by activating intrinsic feedback mechanisms.

Keywords: cerebral palsy, challenge point framework, motor learning, physical guidance, tracking task

Procedia PDF Downloads 56
898 Improving Engagement: Dental Veneers, a Qualitative Analysis of Posts on Instagram

Authors: Matthew Sedgwick

Abstract:

Introduction: Social media continues to grow in popularity, and Instagram is one of the largest platforms available. It provides an invaluable means of communication between health care professionals and patients, and both patients and dentists can benefit from seeing clinical cases posted by other members of the profession: such posts can prompt discussion about how an outcome was achieved and showcase what is possible with the right techniques and planning. This study aimed to identify what people were posting about the topic 'veneers', inform health care professionals as to what content had the most engagement, and make recommendations for improving the quality of social media posts. Design: 150 consecutive posts for the search term 'veneers' were analyzed retrospectively between 21 October 2021 and 31 October 2021. Non-English-language posts, duplicated posts, and posts not about dental veneers were excluded. After the exclusions were applied, 80 posts were included in the study for analysis. The content of the posts was analyzed and coded, and the main themes were identified. The numbers of comments, likes, and views were also recorded for each post. Results: The themes were: before and after treatment, cost, dental training courses, treatment process, and trial smiles. Dentists were the most common posters of content (82.5%), and it was interesting to note that no patients in this sample posted about their treatment. The main type of media was photographs (93.75%) rather than video (6.25%). Videos had an average of 45,541 views and more comments and likes than the average for photographs. The average numbers of comments and likes per post were 20.88 and 761.58, respectively. Conclusion: Before and after photographs were the most common finding, as this is how dentists showcase their work. The study showed that videos showing the treatment process had more engagement than photographs. Dentists should consider making video posts showing the patient journey, including before and after veneer treatment, as this can result in more potential patients and colleagues viewing the content. Video content could also help dentists distinguish their posts from others, as it can be used across other platforms, such as TikTok or Facebook, reaching a wider audience. More informative posts are required about how the results shown are achieved, including potential costs; this would increase transparency regarding this treatment method, including the financial and potential biological cost to teeth. As a result, it would improve patient understanding and become an invaluable adjunct to informed consent.
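The engagement comparison above (average likes and comments by media type) amounts to a simple grouped average. A sketch with hypothetical post records; the numbers below are invented, not the study's data.

```python
# Hypothetical coded post records; the study coded 80 posts and compared
# average engagement for videos vs. photographs.
posts = [
    {"media": "photo", "likes": 700, "comments": 18},
    {"media": "photo", "likes": 820, "comments": 22},
    {"media": "video", "likes": 1500, "comments": 60},
]

def mean_engagement(posts, media_type):
    """Average likes and comments over posts of the given media type."""
    sel = [p for p in posts if p["media"] == media_type]
    n = len(sel)
    return {"likes": sum(p["likes"] for p in sel) / n,
            "comments": sum(p["comments"] for p in sel) / n}

photo_avg = mean_engagement(posts, "photo")   # {'likes': 760.0, 'comments': 20.0}
```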

Keywords: content analysis, dental veneers, Instagram, social media

Procedia PDF Downloads 123
897 Wave State of Self: Findings of Synchronistic Patterns in the Collective Unconscious

Authors: R. Dimitri Halley

Abstract:

The research within Jungian psychology presented here concerns the wave state of Self. What has been discovered via shared dreaming, through independently correlating dreams across dreamers, lies beyond the Self stage, in the deepest layer, the wave state of Self: the very quantum ocean the Self archetype is embedded in. A quantum wave, a rhyming of meaning constituting synergy across several dreamers, was discovered in dreams and in extensive shared dream work with small groups at a post-therapy stage. Within the format of shared dreaming, we find synergy patterns beyond what Jung called the Self archetype. Jung led us up to the phase of individuation and passed the baton to von Franz to work out the next, synchronistic stage, proposed here as the finding of the quantum patterns making up the wave state of Self. These enfolded synchronistic patterns have been found in the group format of shared dreaming among individuals approximating individuation, and their unfolding is carried by belief and faith. The reason for this format and operating system is that beyond therapy, in living reality, we find no science, no thinking or even awareness in the therapeutic sense, but rather a state of mental processing resembling that of a spiritual attitude. Thinking as such is linear and cannot contain the deepest layer of Self, the quantum core of the human being. Self-reflection is the container for the process at the wave state of Self. Observation locks us into an outside-in reactive flow from a first-person perspective, and hence toward the surface we see to believe, whereas here the direction of focus shifts to inside-out/intrinsic. The operating system, or language, at the wave level of Self is thus belief and synchronicity. Belief has until now been almost the sole province of organized religions, but was viewed by Jung as an inherent property of the process of individuation. At the shared dreaming stage, the synchronistic patterns form a larger story constituting a deep connectivity unfolding around individual Selves. The dreams of independent dreamers form larger patterns that come together like puzzle pieces forming a larger story; in this sense, this group-work level builds on Jung as a post-individuation collective stage. Shared dream correlations will be presented, illustrating a larger story in terms of trails of shared synchronicity.

Keywords: belief, shared dreaming, synchronistic patterns, wave state of self

Procedia PDF Downloads 177
896 Community Engagement in Child Centered Space at Disaster Events: A Case Story of Sri Lanka

Authors: Wasantha Pushpakumara Hitihami Mudiyanselage

Abstract:

In the recent past, Sri Lanka has been highly vulnerable to recurring climate shocks that severely impact food security, cause the loss of human and animal lives, destroy human settlements, displace people, and damage property. Hence, the Government of Sri Lanka took important steps toward strengthening the legal and institutional arrangements for disaster risk management in the country in May 2005. Puttalam administrative district is one of the disaster-prone districts in Sri Lanka, constantly facing the devastating consequences of increasing natural disasters every year; disaster risk management is therefore a timely intervention in the area to minimize the adverse impacts of disasters. The few functioning disaster risk management networks do not take children's specific needs and vulnerabilities during emergencies into account. The most affected children and their families were evacuated to government schools and temples, and it was observed that children were left roaming around while their parents were busy queuing up for relief goods and other priorities. In this sense, VOICE understands that the community has a vital role to play in facing the challenges of disaster management in the area. During and after disasters, some children were observed to have psychological disorders, which could negatively affect their well-being. A child-friendly space during emergencies is therefore essential in the area to turn away the negative impacts of such hazards. VOICE, with the support of national and international communities, has established safer places, Child Centered Spaces (CCS), for children and their families in emergencies. Village religious venues and schools were selected and equipped with the necessary materials for children's use in an emergency, including tools, stationery, and play materials that could not easily be found in the surrounding environment. Village animators, youth, and elders were given comprehensive training on disaster management and their role in the CCS. They facilitated keeping children free of fear and stress during the flooding that occurred in 2015, and they were able to improve their skills in working with children. During the 2016 flooding, government agencies engaged these village animators at an early stage to make all disaster-related recovery actions productive and efficient. This mechanism is sustained at the village level and can be used in future disaster events.

Keywords: child centered space, impacts, psychological disorders, village animators

Procedia PDF Downloads 117
895 Risks of Investment in the Development of IT Personnel

Authors: Oksana Domkina

Abstract:

According to modern economic theory, human capital has become one of the main production factors and one of the most promising directions of investment, as such investment can yield high and long-term economic and social returns. The information technology (IT) sector is representative of this new economy and is the most dependent on human capital as its main competitive factor. For this sector, the question is not whether to invest in the development of personnel, but what the most effective ways of doing so are and who should pay for the education: the worker, the company, or the government. In this paper, we examine the IT sector, describe the labor market for IT workers and its development, and analyze the risks that IT companies may face if they invest in the development of their workers, along with the factors that influence those risks. The main difficulty in quantitatively estimating and forecasting the risk of a company's investment in human capital is the human factor. Human behavior is often unpredictable and complex, so it requires specific approaches and methods of assessment. To build a comprehensive method of estimating this risk while accounting for the human factor, we used the analytic hierarchy process (AHP), originally developed by Thomas Saaty. We separated the risk factors into three main groups: those related to the worker, those related to the company, and external factors. To obtain data for our research, we conducted a survey among the HR departments of Ukrainian IT companies and used them as experts for the AHP method. The results showed that IT companies mostly invest in the development of their workers, although several hire only already-qualified personnel. According to the results, the most significant risks are the risk of ineffective training and the risk of non-investment, both of which are related to the firm.
The analysis of risk factors related to the employee showed that the factors of personal reasons, motivation, and work performance have almost equal weights of importance. Among the internal factors of the company, compensation and benefits play a large role, followed by the factors of interesting projects, team, and career opportunities. As for the external environment, one of the most dangerous risk factors is competitor activity, while the political and economic situation also carries a relatively high weight, which is readily explained by the severe crisis in Ukraine during 2014-2015. The presented method takes into consideration all the main factors that affect the risk of a company's investment in human capital. This provides a base for further research in this field and allows for the creation of a practical framework that HR departments can use when making decisions about personnel development strategy and individual employees' development plans.
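As an illustration of the AHP step described above, the following minimal sketch (not the authors' implementation) turns a hypothetical pairwise-comparison matrix for the three risk groups into priority weights using the standard row geometric-mean approximation; the comparison values are invented for illustration.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric-mean method."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical expert judgements comparing the three risk groups pairwise
# (worker-related, company-related, external) on Saaty's 1-9 scale; values invented.
pairwise = [
    [1.0, 1 / 2, 3.0],
    [2.0, 1.0, 4.0],
    [1 / 3, 1 / 4, 1.0],
]
weights = ahp_weights(pairwise)  # here, company-related risks get the largest weight
```

In a full AHP application, a consistency ratio would also be computed to check that the expert judgements are not self-contradictory.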

Keywords: risks, personnel development, investment in development, factors of risk, risk of investment in development, IT, analytic hierarchy process, AHP

Procedia PDF Downloads 286
894 Clinicians' and Nurses' Documentation Practices in Palliative and Hospice Care: A Mixed Methods Study Providing Evidence for Quality Improvement at Mobile Hospice Mbarara, Uganda

Authors: G. Natuhwera, M. Rabwoni, P. Ellis, A. Merriman

Abstract:

Aims: Health workers are likely to document patients’ care inaccurately, especially when using new or revised case tools, and this can negatively impact patient care. This study set out to (1) assess nurses’ and clinicians’ documentation practices when using a new patients’ continuation case sheet (PCCS) and (2) explore nurses’ and clinicians’ experiences of documenting patients’ information in the new PCCS. The purpose of introducing the PCCS was to improve continuity of care for patients attending clinics at which they were unlikely to see the same clinician or nurse consistently. Methods: This was a mixed methods study. The cross-sectional inquiry retrospectively reviewed 100 case notes of active patients on the hospice and palliative care programme. Data were collected using a structured questionnaire with constructs formulated from the new PCCS under study. The qualitative element consisted of face-to-face, audio-recorded, open-ended interviews with a purposive sample of one palliative care clinician and four palliative care nurse specialists. Thematic analysis was used. Results: Missing patient biographic information was prevalent at 5-10%. Spiritual and psychosocial issues were not documented in 42.6% of cases, and vital signs in 49.2%. The poorest documentation practices were observed in the past-medical-history section of the PCCS, at 40-63%. Four themes emerged from the interviews with clinicians and nurses: (1) what remains unclear and challenges, (2) comparing the past with the present, (3) experiential thoughts, and (4) transition and adapting to change. Conclusions: The PCCS appears to be a comprehensive and simple tool for documenting patients’ information at subsequent visits. Its comprehensiveness and utility, however, appear to be limited by the failure to train staff in its use prior to its introduction.
The authors find the PCCS comprehensive and suitable for capturing patients’ information and recommend that it be adopted in other palliative and hospice care settings, provided suitable introductory training accompanies its introduction. Otherwise, the reliability and validity of the patient information collected with the PCCS can be significantly reduced if some sections remain unclear to clinicians and nurses. The study identified clinician- and nurse-related pitfalls in the documentation of patient care. Clinicians and nurses need to prioritise accurate and complete documentation of patient care in the PCCS for quality care provision. This study should be extended to other sites using similar tools to ensure representative and generalisable findings.

Keywords: documentation, information case sheet, palliative care, quality improvement

Procedia PDF Downloads 128
893 Responsibility of States in Air Traffic Management: Need for International Unification

Authors: Nandini Paliwal

Abstract:

Since the aviation industry is one of the fastest-growing sectors of the world economy, states depend on air transport to maintain or stimulate economic growth. It significantly promotes and contributes to the economic well-being of every nation and of the world in general. The continuous and rapid growth of civil aviation is inevitably leading to congested skies, flight delays and, most alarmingly, a decrease in the safety of air navigation facilities. Safety is one of the most important concerns of the aviation industry and has been unanimously recognised across the whole world. The available capacity of the air navigation system is not sufficient for the demand being generated. Forecasts indicate that the current growth in air traffic has the potential to delay 20% of flights by 2020 unless changes are made to the current system. Therefore, a safe, orderly and expeditious air navigation system is needed at the national and global levels, which requires the implementation of an air traffic management (hereinafter ‘ATM’) system to ensure an optimum flow of air traffic by utilising and enhancing the capabilities provided by technical advances. The objective of this paper is to analyse the applicability of national regulations to liability arising out of air traffic management services and whether the current legal regime sufficiently covers multilateral agreements, including the Single European Sky regulations. In doing so, the paper will examine the international framework, mainly Article 28 of the Chicago Convention and its relevant annexes, to determine the responsibility of states for providing air navigation services. The paper will then discuss the difference between the concepts of responsibility and liability under the air law regime and how states might claim sovereign immunity for air traffic management functions.
Thereafter, the paper will focus on the cross border agreements including the bilateral and multilateral agreements. In the end, the paper will address the scheme of Single European Sky and the need for an international convention dealing with the liability of air navigation service providers. The paper will conclude with some suggestions for unification of the laws at an international level dealing with liability of air navigation service providers and the requirement of enhanced co-operation among states in order to keep pace with technological advances.

Keywords: air traffic management, safety, single European sky, co-operation

Procedia PDF Downloads 152
892 Optimal Placement of the Unified Power Controller to Improve the Power System Restoration

Authors: Mohammad Reza Esmaili

Abstract:

One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of the system operators is the reduction of the standing phase angle (SPA) between the endpoints of the two islands. To this end, the system operators perform various actions and maneuvers so that the synchronization of the subsystems is successfully carried out and the system finally reaches acceptable stability. The most common of these actions include load control, generation control and, in some cases, changing the network topology. Although these maneuvers are simple and common, a weak network and extreme load changes make the resulting restoration slow. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these devices can reduce the SPA between two subsystems with greater speed and accuracy, so the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensator that changes the transmission line power and properly adjusts the phase angle, is the option proposed to realize the subject of this research. With the optimal placement of a UPFC in a power system, in addition to improving the normal operating conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. This paper therefore provides an optimal structure that coordinates three problems, namely improving the division into subsystems, reducing the SPA, and optimal power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The objective functions proposed in this paper are maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system.
Since simultaneous optimization of the proposed objective functions may create contradictions, the problem is formulated as a non-linear multi-objective problem and solved using the Pareto optimization method. The innovative technique proposed to carry out the optimization is the water cycle algorithm (WCA). The proposed method is evaluated on the IEEE 39-bus power system.
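The Pareto step described above can be illustrated with a minimal sketch: a non-dominated filter over candidate solutions, assuming all objectives are minimised. The (SPA, losses) pairs below are invented, and the real problem also includes the subsystem-quality objective and the WCA search; this shows only the dominance test.

```python
def pareto_front(points):
    """Filter objective vectors down to the non-dominated (Pareto) set,
    assuming every objective is minimised."""
    def dominates(a, b):
        # a dominates b if it is no worse in all objectives and better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (SPA in degrees, losses in MW) pairs for candidate UPFC placements;
# values invented for illustration.
candidates = [(10.0, 5.0), (8.0, 6.0), (12.0, 4.5), (9.0, 7.0)]
front = pareto_front(candidates)  # (9.0, 7.0) is dominated by (8.0, 6.0)
```

A multi-objective optimiser such as the WCA would generate and refine such candidate sets rather than enumerating them.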

Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, Pareto

Procedia PDF Downloads 46
891 Influence of Morphology and Coatings in the Tribological Behavior of a Texturised Deterministic Surface by Photochemical Machining

Authors: Juan C. Sanchez, Jose L. Endrino, Alejandro Toro, Hugo A. Estupinan, Glenn Leighton

Abstract:

For years, the reduction of friction and wear has been a matter of interest in the engineering field. Several solutions have been proposed to address this issue, including the use of lubricants and coatings to reduce frictional forces and to increase surface wear resistance. Alternatively, texturing processes have been used on a wide variety of materials, in many cases inspired by natural surfaces. Nature has shown how species adapt to their environment, and engineers try to understand natural surfaces for particular applications by analysing outstanding species such as the gecko for high adhesion, lotus leaves for hydrophobicity, sharks for reduced flow resistance, and snakes for an optimised frictional response. Texturised surfaces have shown superior frictional performance in many situations, and control of this behaviour depends greatly on the manufacturing process. The focus of this work is to evaluate the tribological behaviour of AISI 52100 steel samples texturised by photochemical machining (PCM). The surface texture was inspired by several features of snakeskin, such as the aspect ratio of the fibrils and the mean fibril spacing. Two coatings were applied to the texturised surface, namely diamond-like carbon (DLC) and molybdenum disulphide (MoS₂), and their tribological behaviour in pin-on-disk tests was compared with that of the non-texturised and uncoated surfaces. The samples were characterised using a stereoscopic microscope (SM), scanning electron microscope (SEM), optical microscope (OM), profilometer, Raman spectrometer (RS) and X-ray diffractometer (XRD). The coefficient of friction (COF) measured in the pin-on-disk tests correlated with the sliding direction (relative to the texture features) and with the aspect ratio of the texture features. Regarding the coated surfaces, the DLC and MoS₂ coatings performed well in terms of wear rate and coefficient of friction compared with the uncoated, non-texturised surfaces.
For the uncoated surfaces, on the other hand, the texture influenced the tribological performance relative to the non-texturised surface.

Keywords: coating, coefficient of friction, deterministic surface, photochemical machining

Procedia PDF Downloads 132
890 Assessing Brain Targeting Efficiency of Ionisable Lipid Nanoparticles Encapsulating Cas9 mRNA/sgGFP Following Different Routes of Administration in Mice

Authors: Meiling Yu, Nadia Rouatbi, Khuloud T. Al-Jamal

Abstract:

Background: Treatment of neurological disorders with modern medical and surgical approaches remains difficult. Gene therapy, which allows the delivery of genetic material encoding potentially therapeutic molecules, represents an attractive option. Treating brain diseases with gene therapy requires the gene-editing tool to be delivered efficiently to the central nervous system. In this study, we explored the efficiency of different delivery routes, namely intravenous (i.v.), intra-cranial (i.c.), and intra-nasal (i.n.), for delivering stable nucleic acid-lipid particles (SNALPs) containing the gene-editing tools, namely Cas9 mRNA and an sgRNA against GFP as a reporter protein. We hypothesise that SNALPs can reach the brain and perform gene-editing to different extents depending on the administration route. Intranasal (i.n.) administration offers an attractive and non-invasive way to access the brain, circumventing the blood–brain barrier. Successful delivery of gene-editing tools to the brain offers a great opportunity for therapeutic target validation and for the delivery of nucleic acid therapeutics, improving treatment options for a range of neurodegenerative diseases. In this study, we utilised Rosa26-Cas9 knock-in mice expressing GFP to study the brain distribution and gene-editing efficiency of SNALPs after i.v., i.c. and i.n. administration. Methods: A single guide RNA (sgRNA) against GFP was designed and validated by an in vitro nuclease assay. SNALPs were formulated and characterised using dynamic light scattering. The encapsulation efficiency of nucleic acids (NA) was measured by the RiboGreen™ assay. SNALPs were incubated in serum to assess their ability to protect NA from degradation. Rosa26-Cas9 knock-in mice were administered SNALPs i.v., i.n., or i.c. to test in vivo gene-editing (GFP knockout) efficiency. SNALPs were given as three doses of 0.64 mg/kg sgGFP following i.v. and i.n. administration or as a single dose of 0.25 mg/kg sgGFP following i.c. administration.
Knockout efficiency was assessed after seven days using Sanger sequencing and Inference of CRISPR Edits (ICE) analysis. In vivo biodistribution of DiR-labelled SNALPs (SNALPs-DiR) was assessed 24 h post-administration using an IVIS Lumina Series III. Results: The serum-stable SNALPs produced were 130-140 nm in diameter with ~90% nucleic acid loading efficiency. SNALPs could reach and remain in the brain for up to 24 h following i.v., i.n. and i.c. administration. Decreasing GFP expression (around 50% after i.v. and i.c. and 20% following i.n.) was confirmed by optical imaging. Despite the small number of mice used, ICE analysis confirmed GFP knockout in mouse brains. Additional studies with larger numbers of mice are currently taking place. Conclusion: The results confirm efficient gene knockout by SNALPs in GFP-expressing Rosa26-Cas9 knock-in mice following the different administration routes, in the order i.v. = i.c. > i.n. Each administration route has its pros and cons. The next stages of the project involve assessing gene-editing efficiency in wild-type mice and replacing GFP as a model target with therapeutic target genes implicated in motor neuron disease pathology.

Keywords: CRISPR, nanoparticles, brain diseases, administration routes

Procedia PDF Downloads 79
889 Ultra-deformable Drug-free Sequessome™ Vesicles (TDT 064) for the Treatment of Joint Pain Following Exercise: A Case Report and Clinical Data

Authors: Joe Collins, Matthias Rother

Abstract:

Background: Oral non-steroidal anti-inflammatory drugs (NSAIDs) are widely used for the relief of joint pain during and post-exercise. However, oral NSAIDs increase the risk of systemic side effects, even in healthy individuals, and retard recovery from muscle soreness. TDT 064 (Flexiseq®), a topical formulation containing ultra-deformable drug-free Sequessome™ vesicles, has demonstrated efficacy equivalent to oral celecoxib in reducing osteoarthritis-associated joint pain and stiffness, and it does not cause NSAID-related adverse effects. We describe clinical study data and a case report on the effectiveness of TDT 064 in reducing joint pain after exercise. Methods: Participants with a pain score ≥3 (10-point scale) 12–16 hours post-exercise were randomized to receive TDT 064 plus oral placebo, TDT 064 plus oral ketoprofen, or ketoprofen in ultra-deformable phospholipid vesicles plus oral placebo. Results: In the 168 study participants, pain scores in the 7 days post-exercise were significantly higher with oral ketoprofen plus TDT 064 than with TDT 064 plus placebo (P = 0.0240), and recovery from muscle soreness took significantly longer (P = 0.0262). There was a low incidence of adverse events. These data are supported by clinical experience. A 24-year-old male professional rugby player suffered a traumatic Lisfranc fracture in March 2014 and underwent operative reconstruction. He had no relevant medical history and was not receiving concomitant medications; he had undergone anterior cruciate ligament reconstruction in 2008. The patient reported restricted training due to pain (score 7/10), stiffness (score 9/10) and poor function, as well as pain when changing direction and when running on consecutive days. In July 2014, he started using TDT 064 twice daily at the recommended dose.
In November 2014, he noted reduced pain on running (score 2-3/10), decreased morning stiffness (score 4/10) and improved joint mobility, and he was able to return to competitive rugby without restrictions. No side effects of TDT 064 were reported. Conclusions: TDT 064 shows efficacy against exercise- and injury-induced joint pain, as well as joint pain associated with osteoarthritis. Unlike an oral NSAID, it does not retard recovery from muscle soreness after exercise, making it an alternative approach for the treatment of joint pain during and post-exercise.

Keywords: exercise, joint pain, TDT 064, phospholipid vesicles

Procedia PDF Downloads 470
888 Development of a Wound Dressing Material Based on Microbial Polyhydroxybutyrate Electrospun Microfibers Containing Curcumin

Authors: Ariel Vilchez, Francisca Acevedo, Rodrigo Navia

Abstract:

The wound healing process can be accelerated and improved by the action of antioxidants such as curcumin (Cur) on the tissues; however, curcumin taken through the digestive system is not effective enough to exploit its benefits. Electrospinning presents an alternative for carrying curcumin directly to wounds, and polyhydroxybutyrate (PHB) is proposed as the matrix for loading curcumin owing to its biodegradable and biocompatible properties. PHB, one of some 150 identified types of polyhydroxyalkanoates (PHAs), is a natural thermoplastic polyester produced by microbial fermentation. The objective is to develop electrospun bacterial PHB-based microfibers containing curcumin for possible biomedical applications. Commercial PHB was dissolved in chloroform:dimethylformamide (4:1) to a final concentration of 7% m/V. Curcumin was added to the polymeric solution at 1% and 7% m/m with respect to PHB. The electrospinning equipment (NEU-BM, China) with a rotary collector was used to obtain Cur-PHB fibers at different voltages and polymer solution flow rates, with a distance of 20 cm from the needle to the collector. Scanning electron microscopy (SEM) was used to determine the diameter and morphology of the obtained fibers. Thermal stability was obtained from thermogravimetric analysis (TGA), and Fourier transform infrared spectroscopy (FT-IR) was carried out to study the chemical bonds and interactions. A preliminary in vitro curcumin release into phosphate buffered saline (PBS, pH = 7.4) was measured by spectrophotometry. According to the FTIR spectra, the PHB fibers retained the chemical composition of the original material (powder); their diameters fluctuated between 0.761 ± 0.123 and 2.157 ± 0.882 μm, with varying morphological quality.
The best fibers in terms of quality and diameter were samples 2 and 6, obtained at 0-10 kV and 0.5 mL/h and at 0-10 kV and 1.5 mL/h, respectively. The melting temperature was close to 178 °C, in agreement with the literature. The crystallinity of the fibers decreases as the curcumin concentration increases over the studied interval. The curcumin release reached about 14% at 37 °C after 54 h in PBS and fitted a quasi-Fickian diffusion model. We conclude that it is possible to load curcumin into PHB to obtain continuous, homogeneous, solvent-free microfibers by electrospinning. Between 0% and 7% curcumin, fiber crystallinity decreases as the curcumin concentration increases; thus, curcumin enhances the flexibility of the obtained material. HPLC should be used in further analyses of curcumin release.
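A quasi-Fickian release profile is commonly identified by fitting the Korsmeyer-Peppas power law M_t/M_inf = k·t^n and checking that the exponent n falls below roughly 0.45-0.5 (the threshold depends on geometry). The sketch below, with invented release data rather than the study's measurements, shows such a fit by least squares in log space.

```python
import math

def fit_power_law(times, fractions):
    """Least-squares fit of M_t/M_inf = k * t**n in log-log space."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    # slope of the log-log regression is the release exponent n
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - n * mx)
    return k, n

# Illustrative release data (hours, cumulative fraction released); invented values
# ending near the reported ~14% at 54 h.
t = [6.0, 12.0, 24.0, 48.0, 54.0]
released = [0.055, 0.075, 0.10, 0.13, 0.14]
k, n = fit_power_law(t, released)
# An exponent n below ~0.45-0.5 is conventionally read as quasi-Fickian diffusion.
```

Real analyses would also report the goodness of fit and compare competing release models.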

Keywords: antioxidant, curcumin, polyhydroxybutyrate, wound healing

Procedia PDF Downloads 112
887 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries involving microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially, making it impossible for researchers and practitioners to manually extract and relate information from different written resources. The current work therefore proposes and discusses the use of text mining techniques for extracting information from a biofilm literature corpus containing 34,306 documents. Annotated material for biomedical literature is very difficult and expensive to obtain because the literature is unstructured, i.e. free text. We therefore adopted an unsupervised approach, in which no annotated training data are necessary, and developed a system that classifies the text into growth and development, drug effects, radiation effects, classification, and physiology of biofilms. A two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. We applied the unsupervised approach, the machine learning task of inferring a function that describes hidden structure in 'unlabeled' data, to the extracted datasets to develop classifiers, using the WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467 packages, which automatically classify the text into the mentioned classes.
The developed classifiers were tested on a large set of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited to semi-automatic labeling of the extracted relations. All the information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be valuable for researchers in biofilm research, making their searches easy and efficient, as the keywords and genes can be mapped directly to the documents used for database development.
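A minimal sketch of unsupervised, lexicon-driven text labeling in the spirit described above (not the authors' RapidMiner/pubmed.mineR pipeline): each document is assigned to whichever category's seed-term vector it is closest to by cosine similarity, with no labeled training data. Category names and seed terms here are illustrative inventions.

```python
import math
from collections import Counter

# Seed terms per target class; illustrative only, not the paper's metathesaurus keywords.
CATEGORIES = {
    "drug effects": ["antibiotic", "drug", "resistance", "treatment"],
    "physiology": ["matrix", "polymeric", "cell", "hydrated"],
}

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify(text):
    """Assign the label whose seed-term vector is closest to the document."""
    doc = Counter(text.lower().split())
    return max(CATEGORIES, key=lambda label: cosine(doc, Counter(CATEGORIES[label])))

label = classify("biofilms show strong resistance to antibiotic treatment")
```

A production system would add tokenisation, stemming, and TF-IDF weighting, but the core idea of matching documents against category vocabularies is the same.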

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 152
886 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the large number of parameters and large datasets needed in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time lag, were considered as the primary parameter set. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results with the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was reasonable for the three study catchments.
The main benefit of ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff prediction, so the associated uncertainty in the predictions can be obtained; MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
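Rejection sampling, the simplest variant of the ABC technique described above, can be sketched as follows: draw parameters from the prior, run the simulator, and keep only draws whose output lies close to the observed data. The one-parameter linear "runoff model" and all numbers below are invented stand-ins, not the paper's four-parameter time-area model.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=5000):
    """Rejection ABC: keep prior draws whose simulated output lies within eps of the data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy stand-in for the rainfall-runoff model: runoff = theta * rainfall, with theta
# playing the role of a single "reduction factor" (the real model has four parameters).
random.seed(1)
rainfall, observed_runoff = 50.0, 30.0
posterior = abc_rejection(
    observed=observed_runoff,
    simulate=lambda th: th * rainfall + random.gauss(0.0, 1.0),  # noisy simulator
    prior_sample=lambda: random.uniform(0.0, 1.0),               # uniform prior
    distance=lambda sim, obs: abs(sim - obs),
    eps=1.0,
)
# The accepted draws approximate the posterior; their spread quantifies the
# predictive uncertainty that a point-estimate calibration cannot provide.
```

This is exactly the property contrasted with MIKE URBAN above: the output is a distribution over the parameter, not a single value.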

Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 287
885 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser

Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt

Abstract:

This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady computational fluid dynamics (CFD) simulations were performed using the Reynolds-averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation, which measured brine plume characteristics under similar conditions. The CFD models used an iterative mesh in a domain 400 m long and 200 m wide, with an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column, and the effect of this vertical variability on the simulated discharge outcomes was examined. Seafloor slope was also accommodated in the model. Ambient currents varied predominantly in the longshore direction, perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with the empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow on jet trajectory, dilution and seafloor concentration is significant. The effect of the ambient flow structure and its subsequent influence on jet dynamics is discussed, along with the implications of using these different simulation approaches to inform regulatory decisions.

Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet

Procedia PDF Downloads 199
884 Electrospray Plume Characterisation of a Single Source Cone-Jet for Micro-Electronic Cooling

Authors: M. J. Gibbons, A. J. Robinson

Abstract:

Increasing expectations that small form factor electronics be more compact while increasing performance have driven conventional cooling technologies to a thermal management threshold. An emerging solution to this problem is electrospray (ES) cooling. ES cooling enables two-phase cooling by utilising Coulomb forces for energy-efficient fluid atomization. The generated charged droplets are accelerated to the grounded target surface by the applied electric field and the surrounding gravitational force. While in transit, the like-charged droplets disperse the plume and inhibit droplet coalescence. If the electric field is increased in the cone-jet regime, a subsequent increase in the plume spray angle has been shown. Droplet segregation in the spray plume has been observed, with primary droplets in the plume core and satellite droplets positioned on the periphery of the plume. This segregation is facilitated by inertial and electrostatic effects, a result corroborated by numerous authors. These satellite droplets are usually more densely charged and move at a lower velocity relative to the spray core due to the radial decay of the electric field. Previous experimental research by Gomez and Tang has shown that the number of droplets deposited on the periphery can be up to twice that of the spray core, a result substantiated by numerical models derived by Wilhelm et al., Oh et al. and Yang et al. Yang et al. showed with their numerical model that varying the extractor potential varies the dispersion radius of the plume proportionally. This research aims to investigate this dispersion density and the role it plays in the local heat transfer coefficient profile (h) of ES cooling. This will be carried out for different extractor-to-target separation heights (H2), working fluid flow rates (Q), and applied extractor potentials (V2).
The plume dispersion will be recorded by spraying a 25 µm thick, Joule-heated steel foil and recording the thermal footprint of the ES plume with a FLIR A-40 thermal imaging camera. The recorded results will then be analysed by in-house MATLAB code.

Keywords: electronic cooling, electrospray, electrospray plume dispersion, spray cooling

Procedia PDF Downloads 378
883 Permeable Reactive Pavement for Controlling the Transport of Benzene, Toluene, Ethyl-Benzene, and Xylene (BTEX) Contaminants

Authors: Shengyi Huang, Chenju Liang

Abstract:

Volatile organic compounds such as benzene, toluene, ethyl-benzene, and xylene (BTEX) are common contaminants in environment, which could come from asphalt concrete or exhaust emissions of vehicles. The BTEX may invade to the subsurface environment via wet and dry atmospheric depositions. If there aren’t available ways for controlling contaminants’ fate and transport, they would extensively harm natural environment. In the 1st phase of this study, various adsorbents were screened for a suitable one to be an additive in the porous asphalt mixture. In the 2nd phase, addition of the selected adsorbent was incorporated with the design of porous asphalt concrete (PAC) to produce the permeable reactive pavement (PRP), which was subsequently tested for the potential of adsorbing aqueous BTEX as compared to the PAC, in the 3rd phase. The PRP was prepared according to the following steps: firstly, the suitable adsorbent was chosen based on the analytical results of specific surface area analysis, thermal-gravimetric analysis, adsorption kinetics and isotherms, and thermal dynamics analysis; secondly, the materials of coarse aggregate, fine aggregate, filler, asphalt, and fiber were tested in order to meet regulated specifications (e.g., water adsorption, soundness, viscosity etc.) for preparing the PRP; thirdly, the amount of adsorbent additive was determined in the PRP; fourthly, the prepared PAC and PRP were examined for their physical properties (e.g., abrasion loss, drain-down loss, Marshall stability, Marshall flow, dynamic stability etc.). As a result of comparison between PRP and PAC, the PRP showed better physical performance than the traditional PAC. At last, the Marshall Specimen column tests were conducted to explore the adsorption capacities of PAC and PRPs. The BTEX adsorption capacities of PRPs are higher than those obtained from traditional PAC. 
In summary, the PRPs showed superior physical performance and adsorption capacities, demonstrating the potential of PRP to replace PAC for better control of the transport of non-point source pollutants.
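The abstract names adsorption kinetics and isotherms as one basis for adsorbent selection without stating the model used. As an illustration only, the sketch below fits the common Langmuir isotherm to invented equilibrium data via its linearized form, Ce/qe = Ce/qmax + 1/(KL·qmax); the data values and parameters are hypothetical, not from the study.

```python
# Hypothetical sketch: fitting a Langmuir isotherm, qe = qmax*KL*Ce / (1 + KL*Ce),
# to equilibrium adsorption data via the linearized form Ce/qe = Ce/qmax + 1/(KL*qmax).
# All data points below are invented; real values would come from batch adsorption tests.

def fit_langmuir(ce, qe):
    """Least-squares fit of Ce/qe vs Ce; returns (qmax, KL)."""
    x = ce
    y = [c / q for c, q in zip(ce, qe)]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
            sum((xi - mean_x) ** 2 for xi in x)
    intercept = mean_y - slope * mean_x
    qmax = 1.0 / slope        # slope = 1/qmax
    kl = slope / intercept    # intercept = 1/(KL*qmax)  =>  KL = slope/intercept
    return qmax, kl

# Synthetic, noise-free data generated from qmax = 2.0 mg/g and KL = 0.5 L/mg.
ce = [0.5, 1.0, 2.0, 4.0, 8.0]                    # equilibrium concentration, mg/L
qe = [2.0 * 0.5 * c / (1 + 0.5 * c) for c in ce]  # adsorbed amount, mg/g

qmax, kl = fit_langmuir(ce, qe)
print(round(qmax, 3), round(kl, 3))  # recovers 2.0 and 0.5
```

With real, noisy data a nonlinear fit is usually preferred, but the linearized form keeps the idea visible in a few lines.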

Keywords: porous asphalt concrete, volatile organic compounds, permeable reactive pavement, non-point source pollution

Procedia PDF Downloads 197
882 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a base for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records.
The mean estimation accuracy of the setup periods' lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, an investigation of the mentioned parameter groups with respect to the manufacturing order was also carried out. The paper also highlights the experiences from the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur and the related data are collected.
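The abstract does not specify the network architecture. Purely as an illustration of the technique, the sketch below trains a tiny one-hidden-layer network by stochastic gradient descent to map two invented setup features (tool count and batch size) to a setup-period length; the real model, feature set, and data in the study are far richer.

```python
import math
import random

# Illustrative sketch, not the study's model: a one-hidden-layer neural network
# trained by SGD to estimate a setup-period length from two invented features.
random.seed(0)

# Synthetic training data: setup time grows with tool count, shrinks with batch size.
data = [((tools / 10.0, items / 100.0), 0.5 * tools + 20.0 / items)
        for tools in range(1, 11) for items in (10, 20, 50, 100)]
t_max = max(t for _, t in data)
data = [(x, t / t_max) for x, t in data]   # normalize targets to ~[0, 1]

H = 4  # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xj for w, xj in zip(w1[i], x)) + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
lr = 0.1
for _ in range(500):
    for x, t in data:
        h, y = forward(x)
        err = y - t  # gradient of 0.5*err**2 w.r.t. the output
        for i in range(H):
            dh = err * w2[i] * (1 - h[i] ** 2)  # backprop through tanh
            w2[i] -= lr * err * h[i]
            for j in range(2):
                w1[i][j] -= lr * dh * x[j]
            b1[i] -= lr * dh
        b2 -= lr * err
loss_after = mse()
print(loss_before, loss_after)  # training reduces the mean squared error
```

In practice one would use a standard library and the study's grouped features (product, material, tolerance, and setup data) as inputs.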

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 233
881 Economic Factors Affecting Greenfield Petroleum Refinery and Petrochemical Projects in Africa

Authors: Daniel Muwooya

Abstract:

This paper analyses economic factors that have affected the competitiveness of petroleum refinery and petrochemical projects in sub-Saharan Africa in the past and continue to plague greenfield projects today. Traditional factors like plant sizing and complexity, low capacity utilization, a changing regulatory environment, and tighter product specifications have been important in the past. Additional factors include the development of excess refinery capacity in Asia and the growth of renewable sources of energy, especially for transportation. These factors create both challenges and opportunities for the development of greenfield refineries and petrochemical projects in areas of increased demand growth and new low-cost crude oil production, like sub-Saharan Africa. This paper evaluates the strategies available to project developers and host countries to address contemporary issues of energy transition and the apparent reduction of funds available for greenfield oil and gas projects. The paper also evaluates the structuring of greenfield refinery and petrochemical projects for limited-recourse project finance bankability. The methodology of this paper includes analysis of current industry data, conference proceedings, academic papers, and academic books on the subjects of petroleum refinery economics, refinery financing, refinery operations, and project finance generally and specifically in the oil and gas industry; evaluation of expert opinions from journal articles; working papers from international bodies like the World Bank and the International Energy Agency; and experience from playing an active role in the development and financing of a US$ 10 billion greenfield oil development project in Uganda. The paper also applies discounted cash flow modelling to illustrate the circumstances of an inland greenfield refinery project in Uganda.
Greenfield refinery and petrochemical projects are still necessary in sub-Saharan Africa to, among other aspirations, support the transition from traditional sources of energy like biomass to such modern forms as liquefied petroleum gas. Project developers and host governments will be required to structure projects that support global climate change goals without occasioning undue delays to project execution.
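The paper's discounted cash flow model is not reproduced in the abstract. As a generic illustration of the technique only, the sketch below computes the net present value of an invented cash-flow profile; the figures bear no relation to the Ugandan project.

```python
# Generic DCF sketch (not the paper's model): net present value of a cash-flow series.
# All figures are invented; year 0 is the capital outlay, later years are net inflows.

def npv(rate, cash_flows):
    """Net present value: sum of cash_flows[t] / (1 + rate)**t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical profile in US$ million: -100 upfront, then 60 per year for two years.
flows = [-100.0, 60.0, 60.0]
value = npv(0.10, flows)
print(round(value, 2))  # → 4.13, i.e. positive NPV at a 10% discount rate
```

A real refinery model would extend this over a multi-decade horizon with crack-spread, utilization, and financing-cost assumptions.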

Keywords: financing, refinery and petrochemical economics, Africa, project finance

Procedia PDF Downloads 45
880 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Usability has become a basic requirement from the consumer's perspective, and a product that fails this requirement ends up unused. Identifying usability issues by analyzing quantitative and qualitative data collected from usability testing and evaluation activities aids the product design process, yet the scarcity of studies on analysis methodologies for qualitative text data in the usability field limits the potential of these data for more useful applications. Meanwhile, the rapid development of data analysis fields, such as natural language processing for understanding human language computationally and machine learning for providing predictive models and clustering tools, has made the analysis of qualitative text data feasible. Therefore, this research aims to study the capability of text processing algorithms in the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text processing algorithm, includes mapping comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the result of comment vector clustering. The result shows 'volume and music control button' as the usability feature that matches best with the cluster of comment vectors, where the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, the participants experienced less confusion, and thus the comments mentioned only the buttons' positions.
In contrast, when the volume and music control buttons were designed as a single button, the participants experienced interface issues with the buttons, such as unclear operating methods and confusion between function buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms for analyzing qualitative text data from usability testing and evaluation.
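The exact vectorization and clustering pipeline is not given in the abstract. The sketch below is a minimal stand-in: bag-of-words vectors compared by cosine similarity against two theme seeds. The comments and seeds are invented for illustration, not drawn from the LG neckband headset dataset.

```python
import math
from collections import Counter

# Illustrative stand-in for the study's comment-vector clustering:
# bag-of-words vectors plus cosine similarity to two theme seeds.

def vectorize(text):
    """Bag-of-words term counts for a comment."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Invented seed comments for the two themes the study reports:
seed_position = vectorize("button position is hard to reach on the neckband")
seed_interface = vectorize("confusing which button controls volume and which controls music")

comments = [
    "the volume button position is awkward to reach",
    "i kept pressing the wrong button very confusing which controls music",
]

labels = []
for c in comments:
    v = vectorize(c)
    theme = "position" if cosine(v, seed_position) > cosine(v, seed_interface) else "interface"
    labels.append(theme)
print(labels)  # → ['position', 'interface']
```

A real pipeline would typically use TF-IDF or learned embeddings and an unsupervised clustering algorithm such as k-means rather than fixed seeds.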

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 270
879 A Hebbian Neural Network Model of the Stroop Effect

Authors: Vadim Kulikov

Abstract:

The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference versus response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea, developed by the author, that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero.
The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors into spoken color names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNNs). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and its architecture is simpler and biologically more plausible.
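The core mechanism described above can be sketched in a few lines. This is my own toy illustration of plain Hebbian learning with unequal practice, not the author's CTRNN model: co-activation strengthens a connection, and the more practiced pathway (text to speech) ends up stronger than the less practiced one (color to speech), which is the qualitative source of the Stroop asymmetry.

```python
# Toy sketch (not the author's model): plain Hebbian updates, dw = eta * pre * post,
# with more training steps for the reading pathway than for color naming.

eta = 0.01      # learning rate
w_read = 0.0    # word unit -> spoken-word unit
w_name = 0.0    # color unit -> spoken-color-name unit

def hebb(w, pre, post, steps):
    """Repeated Hebbian update: the weight grows with pre/post co-activation."""
    for _ in range(steps):
        w += eta * pre * post
    return w

# Reading is assumed to be far more practiced than color naming.
w_read = hebb(w_read, pre=1.0, post=1.0, steps=1000)
w_name = hebb(w_name, pre=1.0, post=1.0, steps=200)

# The stronger pathway drives its response unit faster, so the weaker
# color-naming response is more open to interference from reading.
print(w_read > w_name)  # → True
```

The actual model would add weight normalization or decay (unbounded Hebbian growth is unstable) and measure reaction times from the CTRNN dynamics.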

Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop

Procedia PDF Downloads 252
878 Toxicity of PPCPs on Adapted Sludge Community

Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton

Abstract:

Wastewater treatment plants (WWTPs) are supposed to hold an important place in the reduction of emerging contaminants, but they provide an environment with potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. Reviewing the literature, little data are available regarding the use of adapted bacteria forming activated sludge communities for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of triclosan (TCS) and ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose, a battery of biomarkers was assessed, involving oxidative stress and cytotoxicity responses: glutathione-S-transferase (GST), catalase (CAT) and viable cells with FDA. In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. The adapted AS came from three continuous-flow AS laboratory systems: two systems received IBU and TCS individually, while the third received the binary combination, for 14 days. After adaptation, each bacterial culture condition was exposed to IBU, TCS and the combination for 12 h. The concentrations of IBU and TCS ranged over 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) to determine the activity of the CAT enzyme based on quantification of the oxygen production rate. A fluorimetric technique was applied as well, using a Fluoroskan Ascent FL (Thermo), to determine the activity of the GST enzyme with monochlorobimane-GSH as substrate and to estimate the viable cells of the sludge by fluorescence staining with fluorescein diacetate (FDA). For IBU-adapted sludge, CAT activity increased at low concentrations of IBU, TCS and the mixture.
However, with increasing concentration the behavior differed: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variation was observed. For TCS-adapted sludge, no significant variation in CAT activity was observed, and GST activity was significantly decreased by all contaminants. For mixture-adapted sludge, the behaviour of CAT activity was similar to that of IBU-adapted sludge; GST activity decreased at all concentrations of IBU, while the presence of TCS and the mixture, respectively, increased GST activity. These findings were consistent with the cell viability evaluation, which clearly showed variation in sludge viability. Our results suggest that, compared with unadapted bacteria, adaptation plays a relevant role in the toxicity behaviour towards activated sludge communities.

Keywords: adapted sludge community, mixture, PPCPs, toxicity

Procedia PDF Downloads 387
877 Aerothermal Analysis of the Brazilian 14-X Hypersonic Aerospace Vehicle at Mach Number 7

Authors: Felipe J. Costa, João F. A. Martos, Ronaldo L. Cardoso, Israel S. Rêgo, Marco A. S. Minucci, Antonio C. Oliveira, Paulo G. P. Toro

Abstract:

The Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics at the Institute for Advanced Studies designed the Brazilian 14-X Hypersonic Aerospace Vehicle, a technological demonstrator endowed with two innovative technologies: waverider technology, to obtain lift from the conical shockwave during hypersonic flight; and a hypersonic airbreathing propulsion system called a scramjet, based on supersonic combustion, to perform flights in Earth's atmosphere at 30 km altitude at Mach numbers 7 and 10. The scramjet is an aeronautical engine without moving parts that promotes compression and deceleration of the freestream atmospheric air at the inlet through the conical/oblique shockwaves generated during hypersonic flight. During high-speed flight, the shock waves and the viscous forces produce the phenomenon called aerodynamic heating, whose physical meaning is the friction between the fluid filaments and the body, or compression at the stagnation regions of the leading edge, which converts kinetic energy into heat within a thin layer of air that blankets the body. The temperature of this layer increases with the square of the speed. This high temperature is concentrated in the boundary layer, from which heat flows readily into the hypersonic aerospace vehicle's structure. The Fay and Riddell and Eckert methods are applied to the stagnation point and to the flat plate segments, respectively, in order to calculate the aerodynamic heating. Building on this understanding of the aerodynamic heating, it is important to analyze the heat conduction into the 14-X waverider's internal structure. ANSYS Workbench software provides the thermal numerical analysis, using the Finite Element Method, of the unpowered 14-X waverider scramjet at 30 km altitude at Mach numbers 7 and 10 in terms of temperature and heat flux.
Finally, it is possible to verify whether the internal temperature complies with the requirements for the embedded systems and whether it is necessary to modify the structure in terms of wall thickness and materials.
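A Fay-Riddell computation needs boundary-layer edge properties, so as a simpler stand-in the sketch below uses the Sutton-Graves engineering correlation for stagnation-point heat flux, q = k·sqrt(ρ/Rn)·V³ (k ≈ 1.7415e-4 in SI units for Earth's atmosphere). The freestream values approximate 30 km altitude; the nose radius is an assumed illustrative value, not the 14-X geometry.

```python
import math

# Stagnation-point heating sketch using the Sutton-Graves correlation:
#   q = k * sqrt(rho / r_n) * V**3   (SI units, Earth atmosphere).
# This is a common engineering stand-in, not the Fay-Riddell method used in the paper.

K_SUTTON_GRAVES = 1.7415e-4  # kg^0.5 / m

def stagnation_heat_flux(rho, velocity, nose_radius):
    """Convective stagnation-point heat flux in W/m^2."""
    return K_SUTTON_GRAVES * math.sqrt(rho / nose_radius) * velocity ** 3

rho_30km = 0.0184     # kg/m^3, approximate air density at 30 km (US Standard Atmosphere)
a_30km = 301.7        # m/s, approximate speed of sound at 30 km
v_mach7 = 7 * a_30km  # ~2112 m/s
r_nose = 0.005        # m, assumed small leading-edge radius (illustrative only)

q7 = stagnation_heat_flux(rho_30km, v_mach7, r_nose)
q2x = stagnation_heat_flux(rho_30km, 2 * v_mach7, r_nose)
print(f"q at Mach 7: {q7 / 1e6:.2f} MW/m^2")
print(q2x / q7)  # heat flux scales with V**3, so doubling speed gives 8x
```

The cubic velocity dependence is why the abstract's leading-edge heating concern grows so sharply between Mach 7 and Mach 10.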

Keywords: aerodynamic heating, hypersonic, scramjet, thermal analysis

Procedia PDF Downloads 430