Search results for: three angle complex rotation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7047

477 Providing Support On-Time: Need to Establish De-Radicalization Hotlines

Authors: Ashir Ahmed

Abstract:

Peacekeeping is a collective responsibility of governments, law enforcement agencies, communities, families, and individuals. Moreover, the complex nature of peacekeeping activities requires a holistic and collaborative approach in which various community sectors work together to form collective strategies that are likely to be more effective than strategies designed and delivered in isolation. Similarly, it is important to learn from past programs to evaluate the initiatives that have worked well and the areas that need further improvement. A review of recent peacekeeping initiatives suggests that tremendous effort and resources have been put in place to deal with the emerging threat of terrorism, radicalization and violent extremism through a number of de-radicalization programs. Despite various attempts at designing and delivering successful de-radicalization programs, the threat of people being radicalized is growing more than ever before. This research reviews the prominent de-radicalization programs to draw an understanding of their strengths and weaknesses. Some of the weaknesses in the existing programs include the following. Inaccessibility: Limited resources, the geographical location of potential participants (for offline programs), and inaccessibility or inability to use various technologies (for online programs) make it difficult for people to participate in de-radicalization programs. Timeliness: People might need to wait for a program on a set date/time to get the required information and to get their questions answered; this is particularly true for offline programs. Lack of trust: Privacy issues and a lack of trust between participants and program organizers are another hurdle to the success of de-radicalization programs. The fear that their information will be shared with organizations (such as law enforcement agencies) without their consent leads people not to participate in these programs.
Generalizability: The majority of these programs are very generic in nature and do not cater to the specific needs of an individual. Participants may feel that the contents are irrelevant to their individual situations and hence feel disconnected from the purpose of the programs. To address the above-mentioned weaknesses, this research developed a framework that recommends some improvements to de-radicalization programs. One of the recommendations is to offer a 24/7, secure, private, online hotline (also referred to as a helpline) for people who have a question, concern or situation to discuss with someone qualified (a counsellor) to deal with people who are vulnerable to radicalization. To make these hotline services viable and sustainable, existing organizations offering support for depression, anxiety or suicidal ideation could additionally host them. These helplines should be available via phone, the internet, social media and in person. Since these services would be embedded within existing and well-known services, they would be likely to get more visibility and promotion. The anonymous and secure conversation between a person and a counsellor would ensure that issues can be discussed without fear of information being shared with any third party without the person's consent. The next stage of this project will include the operationalization of the framework by collaborating with other organizations to host de-radicalization hotlines, and will assess the effectiveness of such initiatives.

Keywords: de-radicalization, framework, hotlines, peacekeeping

Procedia PDF Downloads 216
476 Contraceptives: Experiences of Agency and Coercion of Young People Living in Colombia

Authors: Paola Montenegro, Maria de los Angeles Balaguera Villa

Abstract:

Contraceptive methods play a fundamental role in preventing unwanted pregnancies and protecting users from sexually transmitted infections (STIs). Despite being known to almost the entire population of reproductive age living in Colombia, there are barriers, practices and complex notions about contraceptives that affect their desired mass use and effectiveness. This work aims to analyse some of the perceptions and practices discussed with young people (13-28 years old) living in Colombia regarding the use of contraceptives in their daily lives, their preferences and needs, and perceived side effects. This research also examines a perceived paradox in the autonomy that young people experience regarding contraceptive use: on the one hand, its use (or lack of it) is interpreted as an act of self-determination and a primary example of reproductive agency; on the other hand, it was frequently associated with coercion and limited autonomy derived from gaps in the reliable information available to young people, the difficulty of accessing certain preferred methods, and sometimes coercion exercised by doctors, partners and/or family members. The data and analysis discussed in this work stem from a research project whose objective was to provide information about the sexual and reproductive health needs and preferences of young people living in Colombia in relation to a possible telehealth service that could close the gap in access to quality care and safe information. Through a mixed-methods approach, this study collected 5,736 responses to a virtual survey disseminated nationwide in Colombia and conducted 47 in-person interviews (24 of them with people who were assigned female at birth and 21 with local key stakeholders in the abortion ecosystem). Quantitative data were analyzed using Stata SE Version 16.0, and qualitative analysis was completed in NVivo using thematic analysis.
Key findings on contraception use among young people living in Colombia reveal that 85.8% of participants had used a contraceptive method in the last two years, and that the most commonly used methods were condoms, contraceptive pills, the morning-after pill and the withdrawal (interruption) method. The remaining 14.2% of respondents, who declared not having used contraceptives in the last two years, expressed that the main four barriers to access were: "Lack of knowledge about contraceptive methods and where to obtain information and/or access them" (13.9%), "Have had sex with people who have vaginas" (10.2%), "Cost of contraceptive method" (8.4%) and "Difficulties in obtaining medical authorisations" (7.6%). These barriers coincided with those used to explain the non-use of contraceptives among young people, which reveals that limitations in information, cost, and quality care represent structural issues that need to be addressed in programmes, services, and public policy. Finally, the interviews showed that young people perceive contraceptive use and non-use as an example of reaffirming reproductive agency, and that limitations to this can be explained by the widespread incomplete knowledge about how methods work and the prevalence of other social representations of contraception, associated with trust, fidelity, and partner preferences, that ultimately limit young people's autonomy.

Keywords: contraception, family planning, premarital fertility, unplanned pregnancy

Procedia PDF Downloads 77
475 Developing Granular Sludge and Maintaining High Nitrite Accumulation for Anammox to Treat Municipal Wastewater High-efficiently in a Flexible Two-stage Process

Authors: Zhihao Peng, Qiong Zhang, Xiyao Li, Yongzhen Peng

Abstract:

Nowadays, the conventional nitrogen removal process (nitrification and denitrification) is adopted in most wastewater treatment plants, but it brings many problems, such as high aeration energy consumption, extra carbon source dosage and high sludge treatment costs. The emergence of anammox has brought about a great revolution in nitrogen removal technology: only ammonia and nitrite are required to remove nitrogen autotrophically, with no demand for aeration or sludge treatment. However, anammox applications face many challenges: difficulty of biomass retention, insufficiency of the nitrite substrate, damage from complex organics, etc. Much research effort has been put into overcoming these challenges, and it has been rewarded. It is also imperative to establish an innovative process that can settle the above problems synchronously, since any one of the obstacles mentioned can cause the collapse of an anammox system. Therefore, in this study, a two-stage process was established in which a sequencing batch reactor (SBR) and an upflow anaerobic sludge blanket (UASB) were used as the pre-stage and post-stage, respectively. The domestic wastewater first entered the SBR and went through an anaerobic/aerobic/anoxic (An/O/A) mode; the effluent drawn at the aerobic end of the SBR was mixed with domestic wastewater, and the mixture then entered the UASB. Organic and nitrogen removal performance was evaluated over long-term operation. Throughout the operation, most COD was removed in the pre-stage (COD removal efficiency > 64.1%), including some macromolecular organic matter such as tryptophan-like, tyrosine-like and fulvic acid substances, which weakened the damage of organic matter to anammox. The An/O/A operating mode of the SBR was also beneficial to the achievement and maintenance of partial nitrification (PN); hence, a sufficient and steady nitrite supply was another favorable condition for anammox enhancement.
Besides, the flexible mixing ratio helped to attain a substrate ratio appropriate to anammox (1.32-1.46), which further enhanced the anammox process. In the post-stage, the UASB was used and a gas recirculation strategy was adopted, aiming to achieve granulation through selection pressure. As expected, granules formed rapidly within 38 days, with the mean size increasing from 153.3 to 354.3 μm. Based on bioactivity and gene measurements, the anammox metabolic activity and gene abundance rose evidently, by 2.35 mgN/gVSS·h and 5.3 × 10⁹, respectively. The anammox bacteria were mainly distributed in the large granules (> 1000 μm), while the biomass in the flocs (< 200 μm) and microgranules (200-500 μm) barely displayed anammox bioactivity. Enhanced anammox promoted advanced autotrophic nitrogen removal, with efficiency increasing from 71.9% to 93.4%, even when the temperature was only 12.9 °C. It is therefore feasible to enhance anammox under the multiple favorable conditions created here; the strategy extends the application of anammox to full-scale mainstream treatment and deepens the understanding of anammox culturing conditions.
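The flexible mixing idea above can be illustrated with a small mass-balance sketch. All flow rates and concentrations below are assumed for illustration, not taken from the study; only the target range (an NO2-N : NH4-N ratio of roughly 1.32-1.46 entering the anammox stage) comes from the abstract.

```python
# Illustrative sketch (assumed values): blend nitrite-rich partial-nitritation
# effluent with ammonium-rich raw wastewater so the NO2-N : NH4-N ratio of the
# combined anammox-stage influent lands in the favorable 1.32-1.46 range.

def substrate_ratio(q_pn, no2_pn, nh4_pn, q_raw, nh4_raw):
    """NO2-N : NH4-N ratio of the blend (flows in L/h, concentrations in mg/L)."""
    no2 = q_pn * no2_pn                      # nitrite comes only from the PN stage
    nh4 = q_pn * nh4_pn + q_raw * nh4_raw    # ammonium from both streams
    return no2 / nh4

# Assumed: PN effluent 40 mg/L NO2-N with 8 mg/L residual NH4-N;
# raw sewage 45 mg/L NH4-N; flows chosen to hit the target window.
r = substrate_ratio(q_pn=10, no2_pn=40.0, nh4_pn=8.0, q_raw=4.5, nh4_raw=45.0)
print(round(r, 2))  # -> 1.42, inside the 1.32-1.46 window
```

Raising `q_raw` lowers the ratio and vice versa, which is the lever the flexible mixing ratio provides.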

Keywords: anammox, granules, nitrite accumulation, nitrogen removal efficiency

Procedia PDF Downloads 49
474 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions following early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted; 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5, and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be more suited to predicting mortality than to detecting early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.
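The abstract's cut-offs were derived from receiver operating characteristic (ROC) analysis. A common way to pick such a cut-off is to scan candidate thresholds and keep the one maximizing Youden's J = sensitivity + specificity − 1; the sketch below shows that idea on invented toy data (the INR values and labels are illustrative, not the study's cohort).

```python
# Toy sketch of ROC-based cut-off selection via Youden's J.
# Rule: value >= threshold -> predicted coagulopathic.

def youden_cutoff(values, labels):
    """Return (best_threshold, best_J) over all observed thresholds."""
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= t and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < t and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < t and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented INR values; coagulopathic patients (label 1) have higher INR.
inr = [1.0, 1.05, 1.1, 1.15, 1.19, 1.25, 1.3, 1.4]
coag = [0, 0, 0, 0, 1, 1, 1, 1]
t, j = youden_cutoff(inr, coag)
print(t, j)  # perfectly separable toy data -> threshold 1.19, J = 1.0
```

On real data the classes overlap, so J < 1 and the chosen threshold trades sensitivity against specificity.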

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
473 The Different Effects of Mindfulness-Based Relapse Prevention Group Therapy on QEEG Measures in Various Severity Substance Use Disorder Involuntary Clients

Authors: Yu-Chi Liao, Nai-Wen Guo, Chun‑Hung Lee, Yung-Chin Lu, Cheng-Hung Ko

Abstract:

Objective: The incidence of behavioral addictions, and especially substance use disorders (SUDs), is gradually being taken seriously along with various physical health problems. Mindfulness-based relapse prevention (MBRP) has become a treatment option for promoting long-term health behavior change in recent years. MBRP is a structured protocol that integrates formal meditation practices with the cognitive-behavioral approach of relapse prevention treatment, teaching participants not to engage in reappraisal or savoring techniques. However, considering SUDs as a complex brain disease, questionnaires and symptom evaluation are not sufficient to evaluate the effect of MBRP. Neurophysiological biomarkers such as the quantitative electroencephalogram (QEEG) may represent curative effects more accurately. This study attempted to find a neurophysiological indicator of MBRP in involuntary clients with SUDs of various severities. Participants and Methods: Thirteen participants (all males) completed an 8-week mindfulness-based treatment provided by trained, licensed clinical psychologists. The behavioral data were from the Severity of Dependence Scale (SDS) and the Negative Mood Regulation Scale (NMR) before and after MBRP treatment. The QEEG data were recorded simultaneously with executive attention tasks, the Comprehensive Nonverbal Attention Test (CNAT). Two-way repeated-measures (treatment × severity) ANOVA and independent t-tests were used for statistical analysis. Results: The thirteen participants were regrouped into high substance dependence (HS) and low substance dependence (LS) groups by the SDS cut-off. The HS group showed a higher SDS total score and a lower gamma wave in the Go/No-Go task of the CNAT at pretest. Both groups showed a main effect of a lower frontal theta/beta ratio (TBR) during the simple reaction time task of the CNAT, and a main effect of fewer delay errors on the CNAT after MBRP. There was no other difference in the CNAT between groups.
However, after MBRP, the HS group showed greater progress than the LS group in improving SDS and NMR scores. As for the neurophysiological index, the frontal TBR of the HS group during the Go/No-Go task of the CNAT decreased more than that of the LS group. In contrast, the LS group's gamma wave showed a significant reduction on the Go/No-Go task of the CNAT. Conclusion: The QEEG data support that MBRP can restore the prefrontal function of involuntary addicted clients and lower their errors in executive attention tasks. However, the improvement from MBRP for addicts with high addiction severity is significantly greater than for those with low severity, in both the QEEG indicators and negative emotion regulation. Future directions include investigating the reasons for differences in efficacy among different severities of addiction.
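The frontal theta/beta ratio (TBR) reported above is a standard QEEG measure: spectral power in the theta band (roughly 4-7 Hz) divided by power in the beta band (roughly 13-30 Hz). The sketch below computes it on a synthetic two-tone signal with a plain DFT; the band edges, sampling rate and signal are illustrative assumptions, not the study's recording parameters.

```python
# Toy TBR computation: sum DFT power over each band, then take the ratio.
import math

def band_power(x, fs, f_lo, f_hi):
    """Sum of DFT power in [f_lo, f_hi] Hz (naive DFT; fine for short epochs)."""
    n = len(x)
    total = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += re * re + im * im
    return total

fs, n = 128, 256  # assumed 2-second epoch at 128 Hz
sig = [2.0 * math.sin(2 * math.pi * 6 * t / fs)     # theta component (6 Hz)
       + 1.0 * math.sin(2 * math.pi * 20 * t / fs)  # beta component (20 Hz)
       for t in range(n)]
theta = band_power(sig, fs, 4, 7)
beta = band_power(sig, fs, 13, 30)
tbr = theta / beta
print(round(tbr, 2))  # amplitude ratio 2:1 -> power ratio ~4.0
```

A falling frontal TBR, as reported for the HS group, would show up here as theta power shrinking relative to beta power.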

Keywords: mindfulness, involuntary clients, QEEG, emotion regulation

Procedia PDF Downloads 147
472 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also showcases how successful an organization is in holding on to its customers. It is a well-established observation that the lion's share of profit comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover required customer segments and narrowing it down to multiple offer sequences based on defined parameters are keys to successful time travel testing.
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will describe the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will cover the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's real-world experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings that will help other teams implement time travel testing successfully.
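One common mechanical realization of "time traveling a system" is an injectable clock: production code reads the date through an abstraction, and tests shift that clock to a renewal boundary without touching OS time or production data. The names below (`TestClock`, `renewal_due`, the 30-day window) are hypothetical, a minimal sketch of the idea rather than any specific system from the paper.

```python
# Minimal sketch of time travel testing via an injectable clock.
import datetime as dt

class TestClock:
    """Test double for a clock; production code would use real time."""
    def __init__(self, start):
        self._now = start
    def travel(self, days):
        self._now += dt.timedelta(days=days)  # move the system date forward/back
    def today(self):
        return self._now

def renewal_due(policy_end, clock, window_days=30):
    """A policy enters its renewal window `window_days` before it ends."""
    return 0 <= (policy_end - clock.today()).days <= window_days

clock = TestClock(dt.date(2024, 1, 1))
end = dt.date(2024, 6, 1)
before = renewal_due(end, clock)   # 152 days out: not yet in window -> False
clock.travel(days=140)             # "time travel" to 2024-05-20, 12 days out
after = renewal_due(end, clock)    # inside the 30-day renewal window -> True
print(before, after)
```

The same pattern scales to the whitepaper's enterprise case: every impacted system reads the shared travelable clock, so upstream and downstream applications see a consistent shifted date.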

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 160
471 A Work-Individual-Family Inquiry on Mental Health and Family Responsibility of Dealers Employed in Macau Gaming Industry

Authors: Tak Mau Simon Chan

Abstract:

While there is growing reflection on the adverse impacts of the flourishing gaming industry on the physical health and job satisfaction of those who work in Macau casinos, there is also a critical void in our understanding of the mental health of croupiers and of how casino employment interacts with the family system. From a systemic approach, it would be most effective to examine 'dealer issues' collectively and offer assistance both to the individual dealer and to the family system of dealers. Therefore, using a mixed-method study design, the present study measured the levels of anxiety, depression and sleep quality of a sample of 1,124 dealers working in Macau casinos, and interviewed 113 dealers about the impacts of casino employment on their family life. This study presents some very important findings. First, the quantitative study indicates that gender is a significant predictor of depression and anxiety levels, while lower income means poorer sleep quality. The Pearson correlation coefficients show that as Zung Self-rating Anxiety Scale (ZSAS) scores increase, Zung Self-rating Depression Scale (ZSDS) and Pittsburgh Sleep Quality Index (PSQI) scores also increase. Higher income, therefore, might partly explain why mothers choose to work in the gaming industry even with the shift work involved and a stressful work environment. Second, the findings from the qualitative study show that, aside from the positive impacts on family finances, the shift work and job stress to some degree negatively affect family responsibilities and relationships. The resultant family issues include missed family activities and reduced parental care and guidance, marital intimacy, and communication with family members.
Despite mixed views on gender role differences, the respondents generally agree that female dealers have more family and child-minding responsibilities at home, and thus it is more difficult for them to balance work and family; consequently, they may be more vulnerable to stress at work. Third, there are interrelationships between work and family, based on a systemic inquiry that incorporates work, individual and family. Poor physical and psychological health due to shift work or a harmful work environment can affect not just work performance but also life at home. Therefore, a few practice points about 1) work-family conflicts in Macau, 2) families-in-transition in Macau, and 3) gender and class sensitivity in Macau are provided for social workers and family practitioners, which will greatly benefit these families, especially those whose members work in the gaming industry in Macau. It is concluded that, in addressing the cultural phenomenon of the "dealer's complex" in Macau, a systemic approach is recommended that addresses both the personal psychological needs and the family issues of dealers.

Keywords: family, work stress, mental health, Macau, dealers, gaming industry

Procedia PDF Downloads 305
470 Parenting Interventions for Refugee Families: A Systematic Scoping Review

Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai

Abstract:

Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g. trauma, mental health concerns, separation, relocation, poverty) that place them at an increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. Very few studies in the literature examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents will alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O'Malley's descriptive analysis model for scoping reviews. Results: Seven studies met the criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting as a major concern, citing alienating or unwelcoming services, language barriers, and lack of familiarity with school and early-years services. Services that focused on building the resilience of parents or on parent education, that were provided in the family's native language, and that offered families safe spaces to promote parent-child interactions were the most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks.
Interventions were designed in a prescriptive manner, without direct participation by family members and not directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices or the caregiving environment, or child development outcomes, primarily focusing on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in their native language. To date, there are no studies that employ a participatory approach to program design to tailor content or accessibility, and few that employ parenting, developmental, behavioural, or environmental outcome measures.

Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families

Procedia PDF Downloads 165
469 Designing an Operational Control System for the Continuous Cycle of Industrial Technological Processes Using Fuzzy Logic

Authors: Teimuraz Manjapharashvili, Ketevani Manjaparashvili

Abstract:

Fuzzy logic is a modeling method for complex or ill-defined systems and a relatively new mathematical approach. Its basis is to consider overlapping cases of parameter values and to define operations that manipulate these cases. Fuzzy logic can be used to successfully create automatic operational management systems or appropriate advisory systems. Fuzzy logic techniques in various operational control technologies have grown rapidly in the last few years, and fuzzy logic is used in many areas of human technological activity. In recent years, fuzzy logic has proven its great potential, especially in the automation of industrial process control, where it allows a control design to be formed on the basis of expert experience and experimental results. Chemical process engineering uses fuzzy logic in optimal management, and it is also used in process control, including the operational control of continuous-cycle chemical industrial technological processes, where special features appear due to the continuous cycle and correct management acquires special importance. This paper discusses how intelligent systems can be developed and, in particular, how fuzzy logic can be used to build knowledge-based expert systems in chemical process engineering. The implemented projects reveal that the use of fuzzy logic in technological process control has already given better solutions than standard control techniques. Fuzzy logic makes it possible to develop an advisory system for decision-making based on the historical experience of the managing operator and of experienced experts. The present paper deals with operational control and management systems, including advisory systems, for continuous-cycle chemical technological processes. Because of the continuous cycle, these systems have many features not present in the operational control of other chemical technological processes.
Among these features are a greater risk of transitioning to emergency mode, and the need to return from emergency mode to normal mode very quickly, since the technological process cannot be stopped and defective products are released during this period (i.e., a loss is incurred); accordingly, a highly qualified operator is needed to manage the process. For these reasons, operational control systems for continuous-cycle chemical technological processes are discussed specifically, as they are distinct systems. The special features of such systems in control and management were identified, and these determine how their control and management systems are constructed. To verify the findings, the paper discusses the development of an advisory decision-making information system for the operational control of a lime kiln using fuzzy logic, based on the creation of a relevant expert-targeted knowledge base. The control system has been implemented in a real lime production plant with a lime-burning kiln, which has shown that suitable, intelligent automation improves operational management, reduces the risk of releasing defective products, and therefore reduces costs. The advisory system was successfully used in the plant both to improve operational management and, when necessary, to train new operators, given the lack of an appropriate training institution.
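The core fuzzy-control loop the abstract describes can be sketched in a few lines: fuzzify a measured variable into overlapping linguistic categories, fire expert rules, and defuzzify into an advisory action. The membership ranges, rule outputs and the temperature-error variable below are assumptions for illustration, not the paper's actual lime-kiln knowledge base (a zero-order Sugeno / weighted-average scheme is used for simplicity).

```python
# Toy fuzzy advisory rule base for a kiln: temperature error -> fuel adjustment.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuel_adjustment(temp_error):
    """temp_error = setpoint - measured burning-zone temperature (degrees C)."""
    too_cold = tri(temp_error, 0, 50, 100)     # kiln well below setpoint
    ok       = tri(temp_error, -50, 0, 50)     # near setpoint
    too_hot  = tri(temp_error, -100, -50, 0)   # kiln well above setpoint
    # Expert rules: cold -> +10% fuel, ok -> hold, hot -> -10% fuel.
    num = too_cold * 10.0 + ok * 0.0 + too_hot * (-10.0)
    den = too_cold + ok + too_hot
    return num / den if den else 0.0

a0 = fuel_adjustment(0)    # on setpoint -> 0.0 (hold)
a25 = fuel_adjustment(25)  # mildly cold: half "cold", half "ok" -> +5.0%
print(a0, a25)
```

Because the membership functions overlap, the advice changes smoothly with the measurement, which is exactly the property that makes fuzzy advisory output readable for an operator.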

Keywords: chemical process control systems, continuous cycle industrial technological processes, fuzzy logic, lime kiln

Procedia PDF Downloads 30
468 Wrestling with Religion: A Theodramatic Exploration of Morality in Popular Culture

Authors: Nicholas Fieseler

Abstract:

The nature of the religion implicit in popular culture is relevant both in and out of the university. The traditional rules-based conception of religion, and the ethical systems that emerge from it, do not necessarily convey the behavior of daily life as it exists apart from spaces deemed sacred. This paper proposes to examine the religion implicit in the popular culture phenomenon of professional wrestling and how that affects the understanding of popular religion. Pro wrestling, while frequently dismissed, offers a unique avenue through which to re-examine religion in popular culture. A global phenomenon, pro wrestling occupies a distinct space in numerous countries and presents a legitimate reflection of human behavior cross-culturally on a scale few other phenomena can equal. Given its global viewership of millions, it should be recognized as a significant means of interpreting the human attraction to violence and its association with religion in general. Hans Urs von Balthasar's theory of theodrama will be used to interrogate the inchoate religion within pro wrestling. While Balthasar developed theodrama within the confines of Christian theology, theodrama has remarkable versatility in its potential utility. Since theodrama re-envisions reality as drama, the actions of every human actor on the stage contribute to the play's development, and all action contains some transcendent value. It is in this sense that even the "low brow" activity of pro wrestling may be understood in religious terms. Moreover, a pro wrestling storyline acts as a play within a play: the struggles in a pro wrestling match reflect human attitudes toward life as it exists in the sacred and profane realms. The indistinct lines separating traditionally good (face) from traditionally bad (heel) wrestlers mirror the moral ambiguity through which many people interpret life.
This blurred distinction between good and bad, and large segments of an audience’s embrace of the heel wrestlers, reveal ethical constraints that guide the everyday values of pro wrestling spectators, a moral ambivalence that is often overlooked by traditional religious systems and has hitherto been neglected in the academic literature on pro wrestling. The significance of interpreting the religion implicit in pro wrestling through a theodramatic lens extends beyond pro wrestling specifically and can illuminate the religion implicit in popular culture in general. The use of theodrama mitigates the rigid separation often ascribed to areas deemed sacred/profane or transcendent/immanent, enabling a re-evaluation of religion and ethical systems as practiced in popular culture. The use of theodrama will be expressed by utilizing the pro wrestling match as a literary text that reflects the society from which it emerges. This analysis will also reveal the complex nature of religion in popular culture and provide new directions for the academic study of religion. This project consciously bridges the academic and popular realms. The goal of the research is not only to add to the academic literature on implicit religion in popular culture but to publish it in a form which speaks to those outside the standard academic audiences for such work.

Keywords: ethics, popular religion, professional wrestling, theodrama

Procedia PDF Downloads 142
467 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired with a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification.
One of the main differences among these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli have been commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The efficiency of using the raw signal varied between 43% and 84%. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
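The five input-stimulus types can be illustrated with a short numpy sketch. The window length, sampling rate, descriptor choices and the Haar wavelet decomposition below are illustrative assumptions, not the configurations used in the study:

```python
import numpy as np

def eeg_input_stimuli(x, win=32):
    """Simplified versions of the five input-stimulus types compared in
    the study (parameters and descriptors are illustrative)."""
    x = np.asarray(x, dtype=float)
    stimuli = {}
    # 1. Raw EEG signal (fed to the network as-is)
    stimuli["raw"] = x
    # 2. Morphological descriptors: simple shape parameters of the window
    stimuli["morphology"] = np.array([
        x.max() - x.min(),            # peak-to-peak amplitude
        np.mean(np.abs(np.diff(x))),  # mean absolute slope (sharpness)
        ((x[:-1] * x[1:]) < 0).sum()  # zero crossings (oscillatory content)
    ])
    # 3. FFT magnitude spectrum of the whole window
    stimuli["fft"] = np.abs(np.fft.rfft(x))
    # 4. STFT spectrogram: FFT of successive short sub-windows
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, win)]
    stimuli["stft"] = np.abs(np.fft.rfft(np.array(frames), axis=1))
    # 5. Haar wavelet features: detail energies per decomposition level
    feats, approx = [], x[: 2 ** int(np.log2(len(x)))]
    for _ in range(3):
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        feats.append(np.sum(detail ** 2))
    stimuli["wavelet"] = np.array(feats + [np.sum(approx ** 2)])
    return stimuli

# Example: one 1-second window at 256 Hz containing a spike-like transient
t = np.arange(256) / 256.0
window = np.sin(2 * np.pi * 10 * t) + 5 * np.exp(-((t - 0.5) / 0.02) ** 2)
s = eeg_input_stimuli(window)
```

In a classification pipeline, each of these representations would be normalised and fed to a separate network so the resulting efficiencies can be compared.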

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 530
466 Benefits of an Oral Association of Glycosaminoglycans and Type II Collagen (Glycosane®) on Mobility in Senior Dogs: A Pet-Owner Survey

Authors: Navarro, Delaup, Lacreusette, Jahier, Destaing, Gard

Abstract:

Background: A complementary feed designed to support joint metabolism and contribute to cartilage integrity in dogs was evaluated through a pet-owner study involving 21 senior dogs experiencing a decrease in mobility. The study aimed to assess the product's benefits, ease of use, and impact on quality of life over a 56-day period. Methods: Privately owned dogs over six years old with reduced mobility and no change in their mobility management within the last three months were recruited. They received a chicken cartilage hydrolysate complementary feed containing a complex of glycosaminoglycans and type II collagen (Glycosane®, MP Labo, France; one capsule per dog up to 40 kg, two capsules beyond) once a day for 56 days. Assessments were performed at baseline (D0) and at follow-ups on D7, D28, and D56: the revised LOAD (Liverpool Osteoarthritis in Dogs) and CBPI (Canine Brief Pain Inventory) were used to evaluate mobility, pain intensity, and pain interference. Owners also completed a questionnaire on quality of life (QoL), comprising 7 questions on the animal’s well-being (QoL1) and 7 questions on the owner’s well-being (QoL2). Statistical analyses were performed using mixed models for repeated measures. The significance level was set at p<0.05. Results: (1) Population: 21 dogs were included. The mean age was 10.2 years [6–14.5]. (2) Mobility: 71% of owners reported enhanced mobility by D56. Improvements were observed in half of the cases after 21 days of supplementation, with notable changes evident as early as 14 days in 39% of cases. LOAD scores showed significant improvement over time (p=0.0019). (3) Comfort: CBPI severity scores decreased significantly from baseline to D28 and D56 (p=0.0300 and p=0.0271, respectively). The CBPI QoL score was also significantly improved at D56 compared to D7 (p=0.0440). (4) Quality of life: The QoL total score improved significantly by D56 compared to baseline (p=0.0089), with a specific improvement of QoL1 (p=0.0015).
(5) Owners' insights: Glycosane® received a high ease-of-use rating (mean score 4.4/5), with excellent compliance (95%). Oral intake was rated at 4.3/5. Willingness to walk (19%), increased activity, ability to run and/or jump from short heights, and a happier animal (11%) were among the most cited benefits. Owners noted enhanced comfort (78%) and happiness (79%) in their dogs, with a 60% perception of restored good mobility. Conclusion: The complementary feed demonstrates significant benefits in enhancing mobility and quality of life in senior dogs. Its high ease of administration supports owner compliance and satisfaction. These findings support Glycosane® as a valuable nutritional aid in helping to maintain canine mobility. Further studies with larger cohorts and a control group are recommended to validate these results.

Keywords: canine mobility, complementary feed, LOAD, CBPI, quality of life, Glycosane

Procedia PDF Downloads 14
465 Comparison of Head Kinematics Resulting from Reconstructed Direct and Non-Direct Head-to-Glass Impacts in Ice Hockey

Authors: Ella Bowles, Alexandra Hughes, Clara Karton, T. Blaine Hoshizaki

Abstract:

In a fast-paced and physical game such as professional men's ice hockey, body contact is inevitable. Despite efforts and advancements in material engineering to create safer equipment, brain trauma continues to burden hockey players. Head and body contact occur in many ways and vary in terms of impact characteristics including the inbound velocity, force, direction, location, and compliance of the surfaces, which in turn influence head dynamics and brain injury outcomes including concussions. It has been reported that glass and board impacts account for approximately 40% of diagnosed concussions. This type of impact often involves the body (i.e., shoulder) contacting the surface prior to head contact, which may influence the head’s dynamic response by interrupting the head’s initial trajectory. However, the effect of body-first contact during head impacts is not well understood. The purpose of this research is to compare the head’s kinematic response during direct and non-direct (body-first) head-to-glass impacts representative of ice hockey events. Analysis was performed under varying impact conditions of neck stiffness and impact velocity, as these have been shown to influence the resulting head dynamics. Data were collected by video analysis of the 2016-17 NHL season, and event reconstructions were performed using a Hybrid III headform, an unbiased neck with tension springs (uONSA), and a high-speed impactor. Direct and non-direct impacts were analyzed at three common velocities (3.0, 5.0, 7.0 m/s) and three neck stiffnesses representing low (25%), medium (75%), and high (100%) contraction. Reconstructions representing non-direct head-to-glass impacts used a shoulder bumper as the first point of contact, followed by the head’s contact with the glass. The same method and equipment were used to replicate the direct head impacts, where the head made initial contact with the glass.
The dynamic response of the head, specifically the peak resultant linear and rotational acceleration, was collected for each impact and compared between direct and non-direct contact under each condition. The results show that non-direct impacts created an initial head acceleration resulting from shoulder contact, preceding a secondary acceleration response from head contact with the glass. Compared to direct head impacts, non-direct impacts consistently resulted in lower linear and rotational acceleration of the head under all neck stiffness and velocity conditions, with average decreases of 32.56 g and 689.33 rad/s². However, the linear acceleration produced from shoulder contact in non-direct impacts resulted in a higher response compared to direct impacts with low neck stiffness at 5 m/s (55.2 g and 41.2 g, respectively) and 7 m/s (76.1 g and 73.4 g, respectively), and medium neck stiffness at 5 m/s (55.4 g and 43.9 g, respectively) and 7 m/s (94.4 g and 69.5 g, respectively). These findings show that non-direct impacts produce complex scenarios that are further influenced by the interaction of neck stiffness and velocity. This research provides an understanding of the fundamentals of body-first impacts. This basis supports an understanding of the implications of body-first head impacts, making it possible to better distinguish trauma based on events and to adapt protocols, evaluations, technologies, and equipment accordingly.
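The peak resultant kinematics compared above are conventionally computed as the maximum vector magnitude of the triaxial headform signals. A minimal sketch, with hypothetical sample values rather than the study's data:

```python
import numpy as np

def peak_resultants(lin_xyz, rot_xyz):
    """Peak resultant linear (g) and rotational (rad/s^2) acceleration
    from triaxial time histories (rows = samples, columns = x, y, z)."""
    lin_peak = float(np.linalg.norm(lin_xyz, axis=1).max())
    rot_peak = float(np.linalg.norm(rot_xyz, axis=1).max())
    return lin_peak, rot_peak

# Hypothetical traces: a small shoulder-contact peak followed by a larger
# head-contact peak, as described for the non-direct reconstructions
lin = np.array([[10.0, 0.0, 0.0], [30.0, 40.0, 0.0], [5.0, 0.0, 0.0]])
rot = np.array([[500.0, 0.0, 0.0], [1500.0, 2000.0, 0.0], [100.0, 0.0, 0.0]])
peaks = peak_resultants(lin, rot)  # (50.0, 2500.0)
```

The same reduction applied separately to the shoulder-contact and head-contact phases yields the two acceleration responses compared in the results.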

Keywords: body-first, concussion, direct, hockey, kinematics

Procedia PDF Downloads 4
464 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics and scientific interests. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various perspectives, and different methods have been used for its estimation, with several empirical equations proposed by experts. Though the results of these methods differ considerably from each other and from experimental observations, because sediment measurements have inherent limitations, these equations can still be used in estimating sediment load. In the present study, two black box models, namely an SRC (Sediment Rating Curve) and an ANN (Artificial Neural Network), are used in the simulation of the suspended sediment load. The study is carried out for the Seonath sub-basin. The Seonath is the biggest tributary of the Mahanadi River, and it carries a vast amount of sediment. The data are collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department). These data include the discharge, sediment concentration and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. A sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the application of these data in the ANN, they are normalised first and then fed in various combinations to yield the sediment load.
RMSE (root mean square error) and R² (coefficient of determination) between the observed load and the estimated load are used as evaluating criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black box models, they don’t carry an exact representation of the factors which cause sedimentation. Hence, the model which gives the lowest RMSE and highest R² is the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed forward back propagation, cascade forward back propagation and neural network fitting are 0.043425, 0.00679781, 0.0050089 and 0.0043727, respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968 and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not tolerable in the estimation of sediment load, and hence this model can’t be crowned the best model in this study. Cascade forward back propagation produces results very close to those of the neural network fitting model, and hence it is the best model based on the present study.
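As a sketch of the SRC baseline and the two evaluation criteria, a rating curve is typically fitted as a power law in log-log space and scored with RMSE and R². The data values below are hypothetical, not the Jondhra station records:

```python
import numpy as np

def fit_rating_curve(Q, Cs):
    """Fit the power-law sediment rating curve Cs = a * Q**b by
    least squares in log-log space (the usual SRC approach)."""
    b, log_a = np.polyfit(np.log(Q), np.log(Cs), 1)
    return np.exp(log_a), b

def rmse(obs, est):
    """Root mean square error between observed and estimated values."""
    return float(np.sqrt(np.mean((obs - est) ** 2)))

def r2(obs, est):
    """Coefficient of determination; 1 means a perfect fit."""
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical discharge (m^3/s) and sediment concentration data that
# follow an exact power law, so the fit recovers a = 0.1, b = 0.9
Q = np.array([10.0, 50.0, 120.0, 300.0, 800.0])
Cs = 0.1 * Q ** 0.9
a, b = fit_rating_curve(Q, Cs)
est = a * Q ** b
```

Multiplying the estimated concentration by discharge (with the appropriate unit conversion) gives the sediment load that the study compares against the ANN outputs.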

Keywords: artificial neural network, root mean squared error, sediment, sediment rating curve

Procedia PDF Downloads 325
463 Methodology for Risk Assessment of Nitrosamine Drug Substance Related Impurities in Glipizide Antidiabetic Formulations

Authors: Ravisinh Solanki, Ravi Patel, Chhaganbhai Patel

Abstract:

Purpose: The purpose of this study is to develop a methodology for the risk assessment and evaluation of nitrosamine impurities in Glipizide antidiabetic formulations. Nitroso compounds, including nitrosamines, have emerged as significant concerns in drug products, as highlighted by the ICH M7 guidelines. This study aims to identify known and potential sources of nitrosamine impurities that may contaminate Glipizide formulations and assess their presence. By determining observed or predicted levels of these impurities and comparing them with regulatory guidance, this research will contribute to ensuring the safety and quality of combination antidiabetic drug products on the market. Factors contributing to the presence of genotoxic nitrosamine contaminants in glipizide medications, such as secondary and tertiary amines and nitroso-group-complex-forming molecules, will be investigated. Additionally, conditions necessary for nitrosamine formation, including the presence of nitrosating agents and acidic environments, will be examined to enhance understanding and mitigation strategies. Method: The methodology for the study involves the implementation of the N-Nitroso Acid Precursor (NAP) test, as recommended by the WHO in 1978 and detailed in the 1980 International Agency for Research on Cancer monograph. Individual glass vials containing quantities of Glipizide equivalent to 10 mM are prepared. These compounds are dissolved in an acidic environment and supplemented with 40 mM NaNO2. The resulting solutions are maintained at a temperature of 37°C for a duration of 4 hours. For the analysis of the samples, an HPLC method is employed for fit-for-purpose separation. LC resolution is achieved using a step gradient on an Agilent Eclipse Plus C18 column (4.6 × 100 mm, 3.5 µm). Mobile phases A and B consist of 0.1% v/v formic acid in water and acetonitrile, respectively, following a gradient mode program.
The flow rate is set at 0.6 mL/min, and the column compartment temperature is maintained at 35°C. Detection is performed using a PDA detector within the wavelength range of 190-400 nm. To determine the exact mass of the formed nitrosamine drug substance related impurities (NDSRIs), the HPLC method is transferred to LC-TQ-MS/MS with the same mobile phase composition and gradient program. The injection volume is set at 5 µL, and MS analysis is conducted in Electrospray Ionization (ESI) mode within the mass range of 100−1000 Daltons. Results: The NAP test samples were prepared according to the protocol and analyzed using HPLC and LC-TQ-MS/MS to identify possible NDSRIs generated in different formulations of glipizide. It was found that the NAP test generated various NDSRIs, a finding which has not been reported before and which reveals contamination pathways for Glipizide. These NDSRIs are categorised based on their predicted carcinogenic potency, and acceptable intakes in medicines are recommended. The analytical method was found to be specific and reproducible.

Keywords: NDSRI, nitrosamine impurities, antidiabetic, glipizide, LC-MS/MS

Procedia PDF Downloads 37
462 Generic Early Warning Signals for Program Student Withdrawals: A Complexity Perspective Based on Critical Transitions and Fractals

Authors: Sami Houry

Abstract:

Complex systems exhibit universal characteristics as they near a tipping point. Among them are common generic early warning signals which precede critical transitions. These signals include: critical slowing down in which the rate of recovery from perturbations decreases over time; an increase in the variance of the state variable; an increase in the skewness of the state variable; an increase in the autocorrelations of the state variable; flickering between different states; and an increase in spatial correlations over time. The presence of the signals has management implications, as the identification of the signals near the tipping point could allow management to identify intervention points. Despite the applications of the generic early warning signals in various scientific fields, such as fisheries, ecology and finance, a review of literature did not identify any applications that address the program student withdrawal problem at the undergraduate distance universities. This area could benefit from the application of generic early warning signals as the program withdrawal rate amongst distance students is higher than the program withdrawal rate at face-to-face conventional universities. This research specifically assessed the generic early warning signals through an intensive case study of undergraduate program student withdrawal at a Canadian distance university. The university is non-cohort based due to its system of continuous course enrollment where students can enroll in a course at the beginning of every month. The assessment of the signals was achieved through the comparison of the incidences of generic early warning signals among students who withdrew or simply became inactive in their undergraduate program of study, the true positives, to the incidences of the generic early warning signals among graduates, the false positives. This was achieved through significance testing. 
Research findings showed support for the signal pertaining to the rise in flickering, which is represented by the increase in the student’s non-pass rates prior to withdrawing from a program; moderate support for the signal of critical slowing down, as reflected in the increase in the time a student spends in a course; and moderate support for the signals of increased autocorrelation and increased variance in the grade variable. The findings did not support the signal of increased skewness of the grade variable. The research also proposes a new signal based on the fractal-like characteristic of student behavior: it sought to extend knowledge by investigating whether the emergence of a program withdrawal status is self-similar or fractal-like at multiple levels of observation, specifically the program level and the course level. In other words, whether the act of withdrawal at the program level is also present at the course level. The findings moderately supported self-similarity as a potential signal. Overall, the assessment suggests that the signals, with the exception of the increase in skewness, could be utilized as a predictive management tool, and that the fractal-like characteristic of withdrawal could serve as an additional signal in addressing the student program withdrawal problem.
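The variance, skewness and autocorrelation signals are typically computed over a rolling window of the state variable (here, grades). A minimal sketch with an illustrative window length and a synthetic grade series, not the study's data:

```python
import numpy as np

def rolling_ews(x, win=10):
    """Rolling-window variance, skewness and lag-1 autocorrelation,
    three of the generic early warning signals assessed in the study."""
    var, skew, ac1 = [], [], []
    for i in range(len(x) - win + 1):
        w = np.asarray(x[i:i + win], dtype=float)
        m, s = w.mean(), w.std()
        d = w - m
        var.append(s ** 2)
        # standardized third moment; zero for a symmetric window
        skew.append(float(np.mean((d / s) ** 3)) if s > 0 else 0.0)
        # lag-1 autocorrelation of the detrended window
        ac1.append(float(d[:-1] @ d[1:] / (d @ d)) if s > 0 else 0.0)
    return np.array(var), np.array(skew), np.array(ac1)

# Hypothetical grade series: stable marks, then increasingly erratic ones
# (flickering between passing and failing before withdrawal)
grades = [70, 71] * 5 + [50, 90] * 5
var, skew, ac1 = rolling_ews(grades, win=10)
```

A rise in the variance series toward the end of the record is the kind of pattern the significance tests in the study would flag as an early warning signal.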

Keywords: critical transitions, fractals, generic early warning signals, program student withdrawal

Procedia PDF Downloads 185
461 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations with non-linearity in data; therefore, accurate estimation is difficult in the natural environment. Artificial Neural Networks (ANNs) have found extensive acceptance in addressing the modeling of the complex real world in non-linear environments, as ANNs have more general and flexible functional forms than traditional statistical methods and can deal with such non-linearity effectively. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for predicting real-time wheat chlorophyll estimation. Cloud-free scenes of LANDSAT 8 were acquired (Feb-March 2016-17) at the same time as the ground-truthing campaign for chlorophyll estimation was performed using a SPAD-502. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v.2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used, and the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results.
Of the data, 61.7% were used for training the MLP, 28.3% for validation, and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used; the ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. The MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination (R² = 0.93 and 0.90, respectively). The results suggested that the use of high spatial resolution satellite imagery for the retrieval of crop chlorophyll content by using an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
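The normalized-difference indices named above follow standard definitions (for Landsat 8 OLI, band 5 is NIR, band 4 red, band 3 green). A minimal sketch with hypothetical reflectance values; the CARI-family indices follow analogously from their published formulas:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: (NIR - Green) / (NIR + Green)."""
    return (nir - green) / (nir + green)

# Hypothetical surface reflectances for a healthy wheat pixel
# (Landsat 8: band 5 = NIR, band 4 = red, band 3 = green)
nir, red, green = 0.45, 0.08, 0.10
print(round(ndvi(nir, red), 3), round(gndvi(nir, green), 3))  # 0.698 0.636
```

In the workflow described above, index values like these (computed per pixel) form the inputs to the MLP, with SPAD-502 readings as the training targets.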

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 147
460 Literacy Practices in Immigrant Detention Centers: A Conceptual Exploration of Access, Resistance, and Connection

Authors: Mikel W. Cole, Stephanie M. Madison, Adam Henze

Abstract:

Since 2004, the U.S. immigrant detention system has imprisoned more than five million people. President John F. Kennedy famously dubbed this country a “Nation of Immigrants.” As with many of the nation’s imagined ideals, the historical record shows that its practices have never lived up to the tenets championed as defining qualities. The United Nations High Commission on Refugees argues that the educational needs of people in carceral spaces, especially those in immigrant detention centers, are urgent and supported by human rights guarantees. However, there is a genuine dearth of literacy research in immigrant detention centers, compounded by a general lack of access to these spaces. Denying access to literacy education in detention centers is one way the history of xenophobic immigration policy persists. In this conceptual exploration, first-hand accounts from detained individuals, their families, and the organizations that work with them have been shared with the authors. In this paper, the authors draw on experiences, reflections, and observations from serving as volunteers to develop a conceptual framework for the ways in which literacy practices are enacted in detention centers. Literacy is an essential tool for reaching those held in immigrant detention centers and a critical tool for those being detained to access legal and other services. One of the most striking things about the detention center is learning how to behave; gaining access for a visit is neither intuitive nor straightforward. The men experiencing detention are also at a disadvantage: the lack of access to their own documents is a profound barrier to navigating the complex immigration process. Literacy is much more than a skill for gathering knowledge or accessing carceral spaces; literacy is fundamentally a source of personal empowerment.
Frequently men find a way to reclaim their sense of dignity through work on their own terms by exchanging their literacy services for products or credits at the commissary. They write cards and letters for fellow detainees, read mail, and manage the exchange of information between the men and their families. In return, the men who have jobs trade items from the commissary or transfer money to the accounts of the men doing the reading, writing, and drawing. Literacy serves as a form of resistance by providing an outlet for productive work. At its core, literacy is the exchange of ideas between an author and a reader and is a primary source of human connection for individuals in carceral spaces. Father’s Day and Christmas are particularly difficult at detention centers. Men weep when speaking about their children and the overwhelming hopelessness they feel by being separated from them. Yet card-writing campaigns have provided these men with words of encouragement as thousands of hand-written cards make their way to the detention center. There are undoubtedly more literacies being practiced in the immigrant detention center where we work and at other detention centers across the country, and these categories are early conceptions with which we are still wrestling.

Keywords: detention centers, education, immigration, literacy

Procedia PDF Downloads 128
459 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high performance PEC devices. Therefore, it is very important to understand how the oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or micro-crystals can have two possible effects on the PEC performance. Firstly, an electron-hole pair produced at the interface of photoelectrode and electrolyte can recombine at the defect centers under illumination, thereby reducing the PEC performance. On the other hand, the defects could lead to higher light absorption in the longer wavelength region and may act as energy centers for the water splitting reaction, which can improve the PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) and local density approximation (LDA) calculations, further studies are required to correlate the structures of ZnO with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not been clearly explored yet, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures that exhibit significant PEC performance.
Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods and triangles, investigated their pH-dependent growth mechanism, and correlated the PEC performances with them. Especially, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the lattice of ZnO, which suggests that the oxygen vacancies are created by the redox reaction between GO and ZnO, in which the surface oxygen is extracted from the surface of ZnO by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of specific structural shapes and oxygen vacancies via dislocation, and its impact on PEC performance, are explored. In water splitting performance, the maximum photocurrent density of GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCEs reported so far for a GO-ZnO triangular-shaped photocatalyst.
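The reported IPCE relates photocurrent density, wavelength and incident light power through a standard formula. A minimal sketch; the incident power density used in the study is not given in the abstract, so the example values are hypothetical:

```python
def ipce_percent(j_ma_cm2, wavelength_nm, power_mw_cm2):
    """IPCE (%) = 1240 * J / (lambda * P) * 100, with the photocurrent
    density J in mA/cm^2, the wavelength in nm and the incident light
    power density P in mW/cm^2 (1240 nm*eV is approximately hc/e)."""
    return 1240.0 * j_ma_cm2 / (wavelength_nm * power_mw_cm2) * 100.0

# Round hypothetical numbers: 1 mA/cm^2 at 620 nm under 100 mW/cm^2
print(ipce_percent(1.0, 620.0, 100.0))  # prints 2.0
```

Plugging in the reported photocurrent density and excitation wavelength, together with the measured incident power, reproduces the quoted efficiency figure.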

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 296
458 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland

Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski

Abstract:

Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometer-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as this issue was never a major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area because it is representative of many lowland dam reservoirs and because a large database of the ecological, hydrological, and morphological parameters of the lake is available. Method: 3D two-phase and one-phase CFD models were analysed to determine the hydrodynamics of the Sulejow Reservoir. Development of a 3D two-phase CFD model of the flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. Compared with the two-phase model, a one-phase CFD model excludes only the dynamics of waves from the simulations, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the one-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of the flow of one phase (water) only. As the wind affects the flow velocity, to take its effect on the hydrodynamics into account in the one-phase CFD model, the plate must move with a speed and direction equal to those of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying water layers, and to apply this value in the one-phase CFD model, a 2D two-phase model was elaborated. Result: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA).
Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented. The results indicate that the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake; recirculating zones, up to half a kilometer in size, may increase water retention time in this region. The simulations also confirm the pronounced effect of the wind on the development of water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algal blooms. Conclusion: The resulting model is accurate, and the methodology developed in the frame of this work can be applied to storage reservoirs of all configurations, characteristics, and hydrodynamic conditions. Large recirculating zones in the lake, which increase water retention time and might affect the accumulation of nutrients, were detected. An accurate CFD model of the hydrodynamics of a large water body could help in forecasting water quality, especially in terms of eutrophication, and in the water management of large water bodies.

Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics

Procedia PDF Downloads 401
457 Case Report: Massive Deep Venous Thrombosis in a Young Female: A Rare and Fatal Presentation of May-Thurner Syndrome

Authors: Mahmoud Eldeeb, Yousri Mohamed

Abstract:

Background: May-Thurner Syndrome (MTS) is a rare vascular condition caused by the compression of the left common iliac vein by the overlying right common iliac artery, leading to venous stasis and an increased risk of deep vein thrombosis (DVT). While MTS typically presents in young adults, its diagnosis is often delayed due to its nonspecific presentation, which can lead to catastrophic complications like massive pulmonary embolism (PE). Early recognition and intervention are paramount to prevent fatal outcomes. Objectives: Highlight the importance of early recognition and management of critically ill patients presenting with life- and limb-threatening conditions. Raise awareness of May-Thurner Syndrome as a rare but significant cause of extensive DVT in young adults. Emphasize the necessity of a multidisciplinary approach to managing complex vascular emergencies. Methodology: A 21-year-old female presented with a 7-day history of progressive left leg swelling, pain, and skin discoloration following immobilization due to gastroenteritis. Clinical suspicion for massive DVT and compartment syndrome prompted immediate initiation of a heparin bolus and referrals to vascular and orthopedic surgery teams. Bedside Doppler ultrasound confirmed extensive DVT, and subsequent CT venography revealed thrombi extending to the inferior vena cava, consistent with MTS. Despite anticoagulation therapy, angioplasty and stenting were required to restore venous patency. Tragically, the patient experienced a massive PE during the procedure, requiring cardiopulmonary resuscitation (CPR) and transfer to a tertiary center for cardiothoracic intervention. Results: The case highlights the aggressive and life-threatening progression of MTS. The patient’s presentation was characterized by massive DVT with severe pain and discoloration, rapidly culminating in a PE during intervention. The combination of bedside imaging and CT venography facilitated an accurate diagnosis. 
Despite timely management, the patient’s course underscores the high mortality risk associated with MTS-related thromboembolism. Conclusion: May-Thurner Syndrome, though rare, can lead to devastating complications in young adults if not promptly recognized and treated. This case emphasizes the need for a high index of suspicion in patients presenting with unexplained extensive DVT, especially in the context of limited mobility or other precipitating factors. Multidisciplinary collaboration, including vascular imaging, anticoagulation, and interventional procedures, is critical to optimize outcomes. Urgent recognition and treatment of MTS are vital to prevent progression to massive PE and death.

Keywords: may-thurner syndrome, deep venous thrombosis, pulmonary embolism, vascular emergency, iliac vein compression syndrome

Procedia PDF Downloads 11
456 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the grid, and their fluctuation and randomness are very likely to affect its stability and safety. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with this problem. Compared with traditional equipment such as generators, the new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster, but they are too expensive for wide use. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to bring both kinds of advantages into play, improving both control ability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the grid. With generators, FSC (Fixed Series Compensation), and TCSC, the coordination between a two-layer regional power grid and its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusions are verified by simulation. The analysis shows that the interface power flow can be controlled by the generators, and the power flow of a specific line between the two-layer regions can be adjusted by FSC and TCSC.
The smaller the interface power flow adjusted by the generators, the larger the control margin of the TCSC; on the other hand, the total consumption of the generators is then much higher. Secondly, the coordination of different time scales is studied to trade off the total consumption of the generators against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination between two-layer ultra-short-term correction and AGC (Automatic Generation Control) is studied with generators, FSC, and TCSC. The optimal control model is founded, a genetic algorithm is selected to solve the problem, and the conclusions are verified by simulation. Finally, the aforementioned method within the multi-space-time frame is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to a decrease in control cost and will provide a reference for subsequent studies in this field.
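The genetic-algorithm step can be illustrated with a generic sketch. The code below is not the authors' model: it minimizes a stand-in quadratic control-cost function over a small real-valued decision vector, using the selection/crossover/mutation/elitism loop such an optimizer would employ; the cost function, bounds, and GA parameters are all illustrative assumptions.

```python
import random

def toy_control_cost(x):
    """Stand-in for the (unknown) control-cost objective: distance to a target setpoint."""
    target = [1.0, -2.0, 0.5]
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def genetic_minimize(cost, dim=3, pop_size=40, generations=80,
                     lo=-5.0, hi=5.0, mut_rate=0.2, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    history = [cost(best)]            # best cost per generation
    for _ in range(generations):
        def pick():                   # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if cost(a) < cost(b) else b
        children = [best[:]]          # elitism: the incumbent always survives
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, dim)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(dim):                 # clipped Gaussian mutation
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.3)))
            children.append(child)
        pop = children
        cand = min(pop, key=cost)
        if cost(cand) < cost(best):
            best = cand
        history.append(cost(best))
    return best, history

best, history = genetic_minimize(toy_control_cost)
```

With elitism, the best cost is non-increasing across generations, which is the property that makes such a heuristic usable for control-cost minimization.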

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 267
455 Sustainable Living Where the Immaterial Matters

Authors: Maria Hadjisoteriou, Yiorgos Hadjichristou

Abstract:

This paper aims to explore, and to provoke a debate through the work of the design studio "Living Where the Immaterial Matters" of the architecture department of the University of Nicosia, on the role that "immaterial matter" can play in enhancing innovative sustainable architecture and in viewing cities as sustainable organisms that continually grow and alter. The blurring, juxtaposing binary of the immaterial and matter, as the theoretical backbone of the unit, is counterbalanced by the practicalities of the contested sites of Nicosia, the last divided capital, with its ambiguous green line, and of the ghost city of Famagusta on the island of Cyprus. Jonathan Hill argues that the 'immaterial is as important to architecture as the material', concluding that 'Immaterial-Material' weaves the two together, so that they are in conjunction, not opposition. This understanding of the relationship of the immaterial vs. the material sets the premises and the departure point of our argument, and speaks of new recipes for creating a hybrid public space that can lead to the unpredictability of a complex, interactive, sustainable city. We prioritized human experience. We distinguish the notions of space and place, referring to Heidegger's 'Building Dwelling Thinking': 'a distinction between space and place, where spaces gain authority not from "space" appreciated mathematically but "place" appreciated through human experience'. Following the above, architecture and the city are seen as one organism. The notions of boundaries, porous borders, fluidity, mobility, and spaces of flows are the lenses of the unit's methodological investigation, leading to the notion of a new hybrid urban environment whose main constituent elements are in a relationship of flux. The material and immaterial flows of the town are seen as interrelated and interwoven with the material buildings and their immaterial contents, yielding new sustainable human-built environments.
The above premises consequently led to choices of controversial sites. An indisputably provocative site was the ghost town of Famagusta, where time froze back in 1974. Inspired by the fact that nature took over the literally dormant, decaying city, a sustainable rebirth was seen as an opportunity in which both nature and the built environment, the material and the immaterial, are interwoven in a newly emergent urban environment. Similarly, we saw the dividing 'green line' of Nicosia completely failing to prevent the trespassing of the images, sounds and whispers, smells and symbols that define the two prevailing cultures, and instead becoming a porous creative entity that tends to reunite rather than separate, generating sustainable cultures and built environments. The authors would like to contribute to the debate by introducing a question about a new recipe for cooking the built environment. Can we talk about a new 'urban recipe', 'cooking architecture and city', to deliver an ever-changing urban sustainable organism whose identity will mainly depend on the interrelationship of its immaterial and material constituents?

Keywords: blurring zones, porous borders, spaces of flow, urban recipe

Procedia PDF Downloads 421
454 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that optimizes system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located in several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of the stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load on the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin.
The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the inner maximum of the workload is taken across all workstations in the center and the outer minimum is taken across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
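The first-echelon formulation is a capacitated bin-packing variant. The paper's own heuristics are not reproduced here, but a classical first-fit-decreasing (FFD) sketch illustrates the kind of local heuristic such problems admit; the item sizes and capacity below are illustrative, not data from the study.

```python
def first_fit_decreasing(workloads, capacity):
    """Pack workloads into the fewest stations of a given capacity.

    Classical FFD heuristic: sort items in decreasing order, place each into
    the first open bin with room, opening a new bin only when none fits.
    """
    bins = []  # each bin is a list of item sizes
    for item in sorted(workloads, reverse=True):
        if item > capacity:
            raise ValueError(f"workload {item} exceeds station capacity {capacity}")
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits: open a new one
    return bins

# Illustrative workloads and station capacity
stations = first_fit_decreasing([7, 5, 4, 3, 2, 2], capacity=10)
print(len(stations))  # → 3
```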

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 228
453 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with days off, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected averages as possible. Deviations from the expected averages are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set-partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with a negative reduced cost can be found, or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member.
A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in this graph, solved with a labeling algorithm. Since the penalization is quadratic, a method for handling the resulting non-additive shortest path problem with a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows relaxing some soft rules, which can improve the coverage rate; 3) multi-threading is used to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column-generation-based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
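The standard additive resource-constrained shortest path with label domination can be sketched as follows; the paper's contribution is an extension of this scheme to the quadratic (non-additive) penalty, which is not reproduced here. The graph, arc costs, and the single resource below are illustrative.

```python
from collections import defaultdict

def rcsp_labeling(arcs, source, sink, resource_limit):
    """Resource-constrained shortest path on a DAG via label setting.

    Each arc carries (cost, resource). A label (cost, resource) at a node is
    dominated if another label there is no worse in both components; dominated
    labels cannot lead to a better feasible path and are pruned.
    """
    out = defaultdict(list)
    for u, v, cost, res in arcs:
        out[u].append((v, cost, res))

    labels = defaultdict(list)
    labels[source] = [(0.0, 0.0)]
    # Process nodes in topological order (assumption: numeric order is topological)
    for u in sorted(set([source, sink]) | set(out)):
        for (cost, res) in labels[u]:
            for v, acost, ares in out[u]:
                nc, nr = cost + acost, res + ares
                if nr > resource_limit:
                    continue  # infeasible extension
                if any(c2 <= nc and r2 <= nr for c2, r2 in labels[v]):
                    continue  # new label dominated at v
                labels[v] = [(c2, r2) for c2, r2 in labels[v]
                             if not (nc <= c2 and nr <= r2)]
                labels[v].append((nc, nr))
    feasible = labels[sink]
    return min(c for c, _ in feasible) if feasible else None

arcs = [(0, 1, 1.0, 2.0), (0, 2, 2.0, 1.0),
        (1, 3, 1.0, 2.0), (2, 3, 1.0, 1.0)]
print(rcsp_labeling(arcs, source=0, sink=3, resource_limit=3.0))  # → 3.0
```

Note how the cheaper path 0-1-3 (cost 2) is rejected because it consumes 4 units of resource; with a quadratic penalty, partial cost is no longer additive along the path, which is why the authors need a modified domination condition.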

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 147
452 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular function, and it allows a group of cells to survive as a population. Through this interaction, cells work in a coordinated and collaborative way that facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a wide spectrum from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. The study of cellular and molecular processes in cancer has likewise found valuable support in different simulation tools that, covering the spectrum mentioned above, have allowed in silico experimentation with this phenomenon at the cellular and molecular levels. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions for modeling only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how inhibition of this signaling pathway prevents the transformation of the cells that surround a cancerous cell.
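Gillespie's algorithm, the second ingredient named above, can be sketched in a few lines. The example below simulates a generic reversible conversion A ⇌ B, not one of Cellulat's signaling reactions; the species and rate constants are illustrative.

```python
import random

def gillespie(a0, b0, k_fwd, k_rev, t_end, seed=42):
    """Exact stochastic simulation of A <-> B (Gillespie's direct method).

    At each step: compute the reaction propensities, draw an exponentially
    distributed waiting time, then fire one reaction chosen with probability
    proportional to its propensity.
    """
    rng = random.Random(seed)
    t, a, b = 0.0, a0, b0
    trajectory = [(t, a, b)]
    while t < t_end:
        p_fwd = k_fwd * a          # propensity of A -> B
        p_rev = k_rev * b          # propensity of B -> A
        total = p_fwd + p_rev
        if total == 0.0:
            break                  # no reaction can fire
        t += rng.expovariate(total)
        if rng.random() * total < p_fwd:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        trajectory.append((t, a, b))
    return trajectory

traj = gillespie(a0=100, b0=0, k_fwd=1.0, k_rev=0.5, t_end=5.0)
```

Because each step fires exactly one reaction, the total molecule count A+B is conserved along the whole trajectory, a useful sanity check in any SSA implementation.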

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 251
451 Safety Profile of Human Papillomavirus Vaccines: A Post-Licensure Analysis of the Vaccine Adverse Events Reporting System, 2007-2017

Authors: Giulia Bonaldo, Alberto Vaccheri, Ottavio D'Annibali, Domenico Motola

Abstract:

Human papillomavirus (HPV) has been shown to be the cause of different types of carcinomas, first of all cervical intraepithelial neoplasia. From the early 1980s to today, thanks first to preventive screening campaigns (Pap test) and subsequently to the introduction of HPV vaccines on the market, the number of new cases of cervical cancer has decreased significantly. Three HPV vaccines are currently approved: Cervarix® (HPV2; virus types 16 and 18), Gardasil® (HPV4; 6, 11, 16, 18), and Gardasil 9® (HPV9; 6, 11, 16, 18, 31, 33, 45, 52, 58), all of which protect against the two high-risk HPV types (16, 18) that are mainly involved in cervical cancers. Although the remarkable effectiveness of these vaccines has been demonstrated, in recent years there have been many complaints about their risk-benefit profile due to adverse events following immunization (AEFI). The purpose of this study is to support the ongoing discussion on the safety profile of HPV vaccines with real-life data derived from spontaneous reports of suspected AEFIs collected in the Vaccine Adverse Event Reporting System (VAERS). VAERS is a freely available national vaccine safety surveillance database of AEFI, co-administered by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). We collected all reports between January 2007 and December 2017 related to HPV vaccines, with a brand name (HPV2, HPV4, HPV9) or without (HPVX). A disproportionality analysis using the reporting odds ratio (ROR) with 95% confidence interval and p ≤ 0.05 was performed. Over the 10-year period, 54,889 reports of AEFI related to HPV vaccines, corresponding to 224,863 vaccine-event pairs, were retrieved from VAERS. The highest number of reports was related to Gardasil (n = 42,244), followed by Gardasil 9 (7,212) and Cervarix (3,904). The brand name of the HPV vaccine was not reported in 1,529 cases.
The two most frequently reported and statistically significant events for each vaccine were: dizziness (n = 5,053; ROR = 1.28, 95% CI 1.24-1.31) and syncope (4,808; ROR = 1.21, 1.17-1.25) for Gardasil; injection site pain (305; ROR = 1.40, 1.25-1.57) and injection site erythema (297; ROR = 1.88, 1.67-2.10) for Gardasil 9; and headache (672; ROR = 1.14, 1.06-1.23) and loss of consciousness (528; ROR = 1.71, 1.57-1.87) for Cervarix. In total, we collected 406 reports of death and 2,461 cases of permanent disability over the ten-year period. Events consisting of incorrect vaccine storage or incorrect administration were not considered. The AEFI analysis showed that the most frequently reported events are non-serious and are listed in the corresponding SmPCs. In addition, potential safety signals arose for less frequent and more severe AEFIs that deserve further investigation. This has already happened with the European Medicines Agency (EMA) referral for the adverse events POTS (postural orthostatic tachycardia syndrome) and CRPS (complex regional pain syndrome) associated with anti-papillomavirus vaccines.
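The disproportionality measure used above is straightforward to compute from the standard 2×2 contingency table of spontaneous reports. A minimal sketch with illustrative counts, not the study's data:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with a 95% confidence interval from a 2x2 table.

    a: reports of the event of interest with the vaccine of interest
    b: reports of all other events with the vaccine of interest
    c: reports of the event of interest with all other vaccines
    d: reports of all other events with all other vaccines
    """
    ror = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(ROR)
    lo = ror * math.exp(-1.96 * se_log)
    hi = ror * math.exp(1.96 * se_log)
    return ror, lo, hi

# Illustrative counts; a signal is conventionally flagged when the lower bound > 1
ror, lo, hi = reporting_odds_ratio(a=10, b=20, c=30, d=240)
print(round(ror, 2), lo > 1.0)  # → 4.0 True
```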

Keywords: adverse drug reactions, pharmacovigilance, safety, vaccines

Procedia PDF Downloads 165
450 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling

Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva

Abstract:

Relevance: Energy saving has been elevated to public policy in almost all developed countries. One avenue to energy efficiency is improving and tightening design standards. In line with the state standards, high demands are made on the thermal protection of buildings. The constructive arrangement of layers should ensure normal operation, in which the humidity of the construction materials does not exceed a certain level. Elevated moisture levels in the walls can be regarded as a defective condition, as moisture significantly reduces the physical, mechanical, and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction and of prediction of the behavior of structures during their service in the real world leads to increased heat loss and premature aging of structures. Method: To solve this problem, the method of mathematical modeling of heat and mass transfer in materials is widely used. The mathematical model of heat and mass transfer takes into account the coupled equations of the interconnected layers [1]. In winter, the thermal and hydraulic conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture in the material. In this case, the experimental determination of the freezing or thawing coefficient of the material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions:
The development of methods for solving the inverse problem of mathematical modeling allows us to answer questions closely related to the rational design of building envelopes: Where is the condensation zone in the body of the multilayer envelope? How and where should insulation rationally be placed? What constructive measures are necessary to provide for the removal of moisture from the structure? What temperature and humidity conditions are required for the normal operation of the enclosing structure of the premises? What is the longevity of the structure in terms of the frost resistance of its component materials? Tasks: The proposed mathematical model solves the following problems: assessing the thermophysical condition of designed structures under different operating conditions and selecting appropriate material layers; calculating the temperature field in structurally complex multilayer structures; determining, from temperature and moisture measurements at characteristic points, the thermal characteristics of the materials constituting the surveyed construction; significantly reducing laboratory testing time, eliminating the need for a climatic chamber and for expensive instrumented experiments; and simulating real-life situations that arise in multilayer enclosing structures in association with the freezing, thawing, drying, and cooling of any layer of the building material.
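The flavor of such an inverse problem can be conveyed with a one-dimensional toy: a forward finite-difference heat-conduction model generates a "measured" temperature at a characteristic point, and the thermal diffusivity is then recovered by minimizing the misfit over a set of candidate values. Everything below (grid, boundary values, candidate diffusivities) is an illustrative simplification of the multilayer, nonlinear problem described in the abstract.

```python
def forward_temperature(alpha, nx=11, dx=0.1, dt=0.01, steps=50):
    """Explicit FTCS solution of u_t = alpha * u_xx on [0, 1].

    Fixed boundaries u(0)=1, u(1)=0, zero initial condition; returns the
    temperature at the midpoint (the "characteristic point" sensor).
    """
    u = [0.0] * nx
    u[0] = 1.0                     # hot boundary
    r = alpha * dt / dx ** 2       # diffusion number, must stay <= 0.5 for stability
    for _ in range(steps):
        u = ([u[0]] +
             [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx - 1)] +
             [u[-1]])
    return u[nx // 2]

# "Measurement" generated with a known diffusivity, then recovered by
# brute-force misfit minimization over candidate values (the inverse step).
alpha_true = 0.2
measured = forward_temperature(alpha_true)
candidates = [0.1, 0.15, 0.2, 0.25, 0.3]
alpha_est = min(candidates, key=lambda a: (forward_temperature(a) - measured) ** 2)
print(alpha_est)  # → 0.2
```

In the real problem, the forward model is a coupled nonlinear heat-and-moisture system and the fit is done over temperature- and moisture-dependent coefficients, but the measure-then-minimize structure is the same.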

Keywords: energy saving, inverse problem, heat transfer, multilayer walling

Procedia PDF Downloads 399
449 Detection, Analysis and Determination of the Origin of Copy Number Variants (CNVs) in Intellectual Disability/Developmental Delay (ID/DD) Patients and Autistic Spectrum Disorders (ASD) Patients by Molecular and Cytogenetic Methods

Authors: Pavlina Capkova, Josef Srovnal, Vera Becvarova, Marie Trkova, Zuzana Capkova, Andrea Stefekova, Vaclava Curtisova, Alena Santava, Sarka Vejvalkova, Katerina Adamova, Radek Vodicka

Abstract:

ASDs are heterogeneous and complex developmental diseases with a significant genetic background. Recurrent CNVs are known to be a frequent cause of ASD. These CNVs can, however, have variable expressivity, which results in a spectrum of phenotypes from asymptomatic to ID/DD/ASD. ASD is associated with ID in ~75% of individuals. Various platforms are used to detect pathogenic mutations in the genomes of these patients. The present study focuses on determining the frequency of pathogenic mutations in a group of ASD patients and a group of ID/DD patients using various strategies, along with a comparison of their detection rates. The possible role of the origin of these mutations in the aetiology of ASD was assessed. The study included 35 individuals with ASD and 68 individuals with ID/DD (64 males and 39 females in total), who underwent rigorous genetic, neurological, and psychological examinations. Screening for pathogenic mutations involved karyotyping, screening for FMR1 mutations and for metabolic disorders, a targeted MLPA test with the probe mixes Telomeres 3 and 5, Microdeletion 1 and 2, Autism 1, and MRX, and chromosomal microarray analysis (CMA) (Illumina or Affymetrix). Chromosomal aberrations were revealed by karyotyping in 7 individuals (1 in the ASD group). FMR1 mutations were discovered in 3 individuals (1 in the ASD group). The detection rate of pathogenic mutations in ASD patients with a normal karyotype was 15.15% by both MLPA and CMA. The frequencies of pathogenic mutations in ID/DD patients with a normal karyotype were 25.0% by MLPA and 35.0% by CMA. In ASD patients, CNVs inherited from asymptomatic parents were more abundant than de novo changes (11.43% vs. 5.71%), in contrast to the ID/DD group, where de novo mutations prevailed over inherited ones (26.47% vs. 16.18%). ASD patients shared their mutations with their fathers more frequently than patients from the ID/DD group did (8.57% vs. 1.47%).
Maternally inherited mutations predominated in the ID/DD group in comparison with the ASD group (14.7% vs. 2.86%). CNVs of unknown significance were found in 10 patients by CMA and in 3 patients by MLPA. Although the detection rate is highest with CMA, recurrent CNVs can easily be detected by MLPA. CMA proved to be more efficient in the ID/DD group, where a larger spectrum of rare pathogenic CNVs was revealed. This study found that maternally inherited, highly penetrant mutations and de novo mutations more often resulted in ID/DD without ASD. Paternally inherited mutations could, however, be a source of greater variability in the genomes of ASD patients and contribute to the polygenic character of the inheritance of ASD. As the number of subjects in each group was limited, a larger cohort is needed to confirm this conclusion. Inherited CNVs play a role in the aetiology of ASD, possibly in combination with additional genetic factors, i.e., mutations elsewhere in the genome. The identification of these interactions constitutes a challenge for the future. Supported by MH CZ – DRO (FNOl, 00098892), IGA UP LF_2016_010, TACR TE02000058 and NPU LO1304.

Keywords: autistic spectrum disorders, copy number variant, chromosomal microarray, intellectual disability, karyotyping, MLPA, multiplex ligation-dependent probe amplification

Procedia PDF Downloads 352
448 A 4-Month Low-Carb Nutrition Intervention Study Aimed at Demonstrating the Significance of Addressing Insulin Resistance in 2 Subjects with Type-2 Diabetes for Better Management

Authors: Shashikant Iyengar, Jasmeet Kaur, Anup Singh, Arun Kumar, Ira Sahay

Abstract:

Insulin resistance (IR) is a condition in which the body's cells become less responsive to insulin, leading to higher levels of both insulin and glucose in the blood. This condition is linked to metabolic syndromes, including diabetes. It is crucial to address IR promptly after diagnosis to prevent the long-term complications associated with high insulin and high blood glucose. This four-month case study highlights the importance of treating the underlying condition to manage diabetes effectively. Insulin is essential for regulating blood sugar levels by facilitating the uptake of glucose into cells for energy or storage. In individuals with IR, cells take up glucose from the blood less efficiently, resulting in elevated blood glucose levels. As a result of IR, beta cells produce more insulin to compensate for the body's inability to use insulin effectively. This leads to high insulin levels, a condition known as hyperinsulinemia, which further impairs glucose metabolism and can contribute to various chronic diseases. In addition to regulating blood glucose, insulin has anti-catabolic effects, preventing the breakdown of molecules in the body: it inhibits glycogen breakdown in the liver, gluconeogenesis, and lipolysis. If a person is insulin-sensitive, or metabolically healthy, an optimal level of insulin prevents fat cells from releasing fat and promotes the storage of glucose and fat in the body. Optimal insulin levels are thus crucial for maintaining energy balance and play a key role in metabolic processes. During the four-month study, researchers examined the impact of a low-carb dietary (LCD) intervention on two male individuals (A and B) with Type-2 diabetes. Although neither of these individuals was obese, both were slightly overweight and had abdominal fat deposits. 
Before the trial began, key markers such as fasting blood glucose (FBG), triglycerides (TG), high-density lipoprotein (HDL) cholesterol, and HbA1c were measured. These markers are essential in defining metabolic health; their individual values and variability are integral to assessing it. The ratio of TG to HDL is used as a surrogate marker for IR; it correlates strongly with the prevalence of metabolic syndrome and with IR itself. It is a convenient measure because it can be calculated from a standard lipid profile and does not require more complex tests. In this four-month trial, an improvement in insulin sensitivity was observed through the TG/HDL ratio, which in turn improved fasting blood glucose levels and HbA1c. For subject A, HbA1c dropped from 13 to 6.28, and for subject B, it dropped from 9.4 to 5.7. During the trial, neither subject was taking any diabetic medications. The significant improvements in their health markers, such as better glucose control, along with an increase in energy levels, demonstrate that incorporating LCD interventions can effectively manage diabetes.
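The TG/HDL surrogate described above is simple arithmetic on a standard lipid panel. A minimal sketch in Python follows; the lipid values are hypothetical (not taken from the study), and the 3.0 cutoff is a commonly cited heuristic for mg/dL units, an assumption rather than a figure from this abstract:

```python
def tg_hdl_ratio(tg_mg_dl: float, hdl_mg_dl: float) -> float:
    """TG/HDL ratio from a standard lipid profile (both values in mg/dL)."""
    if hdl_mg_dl <= 0:
        raise ValueError("HDL must be a positive value")
    return tg_mg_dl / hdl_mg_dl


def likely_insulin_resistant(tg_mg_dl: float, hdl_mg_dl: float,
                             cutoff: float = 3.0) -> bool:
    """Flag probable IR. The 3.0 cutoff is a commonly cited heuristic
    for mg/dL units, not a value reported in this study."""
    return tg_hdl_ratio(tg_mg_dl, hdl_mg_dl) > cutoff


# Hypothetical pre- and post-intervention lipid panels (illustrative only)
pre_ratio = tg_hdl_ratio(180.0, 38.0)   # above the heuristic cutoff
post_ratio = tg_hdl_ratio(95.0, 48.0)   # below the heuristic cutoff
```

Because the ratio needs only two numbers already present on a routine lipid report, it can be tracked at every follow-up visit without additional testing.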

Keywords: metabolic disorder, insulin resistance, type-2 diabetes, low-carb nutrition

Procedia PDF Downloads 51