Search results for: MCM (minichromosome maintenance) complex
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5550

480 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective intervention when identified and managed early. Multiple definitions for stratification of patients' risk for early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition for acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish conventional coagulation assay cut-offs for the identification of patients with acute traumatic coagulopathy. Prospective data of adult trauma patients (n = 100) were then collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic," and correlated with the prediction of acute coagulopathy of trauma score and the trauma-induced coagulopathy clinical score for identifying trauma coagulopathy and subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall prediction of acute coagulopathy of trauma score was 118.7±58.5 and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be more suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results.
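
To make the cut-off derivation concrete, the following is a minimal sketch of ROC-based threshold selection on synthetic data; the abstract does not state which optimality criterion was used, so Youden's J and all numbers here are assumptions.

```python
# A minimal sketch of ROC-based cut-off selection as described above, on
# synthetic data; the Youden's J criterion and all values are assumptions,
# not taken from the study.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# hypothetical INR values: 300 non-coagulopathic (0) and 190 coagulopathic (1) patients
inr = np.concatenate([rng.normal(1.05, 0.10, 300), rng.normal(1.40, 0.20, 190)])
label = np.concatenate([np.zeros(300), np.ones(190)])

fpr, tpr, thresholds = roc_curve(label, inr)
best = thresholds[np.argmax(tpr - fpr)]  # Youden's J: maximize sensitivity + specificity - 1
print(f"INR cut-off: {best:.2f}")
```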

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
479 The Different Effects of Mindfulness-Based Relapse Prevention Group Therapy on QEEG Measures in Various Severity Substance Use Disorder Involuntary Clients

Authors: Yu-Chi Liao, Nai-Wen Guo, Chun-Hung Lee, Yung-Chin Lu, Cheng-Hung Ko

Abstract:

Objective: The incidence of behavioral addictions, especially substance use disorders (SUDs), is gradually being taken seriously along with the various physical health problems they entail. Mindfulness-based relapse prevention (MBRP) has emerged in recent years as a treatment option for promoting long-term health behavior change. MBRP is a structured protocol that integrates formal meditation practices with the cognitive-behavioral approach of relapse prevention treatment by teaching participants not to engage in reappraisal or savoring techniques. However, considering SUDs as a complex brain disease, questionnaires and symptom evaluation are not sufficient to evaluate the effect of MBRP. Neurophysiological biomarkers such as the quantitative electroencephalogram (QEEG) may represent the curative effects more accurately. This study attempted to identify neurophysiological indicators of MBRP in involuntary clients with SUD of varying severity. Participants and Methods: Thirteen participants (all males) completed an 8-week mindfulness-based treatment provided by trained, licensed clinical psychologists. The behavioral data were from the Severity of Dependence Scale (SDS) and the Negative Mood Regulation Scale (NMR) before and after MBRP treatment. The QEEG data were recorded simultaneously with an executive attention task, the Comprehensive Nonverbal Attention Test (CNAT). Two-way repeated-measures (treatment × severity) ANOVA and independent t-tests were used for statistical analysis. Results: The thirteen participants were regrouped into high substance dependence (HS) and low substance dependence (LS) groups by the SDS cut-off. The HS group showed a higher SDS total score and a lower gamma wave in the Go/No-Go task of the CNAT at pretest. Both groups showed a main effect of a lower frontal theta/beta ratio (TBR) during the simple reaction time task of the CNAT. A main effect also showed that the delay errors of the CNAT were lower after MBRP. There was no other difference in the CNAT between groups. However, after MBRP, the HS group made greater progress than the LS group in improving SDS and NMR scores. As for the neurophysiological index, the frontal TBR of the HS group during the Go/No-Go task of the CNAT decreased relative to that of the LS group. In addition, the LS group's gamma wave showed a significant reduction in the Go/No-Go task of the CNAT. Conclusion: The QEEG data support that MBRP can restore the prefrontal function of involuntary addicts and lower their errors in executive attention tasks. However, the improvement from MBRP for addicts with high addiction severity is significantly greater than for those with low severity, in both QEEG indicators and negative emotion regulation. Future directions include investigating the reasons for differences in efficacy across different severities of addiction.
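
For readers unfamiliar with the QEEG index reported above, the following is a minimal sketch of how a frontal theta/beta ratio (TBR) can be computed from a raw EEG channel; the sampling rate, channel, and band edges are illustrative assumptions, not the study's recording parameters.

```python
# A minimal sketch of computing a frontal theta/beta ratio (TBR) from raw EEG,
# the QEEG index discussed above; channel, sampling rate, and band edges are
# illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 256                                # sampling rate in Hz (assumed)
eeg = np.random.randn(fs * 60)          # one minute of a frontal channel (e.g., Fz)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])  # integrate PSD over the band

theta = band_power(freqs, psd, 4, 8)    # theta: 4-8 Hz
beta = band_power(freqs, psd, 13, 30)   # beta: 13-30 Hz
print(f"frontal TBR = {theta / beta:.2f}")
```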

Keywords: mindfulness, involuntary clients, QEEG, emotion regulation

Procedia PDF Downloads 147
478 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also showcases how successful an organization is in holding on to its customers. It is well established that the lion’s share of profit typically comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract document, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will talk about the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will talk about the focus areas of enterprise automation and how automation testing can be leveraged to improve the overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum it up, this paper is written based on the author's real-world experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings which will help other teams implement time travel testing successfully.
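
As an illustration of the core mechanic, the sketch below shows one common way to "time travel" a renewal workflow inside an automated test by freezing the clock with Python's freezegun library; the Subscription class is a hypothetical stand-in for the system under test, not an artifact from the whitepaper.

```python
# A minimal sketch of time traveling a renewal workflow in an automated test,
# using the freezegun library to shift the clock; the Subscription class is a
# hypothetical stand-in for the system under test.
from datetime import date, timedelta
from freezegun import freeze_time

class Subscription:
    def __init__(self, start: date, term_days: int = 365):
        self.expiry = start + timedelta(days=term_days)

    def is_due_for_renewal(self, today: date, window_days: int = 30) -> bool:
        # renewal offers appear only inside the final window before expiry
        return self.expiry - timedelta(days=window_days) <= today <= self.expiry

def test_renewal_offer_appears_inside_window():
    sub = Subscription(start=date(2024, 1, 1))        # expires 2024-12-31
    with freeze_time("2024-12-15"):                   # travel forward into the window
        assert sub.is_due_for_renewal(date.today())
    with freeze_time("2024-06-01"):                   # mid-term: no renewal offer expected
        assert not sub.is_due_for_renewal(date.today())

test_renewal_offer_appears_inside_window()
print("renewal window behaves as expected under both travel dates")
```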

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 159
477 A Work-Individual-Family Inquiry on Mental Health and Family Responsibility of Dealers Employed in Macau Gaming Industry

Authors: Tak Mau Simon Chan

Abstract:

While there is growing reflection on the adverse impacts of the flourishing gaming industry on the physical health and job satisfaction of those who work in Macau casinos, there is also a critical void in our understanding of the mental health of croupiers and of how casino employment interacts with the family system. From a systemic approach, it would be most effective to examine the ‘dealer issues’ collectively and offer assistance to both the individual dealer and the family system of dealers. Therefore, with the use of a mixed-method study design, the levels of anxiety, depression and sleeping quality of a sample of 1124 dealers working in Macau casinos were measured in the present study, and 113 dealers were interviewed about the impacts of casino employment on their family life. This study presents some very important findings. First, the quantitative study indicates that gender is a significant predictor of depression and anxiety levels, whilst lower income predicts poorer sleep quality. The Pearson correlation coefficients show that as Zung Self-rating Anxiety Scale (ZSAS) scores increase, Zung Self-rating Depression Scale (ZSDS) and Pittsburgh Sleep Quality Index (PSQI) scores also increase. Higher income, therefore, might partly explain why mothers choose to work in the gaming industry even with the shift work involved and a stressful work environment. Second, the findings from the qualitative study show that aside from the positive impacts on family finances, the shift work and job stress to some degree negatively affect family responsibilities and relationships. The resultant family issues include missed family activities and reduced parental care and guidance, marital intimacy, and communication with family members. Despite mixed views on gender role differences, the respondents generally agree that female dealers have more family and child-minding responsibilities at home, and thus it is more difficult for them to balance work and family. Consequently, they may be more vulnerable to stress at work. Thirdly, there are interrelationships between work and family, which are based on a systemic inquiry that incorporates work, individual and family. Poor physical and psychological health due to shift work or a harmful work environment could affect not just work performance, but also life at home. Therefore, a few practice points about 1) work-family conflicts in Macau; 2) families-in-transition in Macau; and 3) gender and class sensitivity in Macau are provided for social workers and family practitioners, which will greatly benefit these families, especially those whose family members are working in the gaming industry in Macau. It is concluded that in addressing the cultural phenomenon of the “dealer’s complex” in Macau, a systemic approach is recommended that addresses both the personal psychological needs and the family issues of dealers.

Keywords: family, work stress, mental health, Macau, dealers, gaming industry

Procedia PDF Downloads 304
476 Parenting Interventions for Refugee Families: A Systematic Scoping Review

Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai

Abstract:

Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g., trauma, mental health concerns, separation, relocation, and poverty) that place them at an increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. There are very few studies in the literature that examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents will alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O’Malley’s descriptive analysis model for scoping reviews. Results: Seven studies met the criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting as a major concern, citing alienation and unwelcoming services, language barriers, and lack of familiarity with school and early years services. Services that focused on building the resilience of parents, provided parent education or services in the family’s native language, and offered families safe spaces to promote parent-child interactions were the most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks. Interventions were designed in a prescriptive manner, without direct participation by family members and without directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices, the caregiving environment, or child development outcomes, focusing primarily on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in the families’ native languages. To date, there are no studies that employ a participatory approach to program design to tailor content or accessibility, and few that employ parenting, developmental, behavioural, or environmental outcome measures.

Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families

Procedia PDF Downloads 161
475 Designing an Operational Control System for the Continuous Cycle of Industrial Technological Processes Using Fuzzy Logic

Authors: Teimuraz Manjapharashvili, Ketevani Manjaparashvili

Abstract:

Fuzzy logic is a modeling method for complex or ill-defined systems and is a relatively new mathematical approach. Its basis is to consider overlapping cases of parameter values and to define operations to manipulate these cases. Fuzzy logic can successfully support operational automatic control or advisory systems, and fuzzy logic techniques in various operational control technologies have grown rapidly in the last few years. Fuzzy logic is used in many areas of human technological activity. In recent years, it has proven its great potential, especially in the automation of industrial process control, where it allows a control design to be formed based on the experience of experts and the results of experiments. The engineering of chemical technological processes uses fuzzy logic in optimal management, and it is also used in process control, including the operational control of continuous-cycle chemical industrial technological processes, where special features appear due to the continuous cycle and correct management acquires special importance. This paper discusses how intelligent systems can be developed, in particular, how fuzzy logic can be used to build knowledge-based expert systems in chemical process engineering. The implemented projects reveal that the use of fuzzy logic in technological process control has already given us better solutions than standard control techniques. Fuzzy logic makes it possible to develop an advisory system for decision-making based on the historical experience of the managing operator and experienced experts. The present paper deals with operational control and management systems of continuous-cycle chemical technological processes, including advisory systems. Because of the continuous cycle, these have many distinctive features compared to the operational control of other chemical technological processes. Among them are a greater risk of transitioning to an emergency mode; the need to return from emergency mode to normal mode very quickly, since the technological process cannot be stopped and defective product (i.e., a loss) is released during this period; and, accordingly, the need for a highly qualified operator managing the process. For these reasons, operational control systems of continuous-cycle chemical technological processes have been specifically discussed, as they are distinct systems. Special features of such systems in control and management were brought out, which determine the characteristics of the construction of control and management systems. To verify the findings, the development of an advisory decision-making information system for operational control of a lime kiln using fuzzy logic, based on the creation of a relevant expert-targeted knowledge base, was discussed. The control system has been implemented in a real lime production plant with a lime-burning kiln, which has shown that suitable, intelligent automation improves operational management, reduces the risks of releasing defective products and, therefore, reduces costs. The advisory system was successfully used in the said plant both for the improvement of operational management and, when necessary, for the training of new operators due to the lack of an appropriate training institution.
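
To illustrate the kind of rule base such an advisory system rests on, the following is a minimal sketch using the scikit-fuzzy library; the kiln variables, membership functions, and rules are hypothetical simplifications, not the plant's actual knowledge base.

```python
# A minimal sketch of a fuzzy advisory rule base for a lime kiln, in the spirit
# of the system described above; variables, ranges, and rules are hypothetical.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# kiln temperature deviation from setpoint (deg C) and advised fuel change (%)
temp_dev = ctrl.Antecedent(np.arange(-50, 51, 1), 'temp_dev')
fuel = ctrl.Consequent(np.arange(-10, 11, 1), 'fuel')

temp_dev['low'] = fuzz.trimf(temp_dev.universe, [-50, -50, 0])
temp_dev['ok'] = fuzz.trimf(temp_dev.universe, [-15, 0, 15])
temp_dev['high'] = fuzz.trimf(temp_dev.universe, [0, 50, 50])

fuel['decrease'] = fuzz.trimf(fuel.universe, [-10, -10, 0])
fuel['hold'] = fuzz.trimf(fuel.universe, [-3, 0, 3])
fuel['increase'] = fuzz.trimf(fuel.universe, [0, 10, 10])

rules = [
    ctrl.Rule(temp_dev['low'], fuel['increase']),   # kiln too cold: add fuel
    ctrl.Rule(temp_dev['ok'], fuel['hold']),
    ctrl.Rule(temp_dev['high'], fuel['decrease']),  # kiln too hot: cut fuel
]

advisor = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
advisor.input['temp_dev'] = -22                     # kiln running 22 C below setpoint
advisor.compute()
print(f"advised fuel change: {advisor.output['fuel']:+.1f}%")
```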

Keywords: chemical process control systems, continuous cycle industrial technological processes, fuzzy logic, lime kiln

Procedia PDF Downloads 28
474 Wrestling with Religion: A Theodramatic Exploration of Morality in Popular Culture

Authors: Nicholas Fieseler

Abstract:

The nature of religion implicit in popular culture is relevant both in and out of the university. The traditional rules-based conception of religion and the ethical systems that emerge from it do not necessarily convey the behavior of daily life as it exists apart from spaces deemed sacred. This paper proposes to examine the religion implicit in the popular culture phenomenon of professional wrestling and how that affects the understanding of popular religion. Pro wrestling, while frequently dismissed, offers a unique manner through which to re-examine religion in popular culture. A global phenomenon, pro wrestling occupies a distinct space in numerous countries and presents a legitimate reflection of human behavior cross-culturally on a scale few other phenomena can equal. Given its global viewership of millions, it should be recognized as a significant means of interpreting the human attraction to violence and its association with religion in general. Hans Urs von Balthasar’s theory of theodrama will be used to interrogate the inchoate religion within pro wrestling. While Balthasar developed theodrama within the confines of Christian theology, theodrama contains remarkable versatility in its potential utility. Since theodrama re-envisions reality as drama, the actions of every human actor on the stage contribute to the play’s development, and all action contains some transcendent value. It is in this sense that even the “low brow” activity of pro wrestling may be understood in religious terms. Moreover, a pro wrestling storyline acts as a play within a play: the struggles in a pro wrestling match reflect human attitudes toward life as it exists in the sacred and profane realms. The indistinct lines separating traditionally good (face) from traditionally bad (heel) wrestlers mirror the moral ambiguity through which many people interpret life. This blurred distinction between good and bad, and large segments of an audience’s embrace of the heel wrestlers, reveal ethical constraints that guide the everyday values of pro wrestling spectators, a moral ambivalence that is often overlooked by traditional religious systems and which has hitherto been neglected in the academic literature on pro wrestling. The significance of interpreting the religion implicit in pro wrestling through a theodramatic lens extends beyond pro wrestling specifically and can illuminate the religion implicit in popular culture in general. The use of theodrama mitigates the rigid separation often ascribed to areas deemed sacred/profane, or transcendent/immanent, enabling a re-evaluation of religion and ethical systems as practiced in popular culture. The use of theodrama will be expressed by treating the pro wrestling match as a literary text that reflects the society from which it emerges. This analysis will also reveal the complex nature of religion in popular culture and provide new directions for the academic study of religion. This project consciously bridges the academic and popular realms. The goal of the research is not only to add to the academic literature on implicit religion in popular culture but to publish it in a form which speaks to those outside the standard academic audiences for such work.

Keywords: ethics, popular religion, professional wrestling, theodrama

Procedia PDF Downloads 141
473 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

The electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of an epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies have used neural networks for the pattern classification. One of the differences between these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance using the raw signal varied between 43% and 84% efficiency. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
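
The sketch below illustrates, on a synthetic one-second EEG window, how three of the five input-stimulus types compared above (FFT spectrum, STFT spectrogram, and Wavelet Transform features) can be extracted; the sampling rate, window length, and wavelet choice are assumptions, not the study's settings.

```python
# A minimal sketch of three of the five input-stimulus types compared above
# (FFT spectrum, STFT spectrogram, wavelet features) for one EEG window;
# sampling rate and wavelet choice are illustrative assumptions.
import numpy as np
from scipy.signal import stft
import pywt

fs = 256
segment = np.random.randn(fs)                 # one-second window (discharges last <= 200 ms)

fft_spectrum = np.abs(np.fft.rfft(segment))   # input type: FFT magnitude spectrum
f, t, Zxx = stft(segment, fs=fs, nperseg=64)  # input type: STFT spectrogram
spectrogram = np.abs(Zxx)

coeffs = pywt.wavedec(segment, 'db4', level=4)        # input type: wavelet decomposition
wavelet_features = [np.sum(c ** 2) for c in coeffs]   # sub-band energies as features

print(fft_spectrum.shape, spectrogram.shape, len(wavelet_features))
```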

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 528
472 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics and scientific interest. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various perspectives, and different methods have been used for its estimation, with several empirical equations proposed by experts. Though the results of these methods differ considerably from each other and from experimental observations, and though sediment measurements have their limits, these equations can still be used for estimating sediment load. In the present study, two black-box models, namely an SRC (sediment rating curve) and an ANN (artificial neural network), are used in the simulation of the suspended sediment load. The study is carried out for the Seonath sub-basin. The Seonath is the biggest tributary of the Mahanadi River, and it carries a vast amount of sediment. The data were collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department). These data include the discharge, sediment concentration and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. A sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the application of these data in the ANN, they are normalised first and then fed in various combinations to yield the sediment load. RMSE (root mean square error) and R² (coefficient of determination) between the observed load and the estimated load are used as evaluation criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black-box models, they do not carry an exact representation of the factors which cause sedimentation. Hence, a model which gives the lowest RMSE and the highest R² is the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed-forward back propagation, cascade-forward back propagation and neural network fitting are 0.043425, 0.00679781, 0.0050089 and 0.0043727, respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968 and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not at all tolerable in the estimation of sediment load, and hence this model cannot be crowned the best model, based on this study. Cascade-forward back propagation produces results much closer to the neural network model, and hence this model is the best model based on the present study.
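
As a concrete illustration of the SRC baseline, the following minimal sketch fits the usual power-law rating curve Qs = a·Q^b in log-log space on synthetic data and computes the two evaluation criteria used above; all values are illustrative.

```python
# A minimal sketch of the sediment rating curve (SRC) baseline described above:
# a power law Qs = a * Q**b fitted in log-log space; the data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
q = rng.uniform(50, 2000, 200)                       # discharge (m3/s), illustrative
qs = 0.002 * q ** 1.6 * rng.lognormal(0, 0.3, 200)   # "observed" sediment load (t/day)

b, log_a = np.polyfit(np.log(q), np.log(qs), 1)      # linear fit in log space
qs_hat = np.exp(log_a) * q ** b                      # back-transformed predictions

rmse = np.sqrt(np.mean((qs - qs_hat) ** 2))
r2 = 1 - np.sum((qs - qs_hat) ** 2) / np.sum((qs - qs.mean()) ** 2)
print(f"a={np.exp(log_a):.4f}, b={b:.2f}, RMSE={rmse:.1f}, R2={r2:.3f}")
```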

Keywords: artificial neural network, root mean squared error, sediment, sediment rating curve

Procedia PDF Downloads 325
471 Methodology for Risk Assessment of Nitrosamine Drug Substance Related Impurities in Glipizide Antidiabetic Formulations

Authors: Ravisinh Solanki, Ravi Patel, Chhaganbhai Patel

Abstract:

Purpose: The purpose of this study is to develop a methodology for the risk assessment and evaluation of nitrosamine impurities in Glipizide antidiabetic formulations. Nitroso compounds, including nitrosamines, have emerged as significant concerns in drug products, as highlighted by the ICH M7 guidelines. This study aims to identify known and potential sources of nitrosamine impurities that may contaminate Glipizide formulations and to assess their presence. By determining observed or predicted levels of these impurities and comparing them with regulatory guidance, this research will contribute to ensuring the safety and quality of combination antidiabetic drug products on the market. Factors contributing to the presence of genotoxic nitrosamine contaminants in glipizide medications, such as secondary and tertiary amines and nitroso group-complex forming molecules, will be investigated. Additionally, conditions necessary for nitrosamine formation, including the presence of nitrosating agents and acidic environments, will be examined to enhance understanding and mitigation strategies. Method: The methodology involves the implementation of the N-Nitroso Acid Precursor (NAP) test, as recommended by the WHO in 1978 and detailed in the 1980 International Agency for Research on Cancer monograph. Individual glass vials containing quantities equivalent to 10 mM of Glipizide are prepared. These compounds are dissolved in an acidic environment and supplemented with 40 mM NaNO2. The resulting solutions are maintained at a temperature of 37 °C for a duration of 4 hours. For the analysis of the samples, an HPLC method is employed for fit-for-purpose separation. LC resolution is achieved using a step gradient on an Agilent Eclipse Plus C18 column (4.6 × 100 mm, 3.5 µm). Mobile phases A and B consist of 0.1% v/v formic acid in water and acetonitrile, respectively, following a gradient mode program. The flow rate is set at 0.6 mL/min, and the column compartment temperature is maintained at 35 °C. Detection is performed using a PDA detector within the wavelength range of 190-400 nm. To determine the exact mass of the formed nitrosamine drug substance related impurities (NDSRIs), the HPLC method is transferred to LC-TQ-MS/MS with the same mobile phase composition and gradient program. The injection volume is set at 5 µL, and MS analysis is conducted in electrospray ionization (ESI) mode within the mass range of 100-1000 Daltons. Results: The NAP test samples were prepared according to the protocol and analyzed using HPLC and LC-TQ-MS/MS to identify possible NDSRIs generated in different formulations of glipizide. It was found that the NAP test generated various NDSRIs. This finding, which has not been reported previously, revealed contamination of Glipizide. These NDSRIs are categorised based on their predicted carcinogenic potency, and their acceptable intake in medicines is recommended accordingly. The analytical method was found to be specific and reproducible.

Keywords: NDSRI, nitrosamine impurities, antidiabetic, glipizide, LC-MS/MS

Procedia PDF Downloads 33
470 Generic Early Warning Signals for Program Student Withdrawals: A Complexity Perspective Based on Critical Transitions and Fractals

Authors: Sami Houry

Abstract:

Complex systems exhibit universal characteristics as they near a tipping point. Among them are common generic early warning signals which precede critical transitions. These signals include: critical slowing down, in which the rate of recovery from perturbations decreases over time; an increase in the variance of the state variable; an increase in the skewness of the state variable; an increase in the autocorrelations of the state variable; flickering between different states; and an increase in spatial correlations over time. The presence of the signals has management implications, as the identification of the signals near the tipping point could allow management to identify intervention points. Despite the applications of generic early warning signals in various scientific fields, such as fisheries, ecology and finance, a review of the literature did not identify any applications that address the program student withdrawal problem at undergraduate distance universities. This area could benefit from the application of generic early warning signals, as the program withdrawal rate amongst distance students is higher than that at conventional face-to-face universities. This research specifically assessed the generic early warning signals through an intensive case study of undergraduate program student withdrawal at a Canadian distance university. The university is non-cohort based due to its system of continuous course enrollment, where students can enroll in a course at the beginning of every month. The assessment of the signals was achieved through the comparison of the incidences of generic early warning signals among students who withdrew or simply became inactive in their undergraduate program of study (the true positives) to the incidences among graduates (the false positives). This was achieved through significance testing. Research findings showed support for the signal pertaining to the rise in flickering, which is represented by the increase in a student’s non-pass rates prior to withdrawing from a program; moderate support for the signal of critical slowing down, as reflected in the increase in the time a student spends in a course; and moderate support for the signals of increased autocorrelation and increased variance in the grade variable. The findings did not support the signal of increased skewness of the grade variable. The research also proposes a new signal based on the fractal-like characteristic of student behavior. The research further sought to extend knowledge by investigating whether the emergence of a program withdrawal status is self-similar, or fractal-like, at multiple levels of observation, specifically the program level and the course level; in other words, whether the act of withdrawal at the program level is also present at the course level. The findings moderately supported self-similarity as a potential signal. Overall, the assessment suggests that the signals, with the exception of the increase in skewness, could be utilized as a predictive management tool, with the fractal-like characteristic of withdrawal potentially added as one more signal in addressing the student program withdrawal problem.
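
The sketch below illustrates, on a synthetic grade series, how three of the generic early warning signals assessed above (rolling variance, skewness, and lag-1 autocorrelation) can be computed; the series, window size, and values are illustrative assumptions.

```python
# A minimal sketch of the generic early warning signals assessed above
# (rolling variance, skewness, lag-1 autocorrelation) on a student "state
# variable" such as a grade series; the data here are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
grades = pd.Series(75 - np.linspace(0, 15, 40) + rng.normal(0, 5, 40))  # drifting grades

window = 10
variance = grades.rolling(window).var()                  # rising variance
skewness = grades.rolling(window).skew()                 # rising skewness
autocorr = grades.rolling(window).apply(
    lambda w: w.autocorr(lag=1), raw=False)              # rising lag-1 autocorrelation

signals = pd.DataFrame({'var': variance, 'skew': skewness, 'ac1': autocorr})
print(signals.tail())                                    # upward trends suggest a nearing transition
```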

Keywords: critical transitions, fractals, generic early warning signals, program student withdrawal

Procedia PDF Downloads 185
469 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, while the natural environment is non-linear, and linear models reach their limitations with non-linearity in data; accurate estimation is therefore difficult. Artificial Neural Networks (ANNs) have found extensive acceptance in addressing the modeling of the complex real world for non-linear environments, as ANNs have more general and flexible functional forms than traditional statistical methods can effectively provide. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and, therefore, of the expected crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for predicting real-time wheat chlorophyll estimation. Cloud-free scenes of LANDSAT 8 were acquired (Feb-March 2016-17) at the same time a ground-truthing campaign was performed for chlorophyll estimation using the SPAD-502. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v.2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. The Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For the MLP, 61.7% of the data were used for training, 28.3% for validation, and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. The MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that the use of high spatial resolution satellite imagery for the retrieval of crop chlorophyll content using an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
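
For concreteness, the following minimal sketch computes two of the indices listed above (NDVI and GNDVI) from Landsat 8 band arrays; the synthetic reflectance values are placeholders for actual imagery.

```python
# A minimal sketch of the satellite-derived index computation described above,
# using Landsat 8 band arrays (band 4 = red, band 5 = NIR, band 3 = green);
# the reflectance arrays here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
red = rng.uniform(0.02, 0.20, (100, 100))     # band 4 surface reflectance
nir = rng.uniform(0.20, 0.60, (100, 100))     # band 5 surface reflectance
green = rng.uniform(0.03, 0.15, (100, 100))   # band 3 surface reflectance

ndvi = (nir - red) / (nir + red)              # Normalized Difference Vegetation Index
gndvi = (nir - green) / (nir + green)         # Green NDVI

print(f"mean NDVI = {ndvi.mean():.3f}, mean GNDVI = {gndvi.mean():.3f}")
```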

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 146
468 Literacy Practices in Immigrant Detention Centers: A Conceptual Exploration of Access, Resistance, and Connection

Authors: Mikel W. Cole, Stephanie M. Madison, Adam Henze

Abstract:

Since 2004, the U.S. immigrant detention system has imprisoned more than five million people. President John F. Kennedy famously dubbed this country a “Nation of Immigrants.” Like many of the nation’s imagined ideals, the historical record finds its practices have never lived up to the tenets championed as defining qualities. The United Nations High Commission on Refugees argues that the educational needs of people in carceral spaces, especially those in immigrant detention centers, are urgent and supported by human rights guarantees. However, there is a genuine dearth of literacy research in immigrant detention centers, compounded by a general lack of access to these spaces. Denying access to literacy education in detention centers is one way the history of xenophobic immigration policy persists. In this conceptual exploration, first-hand accounts from detained individuals, their families, and the organizations that work with them have been shared with the authors. In this paper, the authors draw on experiences, reflections, and observations from serving as volunteers to develop a conceptual framework for the ways in which literacy practices are enacted in detention centers. Literacy is an essential tool for reaching those detained in immigrant detention centers and a critical tool for those being detained to access legal and other services. One of the most striking things about the detention center is learning how to behave there; gaining access for a visit is neither intuitive nor straightforward. The men experiencing detention are also at a disadvantage: the lack of access to their own documents is a profound barrier for men navigating the complex immigration process. Literacy is much more than a skill for gathering knowledge or accessing carceral spaces; literacy is fundamentally a source of personal empowerment. Frequently, men find a way to reclaim their sense of dignity through work on their own terms by exchanging their literacy services for products or credits at the commissary. They write cards and letters for fellow detainees, read mail, and manage the exchange of information between the men and their families. In return, the men who have jobs trade items from the commissary or transfer money to the accounts of the men doing the reading, writing, and drawing. Literacy serves as a form of resistance by providing an outlet for productive work. At its core, literacy is the exchange of ideas between an author and a reader and is a primary source of human connection for individuals in carceral spaces. Father’s Day and Christmas are particularly difficult at detention centers. Men weep when speaking about their children and the overwhelming hopelessness they feel at being separated from them. Yet card-writing campaigns have provided these men with words of encouragement as thousands of hand-written cards make their way to the detention center. There are undoubtedly more literacies being practiced in the immigrant detention center where we work and at other detention centers across the country, and these categories are early conceptions with which we are still wrestling.

Keywords: detention centers, education, immigration, literacy

Procedia PDF Downloads 128
467 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high-performance PEC devices. Therefore, it is very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or micro-crystals can have two possible effects on PEC performance. Firstly, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at the defect centers under illumination, thereby reducing PEC performance. On the other hand, the defects could lead to higher light absorption in the longer wavelength region and may act as energy centers for the water splitting reaction, which can improve PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) calculations and local density approximation (LDA) calculations, further studies are required to correlate the structures of ZnO with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of their mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not been clearly explored yet, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures that exhibit significant PEC performance. Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods and triangles, investigated their pH-dependent growth mechanism, and correlated the PEC performance with them. In particular, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which strongly suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of specific structural shapes and oxygen vacancies via dislocations, and its impact on PEC performance, is explored. In the water splitting performance, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
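
The IPCE figure quoted above follows from the standard photon-to-current relation; the sketch below applies it, with the incident power density being an assumed illustrative value, since it is not reported in the abstract.

```python
# A minimal sketch of the standard IPCE relation behind the figure quoted above;
# the incident light power density P is an assumed value, not taken from the paper.
def ipce_percent(j_ma_cm2: float, wavelength_nm: float, p_mw_cm2: float) -> float:
    # IPCE (%) = 1240 * J / (lambda * P) * 100, with J in mA/cm^2,
    # lambda in nm, and P in mW/cm^2 (1240 nm*eV approximates hc/e)
    return 1240.0 * j_ma_cm2 / (wavelength_nm * p_mw_cm2) * 100.0

print(f"{ipce_percent(1.517, 360, 50.0):.2f}%")  # ~10.4% with an assumed P of 50 mW/cm^2
```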

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 294
466 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland

Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski

Abstract:

Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometer-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as this issue had never been the major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area as representative of many lowland dam reservoirs and due to the availability of a large database of the ecological, hydrological and morphological parameters of the lake. Method: 3D 2-phase and 1-phase CFD models were analysed to determine the hydrodynamics in the Sulejow Reservoir. Development of a 3D, 2-phase CFD model of the flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. In relation to the 2-phase CFD model, the 1-phase CFD model excludes from the simulations only the dynamics of waves, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the 1-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of the flow of one phase (water) only. As the wind affects the velocity of flow, to take the effect of the wind on hydrodynamics into account in the 1-phase CFD model, the plate must move with speed and direction equal to the speed and direction of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the 1-phase CFD model, a 2D, 2-phase model was elaborated. Result: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented: • The results indicate that the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake. Recirculating zones, with sizes of up to half a kilometer, may increase water retention time in this region. • The results of the simulations confirm the pronounced effect of the wind on the development of the water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algae blooms. Conclusion: The resulting model is accurate, and the methodology developed in the frame of this work can be applied to all types of storage reservoir configurations, characteristics, and hydrodynamic conditions. Large recirculating zones, which increase water retention time and might affect the accumulation of nutrients, were detected in the lake. An accurate CFD model of hydrodynamics in a large water body could help in the development of water quality forecasts, especially in terms of eutrophication, and in the water management of big water bodies.

Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics

Procedia PDF Downloads 401
465 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, and their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), STATCOM (Static Synchronous Compensator) and so on, which can help to deal with the problem above. Compared with traditional equipment such as generators, new-type controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster, but they are too expensive for wide use. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new-type controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-time-space frame is proposed in this paper to bring both kinds of advantages into play, improving both control ability and economic efficiency. Firstly, the coordination of different space scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method for a two-layer regional power grid vs. its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. By analysis, the interface power flow can be controlled by the generators, and the specific line power flow between the two-layer regions can be adjusted by the FSC and TCSC. The smaller the interface power flow adjusted by the generators, the bigger the control margin of the TCSC; conversely, the total consumption of the generators is much higher. Secondly, the coordination of different time scales is studied to balance the total consumption of the generators against the control margin of the TCSC, so that the minimum control cost can be acquired. The coordination method for two-layer ultra-short-term correction vs. AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-time-space scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to a decrease in control cost and will provide a reference for subsequent studies in this field.
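
To illustrate the optimization step, the following is a minimal sketch of a genetic algorithm minimizing a stand-in control-cost function over generator and TCSC setpoints; the cost function, bounds, and GA parameters are hypothetical, not the paper's actual model.

```python
# A minimal sketch of the genetic-algorithm step mentioned above: minimizing a
# hypothetical control-cost function over generator and TCSC setpoints. The
# cost function and bounds are illustrative, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(4)

def control_cost(x):
    gen_shift, tcsc_comp = x[..., 0], x[..., 1]
    # stand-in cost: expensive generator redispatch + penalty for low TCSC margin
    return 10.0 * gen_shift ** 2 + (1.0 - tcsc_comp) ** 2

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
pop = rng.uniform(lo, hi, (50, 2))                    # initial population of setpoints

for _ in range(100):
    cost = control_cost(pop)
    parents = pop[np.argsort(cost)[:25]]              # selection: keep the best half
    mates = parents[rng.permutation(25)]              # crossover: blend random pairs
    children = 0.5 * (parents + mates)
    children += rng.normal(0, 0.05, children.shape)   # mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmin(control_cost(pop))]
print(f"best setpoints: gen_shift={best[0]:.3f}, tcsc_comp={best[1]:.3f}")
```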

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 267
464 Sustainable Living Where the Immaterial Matters

Authors: Maria Hadjisoteriou, Yiorgos Hadjichristou

Abstract:

This paper aims to explore and provoke a debate, through the work of the design studio ‘Living Where the Immaterial Matters’ of the architecture department of the University of Nicosia, on the role that ‘immaterial matter’ can play in enhancing innovative sustainable architecture, viewing cities as sustainable organisms that continuously grow and alter. The blurring, juxtaposing binary of the immaterial and matter, as the theoretical backbone of the unit, is counterbalanced by the practicalities of the contested sites of the last divided capital, Nicosia, with its ambiguous green line, and the ghost city of Famagusta, on the island of Cyprus. Jonathan Hill argues that the immaterial is as important to architecture as the material, concluding that ‘Immaterial–Material’ weaves the two together, so that they are in conjunction, not opposition. This understanding of the relationship of the immaterial vs. the material sets the premises and the departing point of our argument, and speaks about new recipes for creating hybrid public space that can lead to the unpredictability of a complex and interactive, sustainable city. We prioritized human experience. We distinguish the notions of space and place with reference to Heidegger’s ‘Building Dwelling Thinking’: a distinction between space and place, where spaces gain authority not from ‘space’ appreciated mathematically but from ‘place’ appreciated through human experience. Following the above, architecture and the city are seen as one organism. The notions of boundaries, porous borders, fluidity, mobility, and spaces of flows are the lenses of the investigation in the unit’s methodology, leading to the notion of a new hybrid urban environment whose main constituent elements are in a flux relationship. The material and the immaterial flows of the town are seen as interrelated and interwoven with the material buildings and their immaterial contents, yielding new sustainable human built environments. The above premises consequently led to choices of controversial sites. An indisputably provocative site was the ghost town of Famagusta, where time froze back in 1974. Inspired by the fact that nature took over a literally dormant, decaying city, a sustainable rebirth was seen as an opportunity where both nature and the built environment, material and immaterial, are interwoven in a new emergent urban environment. Similarly, we saw the dividing ‘green line’ of Nicosia completely failing to prevent the trespassing of images, sounds and whispers, smells and symbols that define the two prevailing cultures, and becoming instead a porous creative entity which tends to start reuniting instead of separating, generating sustainable cultures and built environments. The authors would like to contribute to the debate by introducing a question about a new recipe for cooking the built environment. Can we talk about a new ‘urban recipe’, ‘cooking architecture and city’, to deliver an ever-changing urban sustainable organism whose identity will mainly depend on the interrelationship of its immaterial and material constituents?

Keywords: blurring zones, porous borders, spaces of flow, urban recipe

Procedia PDF Downloads 420
463 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components: the first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function, and since several small deviations are preferable to one large deviation, the penalty is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set-partitioning problem in which exactly one roster is selected for each crew member such that all pairings are covered; the restricted linear master problem (RLMP) is considered. The subproblem then tries to find columns with negative reduced cost and adds them to the RLMP for the next iteration; when no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem generates feasible rosters for each crew member: a separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem (SPPRC) in this graph, solved with a labeling algorithm. Since the penalty is quadratic, a method for handling this non-additive shortest path problem with a labeling algorithm is proposed, and the corresponding dominance condition is defined. The major contributions of our model are: 1) a method for handling the non-additive shortest path problem; 2) an operation that allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-threading techniques that improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column-generation-based algorithm for the airline cabin crew rostering problem is proposed, whose objective is to assign to each crew member a personalized roster that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
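
The quadratic fairness term described above can be made concrete with a minimal sketch; the weights, expected averages and roster figures below are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the quadratic fairness penalty: several small deviations
# from the fleet average cost less than one large deviation of the same
# total size. Weights and averages are assumed for illustration.
def roster_penalty(overnights, fly_hours,
                   avg_overnights, avg_fly_hours,
                   w_night=1.0, w_fly=0.1):
    return (w_night * (overnights - avg_overnights) ** 2
            + w_fly * (fly_hours - avg_fly_hours) ** 2)

# Two assignments with the same total absolute deviation (2 overnight duties):
# spreading the deviation over two crew members is cheaper than concentrating it.
crew_a = [roster_penalty(6, 80, 5, 80), roster_penalty(6, 80, 5, 80)]  # +1, +1
crew_b = [roster_penalty(7, 80, 5, 80), roster_penalty(5, 80, 5, 80)]  # +2,  0
print(sum(crew_a), sum(crew_b))  # 2.0 < 4.0
```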

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 146
462 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, cells work in a coordinated and collaborative way that facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical junctions they can transmit signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems has found valuable support in a wide range of modeling approaches, covering a spectrum from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, various simulation tools, both mathematical and computational, have been developed. The study of cellular and molecular processes in cancer has likewise found valuable support in simulation tools that, covering the same spectrum, have allowed in silico experimentation on this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions for modeling only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. In this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way we propose key molecules that may prevent the arrival of malignancy signals at the cells that surround the tumor cells. In this manner, we identified the significant role of the Wnt/β-catenin signaling pathway in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the transformation of the cells that surround a cancerous cell.
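
Since the abstract names Gillespie's algorithm as one of Cellulat's two key elements, a minimal sketch of the algorithm may help; the two-reaction ligand–receptor model and its rate constants are hypothetical illustrations, not part of the Cellulat tool.

```python
# Minimal sketch of Gillespie's stochastic simulation algorithm (SSA).
# Toy signaling model (assumed): L + R -> LR (binding), LR -> L + R (unbinding).
import numpy as np

rng = np.random.default_rng(1)

x = np.array([100, 50, 0])            # state: [free ligand, free receptor, complex]
k_on, k_off = 0.005, 0.1              # assumed rate constants

def propensities(x):
    return np.array([k_on * x[0] * x[1],   # L + R -> LR
                     k_off * x[2]])        # LR -> L + R

stoich = np.array([[-1, -1, +1],           # effect of binding on the state
                   [+1, +1, -1]])          # effect of unbinding

t, t_end = 0.0, 10.0
while t < t_end:
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)         # waiting time to the next reaction
    j = rng.choice(len(a), p=a / a0)       # which reaction fires
    x = x + stoich[j]

print("final [L, R, LR]:", x)
```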

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 249
461 Safety Profile of Human Papillomavirus Vaccines: A Post-Licensure Analysis of the Vaccine Adverse Events Reporting System, 2007-2017

Authors: Giulia Bonaldo, Alberto Vaccheri, Ottavio D'Annibali, Domenico Motola

Abstract:

Human papillomavirus (HPV) has been shown to cause different types of carcinoma, above all cervical intraepithelial neoplasia. From the early 1980s to today, thanks first to preventive screening campaigns (Pap test) and later to the introduction of HPV vaccines on the market, the number of new cases of cervical cancer has decreased significantly. Three HPV vaccines are currently approved: Cervarix® (HPV2 – virus types 16 and 18), Gardasil® (HPV4 – 6, 11, 16, 18) and Gardasil 9® (HPV9 – 6, 11, 16, 18, 31, 33, 45, 52, 58), all of which protect against the two high-risk HPV types (16, 18) that are mainly involved in cervical cancers. Although the remarkable effectiveness of these vaccines has been demonstrated, in recent years there have been many complaints about their risk-benefit profile due to Adverse Events Following Immunization (AEFI). The purpose of this study is to support the ongoing discussion on the safety profile of HPV vaccines with real-life data derived from spontaneous reports of suspected AEFIs collected in the Vaccine Adverse Events Reporting System (VAERS). VAERS is a freely available national vaccine safety surveillance database of AEFIs, co-administered by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). We collected all reports from January 2007 to December 2017 related to HPV vaccines with a brand name (HPV2, HPV4, HPV9) or without one (HPVX). A disproportionality analysis using the Reporting Odds Ratio (ROR) with 95% confidence interval and p-value ≤ 0.05 was performed. Over the 10-year period, 54,889 reports of AEFIs related to HPV vaccines, corresponding to 224,863 vaccine-event pairs, were retrieved. The highest number of reports related to Gardasil (n = 42,244), followed by Gardasil 9 (7,212) and Cervarix (3,904); the brand name of the HPV vaccine was not reported in 1,529 cases. The two most frequently reported, statistically significant events for each vaccine were: for Gardasil, dizziness (n = 5,053), ROR = 1.28 (95% CI 1.24–1.31), and syncope (4,808), ROR = 1.21 (1.17–1.25); for Gardasil 9, injection site pain (305), ROR = 1.40 (1.25–1.57), and injection site erythema (297), ROR = 1.88 (1.67–2.10); and for Cervarix, headache (672), ROR = 1.14 (1.06–1.23), and loss of consciousness (528), ROR = 1.71 (1.57–1.87). In total, we collected 406 reports of death and 2,461 cases of permanent disability over the ten-year period. Events consisting of incorrect vaccine storage or incorrect administration were not considered. The AEFI analysis showed that the most frequently reported events are non-serious and are listed in the corresponding SmPCs. Beyond these, potential safety signals arose for less frequent but severe AEFIs that deserve further investigation, as already happened with the European Medicines Agency (EMA) referral for the adverse events POTS (Postural Orthostatic Tachycardia Syndrome) and CRPS (Complex Regional Pain Syndrome) associated with anti-papillomavirus vaccines.
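
The disproportionality measure used here, the reporting odds ratio with its 95% confidence interval, is computed from a 2×2 contingency table. The sketch below uses the standard formula; apart from the dizziness report count taken from the abstract, the counts are hypothetical.

```python
# Minimal sketch of the reporting odds ratio (ROR) with a 95% CI.
import math

def ror_95ci(a, b, c, d):
    """2x2 table for one vaccine-event pair:
       a: reports of the event for the vaccine of interest
       b: reports of other events for the vaccine of interest
       c: reports of the event for all other vaccines
       d: reports of other events for all other vaccines
    """
    ror = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# a is the dizziness count from the abstract; b, c, d are hypothetical.
print(ror_95ci(a=5053, b=150000, c=80000, d=3000000))
```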

Keywords: adverse drug reactions, pharmacovigilance, safety, vaccines

Procedia PDF Downloads 163
460 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling

Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva

Abstract:

Relevance: Energy saving has been elevated to public policy in almost all developed countries, and one of the avenues to energy efficiency is improving and tightening design standards. In line with state standards, high demands are placed on the thermal protection of buildings. The constructive arrangement of layers should ensure normal operation, in which the moisture content of the construction materials does not exceed a certain level. Elevated moisture levels in the walls amount to a defective condition, as moisture significantly reduces the physical, mechanical and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the structure and of prediction of its behavior under real operating conditions leads to increased heat loss and premature aging of structures. Method: To address this problem, mathematical modeling of heat and mass transfer in materials is widely used. The mathematical model takes the form of interconnected heat and moisture transfer equations for each layer [1]. In winter, the thermal conductivity and moisture conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture in the material. In this case, the experimental determination of the coefficients of a freezing or thawing material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions: The development of methods for solving the inverse problem of mathematical modeling allows us to answer questions closely related to the rational design of building envelopes: where the condensation zone lies in the body of the multilayer envelope; how and where to place insulation rationally; what constructive measures are necessary to provide for the removal of moisture from the structure; what temperature and humidity conditions are required for the normal operation of the premises and the enclosing structure; and what the longevity of the structure is in terms of the frost resistance of its component materials. Tasks: The proposed mathematical model makes it possible to assess the thermophysical condition of the designed structures under different operating conditions and select appropriate material layers; to calculate the temperature field in structurally complex multilayer structures; to determine, from temperature and moisture measured at characteristic points, the thermal characteristics of the materials constituting the surveyed construction; to significantly reduce laboratory testing time, eliminating the need for climatic-chamber and expensive instrumented experiments; and to simulate real-life situations that arise in multilayer enclosing structures in connection with the freezing, thawing, drying and cooling of any layer of the building material.
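
The abstract does not reproduce the governing equations. A common Luikov-type formulation for coupled heat and moisture transfer in a single layer, given here only as an illustrative assumption about the form such a model can take, couples the temperature T(x, t) and the moisture content w(x, t):

$$
c\rho\,\frac{\partial T}{\partial t}
  = \frac{\partial}{\partial x}\!\left(\lambda(T,w)\,\frac{\partial T}{\partial x}\right)
  + \varepsilon\, r\, \rho_0\,\frac{\partial w}{\partial t},
\qquad
\frac{\partial w}{\partial t}
  = \frac{\partial}{\partial x}\!\left(k_m(T,w)\,\frac{\partial w}{\partial x}
  + k_m(T,w)\,\delta\,\frac{\partial T}{\partial x}\right),
$$

where $\lambda$ is the thermal conductivity, $k_m$ the moisture conductivity, $\delta$ the thermogradient coefficient, $\varepsilon$ the phase-change criterion, $r$ the latent heat, and $c\rho$ the volumetric heat capacity; continuity of $T$, $w$ and the corresponding fluxes is imposed at the layer interfaces. In this notation the inverse problem consists of recovering $\lambda(T,w)$ and $k_m(T,w)$ from the temperatures and moisture measured at characteristic points.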

Keywords: energy saving, inverse problem, heat transfer, multilayer walling

Procedia PDF Downloads 397
459 Detection, Analysis and Determination of the Origin of Copy Number Variants (CNVs) in Intellectual Disability/Developmental Delay (ID/DD) Patients and Autistic Spectrum Disorders (ASD) Patients by Molecular and Cytogenetic Methods

Authors: Pavlina Capkova, Josef Srovnal, Vera Becvarova, Marie Trkova, Zuzana Capkova, Andrea Stefekova, Vaclava Curtisova, Alena Santava, Sarka Vejvalkova, Katerina Adamova, Radek Vodicka

Abstract:

ASDs are heterogeneous and complex developmental diseases with a significant genetic background. Recurrent CNVs are known to be a frequent cause of ASD. These CNVs can, however, have variable expressivity, which results in a spectrum of phenotypes from asymptomatic to ID/DD/ASD. ASD is associated with ID in ~75% of individuals. Various platforms are used to detect pathogenic mutations in the genomes of these patients. This study focused on determining the frequency of pathogenic mutations in a group of ASD patients and a group of ID/DD patients using various strategies, along with a comparison of their detection rates. The possible role of the origin of these mutations in the aetiology of ASD was assessed. The study included 35 individuals with ASD and 68 individuals with ID/DD (64 males and 39 females in total), who underwent rigorous genetic, neurological and psychological examination. Screening for pathogenic mutations involved karyotyping, screening for FMR1 mutations and for metabolic disorders, a targeted MLPA test with the probe mixes Telomeres 3 and 5, Microdeletion 1 and 2, Autism 1 and MRX, and chromosomal microarray analysis (CMA) (Illumina or Affymetrix). Chromosomal aberrations were revealed by karyotyping in 7 individuals (1 in the ASD group). FMR1 mutations were discovered in 3 individuals (1 in the ASD group). The detection rate of pathogenic mutations in ASD patients with a normal karyotype was 15.15% by MLPA and CMA. The frequencies of pathogenic mutations in ID/DD patients with a normal karyotype were 25.0% by MLPA and 35.0% by CMA. CNVs inherited from asymptomatic parents were more abundant than de novo changes in ASD patients (11.43% vs. 5.71%), in contrast to the ID/DD group, where de novo mutations prevailed over inherited ones (26.47% vs. 16.18%). ASD patients more frequently shared their mutations with their fathers than did patients from the ID/DD group (8.57% vs. 1.47%), whereas maternally inherited mutations predominated in the ID/DD group in comparison with the ASD group (14.7% vs. 2.86%). CNVs of unknown significance were found in 10 patients by CMA and in 3 patients by MLPA. Although the detection rate is highest with CMA, recurrent CNVs can be easily detected by MLPA. CMA proved more efficient in the ID/DD group, where a larger spectrum of rare pathogenic CNVs was revealed. This study found that maternally inherited highly penetrant mutations and de novo mutations more often resulted in ID/DD without ASD. Paternally inherited mutations could, however, be a source of the greater variability in the genome of ASD patients and contribute to the polygenic character of the inheritance of ASD. As the number of subjects in the group is limited, a larger cohort is needed to confirm this conclusion. Inherited CNVs have a role in the aetiology of ASD, possibly in combination with additional genetic factors, i.e., mutations elsewhere in the genome. The identification of these interactions constitutes a challenge for the future. Supported by MH CZ – DRO (FNOl, 00098892), IGA UP LF_2016_010, TACR TE02000058 and NPU LO1304.

Keywords: autistic spectrum disorders, copy number variant, chromosomal microarray, intellectual disability, karyotyping, MLPA, multiplex ligation-dependent probe amplification

Procedia PDF Downloads 349
458 A 4-Month Low-carb Nutrition Intervention Study Aimed to Demonstrate the Significance of Addressing Insulin Resistance in 2 Subjects with Type-2 Diabetes for Better Management

Authors: Shashikant Iyengar, Jasmeet Kaur, Anup Singh, Arun Kumar, Ira Sahay

Abstract:

Insulin resistance (IR) is a condition in which cells in the body become less responsive to insulin, leading to higher levels of both insulin and glucose in the blood. The condition is linked to metabolic syndromes, including diabetes, and it is crucial to address IR promptly after diagnosis to prevent the long-term complications associated with high insulin and high blood glucose. This four-month case study highlights the importance of treating the underlying condition to manage diabetes effectively. Insulin is essential for regulating blood sugar levels by facilitating the uptake of glucose into cells for energy or storage. In IR individuals, cells are less efficient at taking up glucose from the blood, resulting in elevated blood glucose levels. As a result of IR, beta cells produce more insulin to compensate for the body's inability to use insulin effectively. This leads to high insulin levels, a condition known as hyperinsulinemia, which further impairs glucose metabolism and can contribute to various chronic diseases. In addition to regulating blood glucose, insulin has anti-catabolic effects, preventing the breakdown of molecules in the body: it inhibits glycogen breakdown in the liver, gluconeogenesis, and lipolysis. In a person who is insulin-sensitive and metabolically healthy, an optimal level of insulin prevents fat cells from releasing fat and promotes the storage of glucose and fat in the body. Optimal insulin levels are thus crucial for maintaining energy balance and play a key role in metabolic processes. During the four-month study, the researchers looked at the impact of a low-carb dietary (LCD) intervention on two male individuals (A and B) with Type-2 diabetes. Although neither individual was obese, both were slightly overweight with abdominal fat deposits. Before the trial began, key markers such as fasting blood glucose (FBG), triglycerides (TG), high-density lipoprotein (HDL) cholesterol, and HbA1c were measured. These markers are essential in defining metabolic health; their individual values and variability are integral to assessing it. The ratio of TG to HDL is used as a surrogate marker for IR: it correlates strongly with the prevalence of metabolic syndrome and with IR itself, and it is convenient because it can be calculated from a standard lipid profile without more complex tests. Over the four-month trial, an improvement in insulin sensitivity was observed through the TG/HDL ratio, which in turn improved fasting blood glucose levels and HbA1c. For subject A, HbA1c dropped from 13 to 6.28, and for subject B it dropped from 9.4 to 5.7. During the trial, neither subject was taking any diabetic medication. The significant improvements in their health markers, such as better glucose control, along with an increase in energy levels, demonstrate that incorporating LCD interventions can effectively manage diabetes.
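
As a concrete illustration of the TG/HDL surrogate marker described above, a minimal sketch follows. The cut-off of 3.0 (with TG and HDL in mg/dL) is a commonly cited threshold used here only as an assumption, and the lipid values are hypothetical, not the subjects' data.

```python
# Minimal sketch of the TG/HDL surrogate marker for insulin resistance.
def tg_hdl_ratio(tg_mg_dl: float, hdl_mg_dl: float) -> float:
    return tg_mg_dl / hdl_mg_dl

def flag_insulin_resistance(tg, hdl, cutoff=3.0):
    """Return the ratio and whether it exceeds the assumed IR cut-off."""
    ratio = tg_hdl_ratio(tg, hdl)
    return ratio, ratio >= cutoff

# Hypothetical before/after lipid panels for one subject
print(flag_insulin_resistance(tg=210, hdl=38))   # high ratio -> flagged
print(flag_insulin_resistance(tg=110, hdl=52))   # improved ratio -> not flagged
```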

Keywords: metabolic disorder, insulin resistance, type-2 diabetes, low-carb nutrition

Procedia PDF Downloads 40
457 Blister Formation Mechanisms in Hot Rolling

Authors: Rebecca Dewfall, Mark Coleman, Vladimir Basabe

Abstract:

Oxide scale growth is an inevitable byproduct of the high-temperature processing of steel. Blistering is a phenomenon that occurs during oxide growth, in which high temperatures result in the swelling of the surface scale, producing a bubble-like feature. Blisters can subsequently become embedded in the steel substrate during hot rolling in the finishing mill. This rolled-in scale defect causes havoc within industry, not only through wear on machinery but also through loss of customer satisfaction, poor surface finish, loss of material, and lost profit. Even though blistering is a highly prevalent issue, much about it remains unknown or poorly understood. The classic iron oxidation system is a complex multiphase system formed of wustite, magnetite, and hematite, producing multi-layered scales. Each phase has independent properties such as thermal coefficients, growth rate, and mechanical properties. Furthermore, each additional alloying element has a different affinity for oxygen and a different mobility in the oxide phases, so that oxide morphologies are specific to the alloy chemistry. Blister regimes can therefore be unique to each steel grade, resulting in a diverse range of formation mechanisms. Laboratory conditions were selected to simulate industrial hot rolling, with temperature ranges approximating the formation of secondary and tertiary scales in the finishing mills. Samples with composition 0.15 wt% C, 0.1 wt% Si, 0.86 wt% Mn, 0.036 wt% Al, and 0.028 wt% Cr were oxidised in a thermogravimetric analyser (TGA), with an air flow of 10 litres min⁻¹, at temperatures of 800°C, 850°C, 900°C, 1000°C, 1100°C, and 1200°C, respectively. Samples were held at temperature in an argon atmosphere for 10 minutes, then oxidised in air for 600 s, 60 s, 30 s, 15 s, and 4 s, respectively. Oxide morphology and blisters were characterised using EBSD, WDX, nanoindentation, FIB, and FEG-SEM imaging. Blistering was found to involve both a nucleation and a growth process. During nucleation, the scale detaches from the substrate and blisters after a very short period, roughly 10 s. The steel substrate is then exposed inside the blister and further oxidised in the blister's reducing atmosphere; however, the atmosphere within the blister is highly dependent on the porosity of the blister crown. The blister crown was found to be consistently between 35 and 40 µm thick for all heating regimes, which supports the theory that the blister inflates and the oxide then grows underneath. Upon heating, two modes of blistering were identified. In Mode 1, the stresses produced by oxide growth increase with increasing oxide thickness, so the incubation time for blister formation is shortened by increasing temperature. In Mode 2, an increase in temperature produces oxide with high ductility and/or high porosity, which accommodates the intrinsic stresses of oxide growth. Mode 2 is thus the inverse of Mode 1, and the incubation time increases with temperature. A new phenomenon was also reported whereby blisters formed exclusively during cooling at elevated temperatures above the Mode 2 range.

Keywords: FEG-SEM, nucleation, oxide morphology, surface defect

Procedia PDF Downloads 144
456 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose

Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini

Abstract:

The treatment of heavy-metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always satisfy both legislative and economic criteria simultaneously. In this context, the coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one such coupling: its principle is based on a reaction step (e.g., complexation) between metal ions and a polymer, followed by a step in which the species formed are rejected by a UF membrane. Unlike free ions, which can cross the UF membrane owing to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan or carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions, with or without NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used; the CMC had a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹. Filtration experiments were performed under cross-flow conditions in a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, nickel rejection was found to decrease from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: increasing the electrolyte concentration screens the electrostatic interaction between the ions and the membrane fixed charge, which decreases their rejection. It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was somewhat reduced, due to the increase in osmotic pressure and viscosity. It was also highlighted that pH (from 3 to 9) has a strong influence on removal performance: the higher the pH, the better the removal. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient under slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance at pH below its pKa. In terms of metal rejection, chitosan is thus probably the better option under basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan under natural conditions (5 < pH < 8), since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by the polymer addition is very low, due to competition between the various ions present in the complex mixture.
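
For reference, the rejection figures quoted above are presumably observed rejection coefficients; the standard definition, stated here as an assumption since the abstract does not spell it out, is

$$
R_{\mathrm{obs}} = 1 - \frac{C_p}{C_f},
$$

where $C_p$ and $C_f$ are the metal-ion concentrations in the permeate and in the feed, so that $R_{\mathrm{obs}} = 0$ means free passage and $R_{\mathrm{obs}} = 1$ complete rejection.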

Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration

Procedia PDF Downloads 163
455 Profiling of Bacterial Communities Present in Feces, Milk, and Blood of Lactating Cows Using 16S rRNA Metagenomic Sequencing

Authors: Khethiwe Mtshali, Zamantungwa T. H. Khumalo, Stanford Kwenda, Ismail Arshad, Oriel M. M. Thekisoe

Abstract:

Ecologically, the gut, the mammary glands and the bloodstream harbour distinct microbial communities of commensals, mutualists and pathogens, forming a complex ecosystem of niches. The by-products derived from these body sites, i.e., faeces, milk and blood, respectively, have many uses in rural communities, where they aid in day-to-day household activities and occasional rituals. Thus, although livestock rearing plays a vital role in sustaining the livelihoods of rural communities, it may serve as a potent reservoir of pathogenic organisms with devastating health and economic implications. This study aimed to simultaneously explore the microbial profiles of corresponding faecal, milk and blood samples from lactating cows using 16S rRNA metagenomic sequencing. Bacterial communities were inferred through the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline coupled with the SILVA database v138, and all downstream analyses were performed in R v3.6.1. Alpha-diversity metrics showed significant differences between faeces and blood and between faeces and milk, but not between blood and milk (Kruskal-Wallis, P < 0.05). Beta-diversity metrics, visualized by Principal Coordinate Analysis (PCoA) and Non-metric Multidimensional Scaling (NMDS), clustered samples by type, suggesting that the microbial communities of the studied niches are significantly different (PERMANOVA, P < 0.05). A number of taxa were significantly differentially abundant (DA) between groups based on the Wald test implemented in the DESeq2 package (P_adj < 0.01). The majority of the DA taxa were enriched in faeces rather than in milk or blood, except for the genus Anaplasma, which was significantly enriched in blood and was, in turn, the most abundant taxon overall. A total of 30 phyla, 74 classes, 156 orders, 243 families and 408 genera were obtained from the overall analysis. The most abundant phyla across the three body sites were Firmicutes, Bacteroidota, and Proteobacteria. A total of 58 genus-level taxa were detected in all three sample types, while bacterial signatures of at least 8 of these occurred concurrently in corresponding faeces, milk and blood samples from the same pooled group of animals. The important taxa identified in this study could be categorized into four potentially pathogenic clusters: i) arthropod-borne; ii) food-borne and zoonotic; iii) mastitogenic; and iv) metritic and abortigenic. This study provides insight into the microbial composition of bovine faeces, milk, and blood and the extent of their overlap. It further highlights the potential risk of disease occurrence and transmission between the animals and the inhabitants of the sampled rural community, given the unsanitary practices associated with the use of cattle by-products.
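
The alpha-diversity comparison described above can be illustrated with a minimal sketch: Shannon indices per sample, compared across sample types with a Kruskal-Wallis test. The count table below is a hypothetical toy example, not the study's ASV data (the actual pipeline used DADA2 and R).

```python
# Minimal sketch: Shannon alpha diversity per sample, then a
# Kruskal-Wallis test across the three sample types.
import numpy as np
from scipy.stats import kruskal

def shannon(counts):
    """Shannon diversity H' = -sum(p * ln p) over non-zero taxa."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

# Each row is one sample's taxon counts (toy data standing in for an ASV table)
faeces = [shannon(c) for c in ([120, 80, 40, 30, 10], [100, 90, 50, 20, 15])]
milk   = [shannon(c) for c in ([200, 10, 5, 0, 0],    [180, 20, 8, 2, 0])]
blood  = [shannon(c) for c in ([250, 5, 1, 0, 0],     [240, 8, 2, 0, 0])]

stat, p = kruskal(faeces, milk, blood)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```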

Keywords: microbial profiling, 16S rRNA, NGS, feces, milk, blood, lactating cows, small-scale farmers

Procedia PDF Downloads 111
454 Nuancing the Indentured Migration in Amitav Ghosh's Sea of Poppies

Authors: Murari Prasad

Abstract:

This paper is motivated by the implications of indentured migration depicted in Amitav Ghosh's critically acclaimed novel Sea of Poppies (2008). Ghosh's perspective on the experiences of North Indian indentured labourers moving from their homeland to a distant and unknown location across the seas suggests a radical attitudinal change among the migrants on board the Ibis, a schooner chartered to carry the recruits from Calcutta to Mauritius in the late 1830s. The novel unfolds the life-altering trauma of the bonded servants, including their efforts to maintain a sense of self while negotiating significant social and cultural transformations during a voyage that leads to the breakdown of familiar life-worlds. Equally, the migrants are introduced to an alternative network of relationships to ensure their survival away from land: they relinquish their entrenched beliefs and prejudices and commit themselves to a new brotherhood formed by 'ship siblings.' With the official abolition of direct slavery in 1833, the supply of cheap labour to the sugar plantations in British colonies, from Mauritius and Fiji to East Africa and the Caribbean, sharply declined. Around the same time, China's attempt to prohibit the illegal importation of opium from British India threatened the lucrative opium trade. To run the ever-profitable plantation colonies with cheap labour, Indian peasants, wrenched from their village economies, were indentured to plantations as girmitiyas (vernacularized from 'agreement') by the colonial government, using the ploy of an ostensibly voluntary form of recruitment. After the British conquest of the Isle of France in 1810, Mauritius became Britain's premier sugar colony, bringing waves of Indian immigrants to the island. In the articulations of their subjectivities, one notices how the recruits cope with the alienating drudgery of indenture, mitigate the hardships of the voyage, and forge new ties through pragmatic acts of cultural syncretism in a forward-looking, autonomous community of 'ship-siblings' following the fracture of traditional identities. This paper tests the hypothesis that Ghosh envisions a kind of futuristic/utopian political collectivity in a hierarchically rigid, racially segregated and identity-obsessed world. In order to ground this claim and frame the complex representations of alliance and love across the boundaries of caste, religion, gender and nation, the essential methodology here is a close textual analysis of the novel, geared to explicating the utopian futurity that the novel gestures towards by underlining the new regulations of life during the voyage and the dissolution of multiple differences among the indentured migrants on board the Ibis.

Keywords: indenture, colonial, opium, sugar plantation

Procedia PDF Downloads 398
453 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable, intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories (animals, sports, food, landscapes, and vehicles) along with their corresponding memorability scores; the memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the reduction in that error after fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered for quantifying the reconstruction error. The results indicate a strong correlation between both the reconstruction error and the distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacity are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
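
The two image-level measures described, per-image reconstruction error and latent-space distinctiveness, can be sketched as follows. The tiny linear autoencoder, random data and random scores are placeholders standing in for the VGG-based model, the MemCat images and their memorability scores.

```python
# Minimal sketch: per-image reconstruction error (MSE) and latent-space
# distinctiveness (Euclidean distance to the nearest other latent point),
# each correlated with memorability scores.
import torch
from scipy.stats import spearmanr

torch.manual_seed(0)
n, d, z = 200, 512, 32                   # images, feature dim, latent dim
images = torch.randn(n, d)               # stand-in for image features
memorability = torch.rand(n)             # stand-in for MemCat scores

encoder = torch.nn.Linear(d, z)          # placeholder for the VGG encoder
decoder = torch.nn.Linear(z, d)          # placeholder for the decoder

with torch.no_grad():
    latents = encoder(images)
    recon = decoder(latents)
    # per-image reconstruction error
    errors = ((recon - images) ** 2).mean(dim=1)
    # distinctiveness: distance to the nearest other latent representation
    dists = torch.cdist(latents, latents)
    dists.fill_diagonal_(float("inf"))
    distinctiveness = dists.min(dim=1).values

print(spearmanr(errors.numpy(), memorability.numpy()))
print(spearmanr(distinctiveness.numpy(), memorability.numpy()))
```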

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 90
452 Highly Selective Phosgene Free Synthesis of Methylphenylcarbamate from Aniline and Dimethyl Carbonate over Heterogeneous Catalyst

Authors: Nayana T. Nivangune, Vivek V. Ranade, Ashutosh A. Kelkar

Abstract:

Organic carbamates are versatile compounds widely employed as pesticides, fungicides, herbicides, dyes, pharmaceuticals, cosmetics and in the synthesis of polyurethanes. Carbamates can be easily transformed into isocyanates by thermal cracking. Isocyanates are used as precursors for manufacturing agrochemicals, adhesives and polyurethane elastomers. The manufacture of polyurethane foams is a major application of aromatic isocyanates; in 2007 the global consumption of polyurethane was about 12 million metric tons/year, with an average annual growth rate of about 5%. Presently, isocyanates/carbamates are manufactured by a phosgene-based process. However, because of the high toxicity of phosgene and the formation of large quantities of waste products, there is a need to develop an alternative, safer process for the synthesis of isocyanates/carbamates. Recently, many alternative processes have been investigated, and carbamate synthesis by methoxycarbonylation of aromatic amines using dimethyl carbonate (DMC) as a green reagent has emerged as a promising alternative route. In this reaction methanol is formed as a by-product, which can be converted to DMC either by oxidative carbonylation of methanol or by reaction with urea. Thus, the DMC-based route has the potential to provide an atom-efficient and safer synthesis of carbamates from DMC and amines. Much work has been carried out on the development of catalysts for this reaction, and homogeneous zinc salts were found to be good catalysts; however, catalyst/product separation is challenging with them. There are a few reports on the use of supported Zn catalysts, but deactivation is the major problem with these catalysts. We wish to report here the methoxycarbonylation of aniline to methylphenylcarbamate (MPC) using amino acid complexes of Zn as highly active and selective catalysts. The catalysts were characterized by XRD, IR, solid-state NMR and XPS analysis. Methoxycarbonylation of aniline was carried out at 170 °C using 2.5 wt% of the catalyst to achieve >98% conversion of aniline with 97-99% selectivity to MPC as the product; formation of N-methylated products in small quantities (1-2%) was also observed. Optimization of the reaction conditions was carried out using a zinc-proline complex as the catalyst. Selectivity was strongly dependent on the temperature and the aniline:DMC ratio used: at a lower aniline:DMC ratio and at higher temperature, selectivity to MPC decreased (to 85-89%, respectively), with the formation of N-methylaniline (NMA), N-methyl methylphenylcarbamate (MMPC) and N,N-dimethylaniline (NNDMA) as by-products. The best results (98% aniline conversion with 99% selectivity to MPC in 4 h) were observed at 170 °C and an aniline:DMC ratio of 1:20. Catalyst stability was verified by a recycle experiment. Methoxycarbonylation proceeded smoothly with various amine derivatives, indicating the versatility of the catalyst. The catalyst is inexpensive and can be easily prepared from a zinc salt and naturally occurring amino acids. The results are important and provide an environmentally benign route for MPC synthesis with high activity and selectivity.
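
The overall transformation studied, written schematically (a standard stoichiometry for this reaction, not taken from the abstract itself):

$$
\mathrm{C_6H_5NH_2} + \mathrm{(CH_3O)_2CO} \longrightarrow \mathrm{C_6H_5NHCOOCH_3} + \mathrm{CH_3OH}
$$

i.e., aniline plus dimethyl carbonate gives methylphenylcarbamate plus methanol, the by-product that can be recycled to DMC as noted above.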

Keywords: aniline, heterogeneous catalyst, methoxycarbonylation, methylphenyl carbamate

Procedia PDF Downloads 274
451 Making Sense of C. G. Jung’s Red Book and Black Books: Masonic Rites and Trauma

Authors: Lynn Brunet

Abstract:

In 2019 the author published a book-length study examining Jung's Red Book. This study consisted of a close reading of each of the chapters in Liber Novus, focussing on the fantasies themselves and Jung's accompanying paintings. It found that the plots, settings, characters and symbolism in these fantasies are not entirely original but remarkably similar to those found in some of the higher degrees of Continental Freemasonry. Jung was the grandson of his namesake, C. G. Jung (1794–1864), who was a Freemason and one-time Grand Master of the Swiss Masonic Lodge. The study found that the majority of Jung's fantasies are very similar to those of the Ancient and Accepted Scottish Rite, practiced in Switzerland during Jung's childhood. It argues that the fantasies appear to be memories of a series of terrifying initiatory ordeals conducted using spurious versions of the Masonic rites. 'Spurious Freemasonry' is the term Masons use for 'irregular' or illegitimate uses of the rituals that are not sanctioned by the Order. Since the 1980s there have been multiple reports of ritual trauma in a wide variety of organizations, cults and religious groups, which psychologists, counsellors, social workers, and forensic scientists have confirmed; the abusive use of Masonic rites features frequently in these reports. This initial study allows a reading of The Red Book that makes sense of the obscure references, bizarre scenarios and intense emotional trauma described by Jung throughout Liber Novus, and it suggests that Jung appears to have undergone a cruel initiatory process as a child. The author is currently examining the extra material found in Jung's Black Books, and the results are confirming the original discoveries while demonstrating a number of aspects not covered in the first publication. These include the complex layering of ancient gods and belief systems in answer to Jung's question, 'In which underworld am I?' The new material demonstrates that the majority of these ancient systems and their gods are discussed in a handbook for the Scottish Rite, Albert Pike's Morals and Dogma, but that the way they are presented by Philemon and his soul is intended to confuse Jung rather than clarify their purpose. The new study also examines Jung's soul's question, 'I am not a human being. What am I then?' Further themes that emerge from the Black Books include his struggle with vanity and whether he should continue creating his 'holy book', as well as a comparison between Jung's 'mystery plays' and examples from the Theatre of the Absurd. Overall, it demonstrates that Jung's experience, while inexplicable in his own time, is now recognizable as the secret and abusive practice of initiation of the young found in a range of cults and religious groups in many first-world countries. This paper will present a brief outline of the original study and then examine the themes that have emerged from the extra material found in the Black Books.

Keywords: C. G. Jung, the red book, the black books, masonic themes, trauma and dissociation, initiation rites, secret societies

Procedia PDF Downloads 134