Mechanistic Understanding of the Difference between Two Cholera-Causing Strains and Prediction of Therapeutic Strategies for Cholera Patients Affected with the New Strain Vibrio cholerae El Tor Using Constraint-Based Modelling
Authors: Faiz Khan Mohammad, Saumya Ray Chaudhari, Raghunathan Rengaswamy, Swagatika Sahoo
Abstract:
Cholera, caused by the pathogenic gut bacterium Vibrio cholerae (VC), is a major health problem in developing countries. Different strains of VC exhibit variable responses subject to different extracellular media (Nag et al., Infect Immun, 2018). In this study, we present a new approach to model the variable VC responses in mono- and co-cultures, subject to a continuously changing growth medium, which is otherwise difficult with a simple FBA model. Nine VC strain models and seven E. coli (EC) models were assembled and considered. A continuously changing medium is modelled using a new iterative controlled-medium technique (ITC). The medium is appropriately prefixed with the VC model secretome. As the flux through the bacterial biomass reaction increases, the model secretes certain by-products. These products are added to the medium, either deviating the nutrient potential or blocking certain metabolic components of the model, effectively forming a controlled feedback loop. Different VC models were set up as monocultures in glucose-enriched medium, and in co-cultures of VC strains with EC. Constrained to glucose-enriched medium, (i) the VC_Classical model resulted in higher flux through the acidic secretome, suggesting a pH change of the medium that lowers its biomass. This is in consonance with literature reports. (ii) When compared for the neutral secretome, flux through acetoin exchange was higher in VC_El Tor than in the classical models, suggesting El Tor requires an acidic partner to lower its biomass. (iii) Seven of nine VC models predicted that 3-methyl-2-oxovaleric acid, myristic acid, folic acid, and acetate significantly affect the corresponding biomass reactions. (iv) V. parahaemolyticus and V. vulnificus were found to be phenotypically similar to the VC Classical strain across the nine VC strains. The work demonstrates the advantage of the ITC over regular flux balance analysis for modelling a varying growth medium.
Future expansion to co-cultures potentiates the identification of novel interacting partners as effective cholera therapeutics.
Keywords: cholera, Vibrio cholerae El Tor, Vibrio cholerae Classical, acetate
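The iterative controlled-medium (ITC) idea can be illustrated with a toy flux-balance calculation. The stoichiometry, bounds, and feedback penalty below are hypothetical stand-ins for illustration only, not the authors' nine-strain models:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A; biomass consumes A and secretes by-product B.
# Rows: metabolites A, B; columns: v_uptake, v_biomass, v_byproduct_export
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  0.3, -1.0]])

def fba(uptake_max):
    """Maximize biomass flux subject to steady state S @ v = 0."""
    res = linprog(c=[0.0, -1.0, 0.0],  # minimize -v_biomass
                  A_eq=S, b_eq=[0.0, 0.0],
                  bounds=[(0.0, uptake_max), (0.0, None), (0.0, None)])
    return res.x  # [v_uptake, v_biomass, v_byproduct]

# ITC-style loop: the secreted by-product "acidifies" the medium, shrinking
# the allowed uptake in the next round (the 0.2 feedback factor is made up)
uptake_max, biomass_trace = 10.0, []
for _ in range(5):
    v = fba(uptake_max)
    biomass_trace.append(v[1])
    uptake_max = max(0.0, uptake_max - 0.2 * v[2])
```

Each iteration re-solves the linear program against a medium constrained by the previous secretome, so biomass declines round over round, mirroring the controlled feedback loop described in the abstract.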
Procedia PDF Downloads 162
Importance of an E-Learning Program in the Stress Field for Postgraduate Courses for Doctors
Authors: Ramona-Niculina Jurcau, Ioana-Marieta Jurcau
Abstract:
Background: Preparing in the stress field (SF) is, increasingly, a concern for doctors of different specialties. Aims: The aim was to evaluate the importance of an e-learning program for doctors' postgraduate courses in SF. Methods: Doctors (n = 40 male, 40 female) of different specialties and ages (31-71 years), who attended postgraduate courses in SF, voluntarily responded to a questionnaire that included the following themes: importance of SF courses for the specialty practiced by each respondent doctor (using a visual analogue scale, VAS); which SF themes would be indicated as e-learning (EL); preferred form of SF information assimilation: classical lectures (CL), EL, or a combination of these methods (CL+EL); which aspects of the SF course are facilitated by the EL model versus CL; and, in their view, the top four advantages and top four disadvantages of EL compared to CL for SF. Results: To most respondents, the SF courses are important for the specialty they practiced (VAS by an average of 4). The SF themes suggested to be done as EL were: stress mechanisms; stress factor models for different medical specialties; stress assessment methods; and primary stress management methods for different specialties. The preferred form of information assimilation was CL+EL. Aspects of the course facilitated by the EL model versus CL: active reading of theoretical information, with fast access to keyword details; watching documentaries in everyone's favorite order; and practice through tests with rapid checking of results. The first four EL advantages mentioned for SF were: autonomy in managing the time allocated to the study; saving time for traveling to the venue; the ability to read information in various contexts of time and space; and communication with colleagues at times convenient for everyone.
The first three EL disadvantages mentioned for SF were: it decreases capabilities for group discussion and mobilization for active participation; EL information access may depend on an electrical source and/or the Internet; and learning slowdown can appear through the temptation of postponing implementation. Answers were partially influenced by the respondent's age and gender. Conclusions: 1) Postgraduate courses in SF are of interest to doctors of different specialties. 2) The majority of participating doctors preferred EL, but combined with CL (CL+EL). 3) Preference for EL was manifested mainly by young or middle-aged male doctors. 4) It is important to balance the proper formula for chosen EL, to be the most efficient, interesting, useful, and agreeable.
Keywords: stress field, doctors' postgraduate courses, classical lectures, e-learning lecture
Library Outreach After COVID: Making the Case for In-Person Library Visits
Authors: Lucas Berrini
Abstract:
Academic libraries have always struggled with engaging students and faculty. Striking the balance between what the community needs and what the library can afford has also been a point of contention for libraries. As academia begins to return to a new normal after COVID, library staff are rethinking how to remind patrons that the library is open and ready for business. NC Wesleyan, a small liberal arts school in eastern North Carolina, decided to be proactive and reach out to the academic community. After shutting down in 2020 for COVID, the campus library saw a marked decrease in in-person attendance. For a small school whose operational budget was tied directly to tuition payments, it was imperative for the library to remind faculty and staff that it was open for business. At the beginning of the Summer 2022 term and continuing into the fall, the reference team created a marketing plan using email, physical meetings, and virtual events targeted at students and faculty, as well as community members who utilized the facilities prior to COVID. The email blasts were gentle reminders that the building was open and available for use. The target audience was the community at large. Several of the emails contained reminders of previous events in the library that were student-centered. The next phase of the email campaign centers on reminding the community about the library's physical and electronic resources, including the makerspace lab. Language will indicate that student voices are needed, and a QR code is included for students to leave feedback as to what they want to see in the library. The final phase of the email blasts was faculty-focused and invited them to connect with library reference staff for an in-person consultation on their research needs. While this phase is ongoing, the response has been positive, and staff are compiling data in hopes of working with administration to implement some of the requested services and materials.
These email blasts will be followed up by in-person meetings with faculty and students who responded to the QR codes. This research is ongoing. This type of targeted outreach is new for Wesleyan. It is the hope of the library that by the end of Fall 2022, there will be a plan in place to address the needs and concerns of the students and faculty. Furthermore, the staff hopes to create a new sense of community for the students and staff of the university.
Keywords: academic, education, libraries, outreach
A Literature Review and a Proposed Conceptual Framework for Learning Activities in Business Process Management
Authors: Carin Lindskog
Abstract:
Introduction: Long-term success requires an organizational balance between continuity (exploitation) and change (exploration). The problem of balancing exploitation and exploration is a common issue in studies of organizational learning. In order to better face tough competition in the face of change, organizations need to exploit their current business and explore new business fields by developing new capabilities. The purpose of this work in progress is to develop a conceptual framework to shed light on the relevance of 'learning activities', i.e., exploitation and exploration, on different levels. The research questions that will be addressed are as follows: What sort of learning activities are found in the Business Process Management (BPM) field? How can these activities be linked to the individual, group, and organizational levels? In the work, a literature review will first be conducted. This review will explore the status of learning activities in the BPM field. An outcome of the literature review will be a conceptual framework of learning activities based on the included publications. The learning activities will be categorized as focusing on exploitation, exploration, or both, and into the levels of individual, group, and organization. The proposed conceptual framework will be a valuable tool for analyzing the research field as well as for identifying future research directions. Related Work: BPM has increased in popularity as a way of working to strengthen the quality of work and meet the demands of efficiency. Due to the increase in BPM popularity, more and more organizations are reporting on BPM failure. One reason for this is the lack of knowledge about the extended scope of BPM to other business contexts that include, for example, more creative business fields. Yet another reason for the failures is the fact that employees are resistant to change.
The learning process in an organization is an ongoing cycle of reflection and action and is a process that can be initiated, developed, and practiced. Furthermore, organizational learning is multilevel; therefore, the theory of organizational learning needs to consider the individual, the group, and the organization level. Learning happens over time and across levels, but it also creates a tension between incorporating new learning (feed-forward) and exploiting or using what has already been learned (feedback). Through feed-forward processes, new ideas and actions move from the individual to the group to the organization level. At the same time, what has already been learned feeds back from the organization to the group to the individual and has an impact on how people act and think.
Keywords: business process management, exploitation, exploration, learning activities
Analyzing How Working From Home Can Lead to Higher Job Satisfaction for Employees Who Have Care Responsibilities Using Structural Equation Modeling
Authors: Christian Louis Kühner, Florian Pfeffel, Valentin Nickolai
Abstract:
Taking care of children, dependents, or pets can be a difficult and time-consuming task. Especially for part- and full-time employees, it can feel exhausting and overwhelming to meet these obligations besides working a job. Thus, working mostly at home and not having to drive to the company can save valuable time and stress. This study aims to show the influence that the working model has on the job satisfaction of employees with care responsibilities in comparison to employees who do not have such obligations. Using structural equation modeling (SEM), the three work models, “work from home”, “working remotely”, and a hybrid model, were analyzed based on 13 constructs influencing job satisfaction. These 13 factors were further summarized into three groups: “classic influencing factors”, “influencing factors changed by remote working”, and “new remote working influencing factors”. Based on the influencing factors on job satisfaction, an online survey was conducted with n = 684 employees from the service sector. Here, Cronbach's alpha of the individual constructs was shown to be suitable. Furthermore, the construct validity of the constructs was confirmed by face validity, content validity, convergent validity (AVE > 0.5; CR > 0.7), and discriminant validity. In addition, confirmatory factor analysis (CFA) confirmed the model fit for the investigated sample (CMIN/DF: 2.567; CFI: 0.927; RMSEA: 0.048). The SEM analysis showed that the most significant influencing factor on job satisfaction is “identification with the work” with β = 0.540, followed by “appreciation” (β = 0.151), “compensation” (β = 0.124), “work-life balance” (β = 0.116), and “communication and exchange of information” (β = 0.105).
While the significance of each factor can vary depending on the work model, the SEM analysis shows that identification with the work is the most significant factor in all three work models and, in the case of the traditional office work model, the only significant influencing factor. The study shows that among employees with care responsibilities, the higher the proportion of working from home in comparison to working from the office, the more satisfied the employees are with their job. Since the work models that meet the requirements of comprehensive care led to higher job satisfaction amongst employees with such obligations, adapting as a company to such private obligations of employees can be crucial to sustained success. Conversely, the satisfaction level with the working model where employees work at the office is higher for workers without caregiving responsibilities.
Keywords: care responsibilities, home office, job satisfaction, structural equation modeling
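As a rough illustration of how standardized path weights rank influencing factors, the snippet below simulates survey-like data using the reported β values for three of the constructs and recovers them by least squares. It is a simplified regression stand-in with invented data, not the study's full SEM:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 684  # sample size reported in the abstract

# Hypothetical standardized construct scores
identification = rng.standard_normal(n)
appreciation = rng.standard_normal(n)
compensation = rng.standard_normal(n)

# Simulate job satisfaction using path weights like those reported
satisfaction = (0.540 * identification + 0.151 * appreciation
                + 0.124 * compensation + 0.3 * rng.standard_normal(n))

X = np.column_stack([identification, appreciation, compensation])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
# beta recovers roughly (0.54, 0.15, 0.12): identification dominates
```

With standardized predictors, the recovered coefficients are directly comparable, which is why "identification with the work" stands out as the strongest path.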
Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed, and managed in terms of hydrological, chemical, and/or biological criteria. Geomorphological classifications had a secondary role in the past, although proposals like the River Styles Framework, Catchment Baseline Survey, or Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological, and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Moreover, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins.
In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure allows monitoring of the geomorphological quality of the rivers and detection of any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
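A minimal sketch of the discriminant step, assuming two synthetic geomorphological types separated by specific stream power and mean grain size (all values invented for illustration, not the Galician control-point data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 60  # control points per type (hypothetical)

# Features: [specific stream power (W/m^2), mean grain size (mm)]
type_a = np.column_stack([rng.normal(50, 5, n), rng.normal(2, 0.3, n)])
type_b = np.column_stack([rng.normal(200, 20, n), rng.normal(30, 4, n)])

X = np.vstack([type_a, type_b])
y = np.array([0] * n + [1] * n)

# One linear discriminant function separates the two geomorphological types
lda = LinearDiscriminantAnalysis().fit(X, y)
```

In the study, each geomorphological type gets its own discriminant equation; here `lda.coef_` and `lda.intercept_` play that role for a two-type toy case, and a control point drifting across the decision boundary would signal a geomorphological disturbance.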
Verification of Geophysical Investigation during Subsea Tunnelling in Qatar
Authors: Gary Peach, Furqan Hameed
Abstract:
The Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km of urban areas in southern Doha, with a pumping capacity of 19.7 m³/s. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification with the presence of vertical and lateral fractures connected to the sea bed were also encountered during mining. In addition to the pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of geophysical and geotechnical investigations. Electric resistivity tomography (ERT) and seismic reflection surveys were carried out. The offshore geophysical survey was performed, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive radar system installed on the TBM. The Boring Electric Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to three tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken from tunnel face inspections and excavated material produced by the TBM. The BEAM data was continuously monitored to check the variations in resistivity and percentage frequency effect (PFE) of the ground.
This system provided information about rock mass conditions, potential karst risk, and potential water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with more confidence and less geotechnical risk. The approach used for the prediction of rock mass conditions in the Geotechnical Interpretative Report (GIR) and in the seismic reflection and electric resistivity tomography (ERT) surveys was concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey
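The continuous monitoring of resistivity variations ahead of the cutter head amounts to flagging readings that drop well below a rolling baseline. The sketch below is a generic, hypothetical anomaly-flagging loop, not the proprietary BEAM logic; the window size and drop fraction are invented:

```python
def flag_anomalies(resistivity, window=5, drop_fraction=0.4):
    """Flag readings that fall well below the rolling mean of recent values,
    as a karst/water-inflow warning might (illustrative thresholds only)."""
    flags = []
    for i, value in enumerate(resistivity):
        recent = resistivity[max(0, i - window):i]
        if recent:
            baseline = sum(recent) / len(recent)
            flags.append(value < (1.0 - drop_fraction) * baseline)
        else:
            flags.append(False)  # no baseline yet for the first reading
    return flags

# A sudden resistivity drop (e.g., a water-filled void) trips the flag
readings = [100.0, 102.0, 98.0, 101.0, 99.0, 100.0, 40.0, 101.0]
alerts = flag_anomalies(readings)
```

In practice such flags would be correlated with the ERT and seismic interpretations, as the abstract describes, rather than acted on in isolation.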
Exploration Study of Civet Coffee: Amino Acids Composition and Cup Quality
Authors: Murna Muzaifa, Dian Hasni, Febriani, Anshar Patria, Amhar Abubakar
Abstract:
Coffee flavour is influenced by many factors, such as processing techniques. Civet coffee is known as a premium coffee due to its unique processing technique and its superior cupping quality. The desirable aroma of coffee is foremost formed during the roasting step, at high temperature, from precursors present in the green bean. Sugars, proteins, acids, and trigonelline are the principal flavour precursor compounds in green coffee beans. It is now widely accepted that amino acids act as precursors of the Maillard reaction, during which the colour and aroma are formed. To investigate amino acids in civet coffee, the concentrations of 20 amino acids (L-isoleucine, L-valine, L-proline, L-phenylalanine, L-arginine, L-asparagine, L-threonine, L-tryptophan, L-leucine, L-serine, L-glutamine, L-methionine, L-histidine, aspartic acid, L-tyrosine, L-lysine, L-glutamic acid, L-cysteine, L-alanine, and glycine) were determined in green and roasted beans of civet coffee by LC-MS analysis. The cup quality of civet coffee was assessed by a professional Q-grader following the SCAA standard method. The measured parameters were fragrance/aroma, flavor, acidity, body, uniformity, clean cup, aftertaste, balance, sweetness, and overall. The work was done by collecting samples of civet coffee from six locations in the Gayo Highland, Aceh, Indonesia. The results showed that 18 amino acids were detected in the green bean of civet coffee (L-isoleucine, L-valine, L-proline, L-phenylalanine, L-arginine, L-asparagine, L-threonine, L-tryptophan, L-leucine, L-serine, L-glutamine, L-methionine, L-histidine, aspartic acid, L-tyrosine, L-lysine, L-glutamic acid, and L-cysteine), and 2 amino acids were not detected (L-alanine and glycine). On the other hand, L-tyrosine and glycine were not detected in the roasted bean of civet coffee.
Glutamic acid is the amino acid with the highest concentration in both green and roasted beans (21.02 mg/g and 24.60 mg/g), followed by L-valine (19.98 mg/g and 20.22 mg/g) and aspartic acid (14.93 mg/g and 18.58 mg/g). Civet coffee has a fairly high cupping value (cup quality), ranging from 83.75 to 84.75, categorizing it as specialty coffee. Moreover, civet coffee was noted to have nutty, chocolaty, fishy, herby, and watery notes.
Keywords: amino acids, civet coffee, cupping quality, luwak
Abuse against Elderly Widows in India and Selected States: An Exploration
Authors: Rasmita Mishra, Chander Shekher
Abstract:
Background: Population ageing is an inevitable outcome of the demographic transition. Due to increased life expectancy, the old-age population in India and worldwide has increased, and it will continue to grow even more alarmingly in the near future. Numerous austerities have been bestowed upon widows; thus, the life of widows has never been easy in India. The loss of a spouse, along with other disadvantageous socioeconomic intermediaries like illiteracy and poverty, often makes the life of widows more difficult. Methodology: Ethical statement: The study used secondary data available in the public domain for its wider use in social research; thus, there was no requirement of ethical consent in the present study. Data source: The Building a Knowledge Base on Population Aging in India (BKPAI), 2011 dataset is used to fulfill the objectives of this study. It was carried out in seven states – Himachal Pradesh, Kerala, Maharashtra, Odisha, Punjab, Tamil Nadu, and West Bengal – having a higher percentage of the population in the age group 60 years and above compared to the national average. Statistical analysis: Descriptive and inferential statistics were used to understand the level of elderly widows and the incidence of abuse against them in India and selected states. Bivariate and trivariate analyses were carried out to check the pattern of abuse by selected covariates. The chi-square test is used to verify the significance of the associations. Further, discriminant analysis (DA) is carried out to understand which factors can separate the groups of neglected and non-neglected elderly. Result: With the addition of 27 million from 2001 to 2011, the total elderly population in India is more than 100 million. Elderly females aged 60+ were more likely to be widowed than their elderly male counterparts. This pattern was observed across the selected states and at the national level. At the national level, more than one-tenth (12 percent) of the elderly experienced abuse in their lifetime.
The incidence of abuse against elderly widows within the family was considerably higher than outside the family. This pattern was observed across the selected places and types of abuse in the study. In the discriminant analysis, the significant difference between neglected and non-neglected elderly on each of the independent variables was examined using group means and ANOVA. Discussion: The study is the first of its kind to assess the incidence of abuse against elderly widows using large-scale survey data. Another novelty of this study is that it covers those states in India where the proportion of elderly is higher than the national average. The places and perpetrators involved in abuse against elderly widows speak to the safety of the present living arrangements of elderly widows. Conclusion: Due to increasing life expectancy, it is expected that the number of elderly will increase much faster than before. As women biologically live longer than men, there will be more elderly women than men. With respect to the living arrangement, after the demise of the spouse, elderly widows are more likely to live with their children, who emerged as the main perpetrators of abuse.
Keywords: elderly abuse, emotional abuse, physical abuse, material abuse, psychological abuse, quality of life
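The chi-square test of association used in the analysis can be sketched as follows, with a wholly invented 2×2 contingency table (the counts and grouping are illustrative, not BKPAI results):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = two living arrangements,
# columns = abuse reported / not reported (illustrative numbers only)
table = [[80, 20],
         [40, 60]]

# Yates-corrected chi-square for a 2x2 table; a small p-value indicates
# abuse incidence is associated with the grouping variable
chi2, p, dof, expected = chi2_contingency(table)
```

The same call generalizes to larger cross-tabulations of abuse by covariate, which is how the bivariate patterns described above would be tested for significance.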
The Conflict Between the Current International Copyright Regime and the Islamic Social Justice Theory
Authors: Abdelrahman Mohamed
Abstract:
Copyright law is a branch of intellectual property law that gives authors exclusive rights to copy, display, perform, and distribute copyrightable works. In theory, copyright law aims to promote the welfare of society by granting exclusive rights to creators in exchange for the works that these creators produce for society. Thus, there are two different types of rights that a just regime should balance: owners' rights and users' rights. The paper argues that there is a conflict between the current international copyright regime and the Islamic Social Justice Theory. This regime is unjust from the Islamic Social Justice Theory's perspective regarding access to educational materials, because it was unjustly established by the colonizers to protect their interests, starting from the Berne Convention for the Protection of Literary and Artistic Works 1886 and reaching the Trade-Related Aspects of Intellectual Property Rights 1994. Consequently, the injustice of this regime was reflected in the regulations of these agreements and led to an imbalance between owners' rights and users' rights, in favor of the former at the expense of the latter. As a result, copyright has become a barrier to access to knowledge and educational materials. The paper starts by illustrating the concept of justice in Islamic sources such as the Quran, Sunnah, and El-Maslha-Elmorsalah. Then, social justice is discussed by focusing on the importance of access to knowledge and the right to education. The theory assumes that the right to education and access to educational materials are necessities; thus, to achieve justice in this regime, users' rights should be granted regardless of their region, color, or financial situation. Then, the paper discusses the history of authorship protection under Islamic Sharia and the extent to which this right was recognized even before the existence of copyright law.
According to this theory, authors' rights should be protected; however, this protection should not come at the expense of the human rights to education and access to educational materials. Moreover, the Islamic Social Justice Theory prohibits the concentration of wealth among a small number of people, 'the minority'. Thus, if knowledge is considered an asset or a good, the concentration of knowledge is prohibited from the Islamic perspective, which is the current situation of the copyright regime, where a few countries control knowledge production and distribution. Finally, recommendations are discussed to mitigate the injustice of the current international copyright regime and to fill the gap between that regime and the Islamic Social Justice Theory.
Keywords: colonization, copyright, intellectual property, Islamic sharia, social justice
Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, during which words frequently transcribed incorrectly were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to that respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light, manual observation – any summaries which proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours' worth of processing.
The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
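The cleaning and keyword steps described above can be approximated in a few lines. The speaker-line pattern and the capitalization heuristic below are simplified stand-ins for the open-source formatter and keyword extractor actually used in the study:

```python
import re

def clean_paragraphs(transcript, min_chars=300):
    """Split on speaker changes (lines like 'NAME: ...') and drop short banter."""
    paragraphs = re.split(r"\n(?=[A-Z][A-Za-z ]*:)", transcript)
    return [p.strip() for p in paragraphs if len(p.strip()) >= min_chars]

def proper_nouns(paragraph):
    """Crude keyword extraction: capitalized words not at sentence start."""
    nouns = set()
    for sentence in re.split(r"[.!?]+\s*", paragraph):
        words = sentence.split()
        for word in words[1:]:  # skip the sentence-initial word
            if word[:1].isupper() and word[1:].islower():
                nouns.add(word)
    return nouns
```

Paragraphs containing any extracted proper noun would then be passed to the B.A.R.T. summarizer, with the noun set stored per interview as the abstract describes.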
Procedia PDF Downloads 304
433 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation
Authors: Calorine Twebaze, Jesca Balinga
Abstract:
Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006, and the discovery well encountered about 20.4 m of net pay across three (3) stratigraphic intervals. The field covers an area of 3 km², with the structural configuration comprising a 3-way dip-closed hanging-wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing work reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-to-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the three (3) wells provided a geophysical interpretation that was consistent with the geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; the separation in curve trends observed below 1,100 m was attributed mainly to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, Vo + kZ, and average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X.
The time-depth method resulted in the most reliable depth surfaces, with good structural coherence between the TWT and depth maps and a minimal error of 2 to 5 m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. In addition, the new interpretation delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels; it thus propagates from the basement to the surface and is an active fault today. It was also noted that the field as a whole is only lightly faulted, with most faults concentrated in its deeper part. The major structural uncertainties defined included: 1) the time horizons, due to reduced data quality, especially in the deeper parts of the structure, where an error equal to one-third of the reflection time thickness was assumed; 2) check-shot analysis showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points available from the limited wells produced a pessimistic average velocity model.
Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches
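The velocity-modeling approaches compared above can be sketched as depth-conversion functions. The constants v0, k, and v_avg below are hypothetical placeholders, not the calibrated Field X values; in practice each model is calibrated against well tops as the study describes.

```python
import math

def depth_from_twt_v0k(twt_s: float, v0: float = 1600.0, k: float = 0.6) -> float:
    """Depth conversion with a linear velocity model v(z) = v0 + k*z.

    twt_s : two-way travel time in seconds
    v0, k : hypothetical constants (m/s and 1/s); calibrated to wells in practice.
    """
    t_oneway = twt_s / 2.0
    # Integrating dz/dt = v0 + k*z gives z(t) = (v0/k) * (exp(k*t) - 1)
    return (v0 / k) * (math.exp(k * t_oneway) - 1.0)

def depth_from_twt_vavg(twt_s: float, v_avg: float = 1800.0) -> float:
    # Average-velocity method: depth = average velocity * one-way time.
    return v_avg * twt_s / 2.0
```

The misfit between such model depths and the well tops (here 2 to 5 m for the time-depth method) is what drives the choice of the best model.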
Procedia PDF Downloads 594
432 Analysis of the Operating Load of Gas Bearings in the Gas Generator of the Turbine Engine during a Deceleration to Dash Maneuver
Authors: Zbigniew Czyz, Pawel Magryta, Mateusz Paszko
Abstract:
The paper discusses the loads acting on the drive unit of an unmanned helicopter during a deceleration to dash maneuver. Special attention was given to the loads on the bearings in the gas generator of the turbine engine with which the helicopter will be equipped. The analysis was based on the speed changes as a function of time for a manned flight of the PZL W-3 Falcon helicopter. The speed profile of the flight was approximated by the least squares method and then differentiated to determine the acceleration. This enabled the forces acting on the bearings of the gas generator to be specified in static and dynamic conditions. The deceleration to dash maneuver starts in steady flight at a speed of 222 km/h with horizontal braking. When the speed reaches 92 km/h, the inclination of the helicopter is dynamically changed to give maximum acceleration and near-maximum power, which is held until the initial speed is regained. This type of maneuver is used because shots are ineffective at significant cruising speeds; it is therefore important to reduce speed to the optimum as soon as possible and, after firing, to return to the initial (cruising) speed. In the deceleration to dash maneuver, we have to deal with the force of gravity of the rotor assembly, gas aerodynamic forces, and the forces caused by axial acceleration during the maneuver. While we can assume that the working components of the gas generator are designed so that the axial gas forces they create balance the aerodynamic effects, the remaining forces take values that result from the motion profile of the aircraft. Based on the analysis, the results can be compiled. For this maneuver, the force of gravity (from the static calculations) equals 5.638 N for bearing A and 1.631 N for bearing B. As the overload coefficient k in this direction is 1, this force results solely from the weight of the rotor assembly.
For this maneuver, the acceleration in the longitudinal direction reached a_max = 4.36 m/s². The overload coefficient k is therefore 0.44. When we multiply the overload coefficient k by the weight of all the gas generator components that act on the axial bearing, the force caused by axial acceleration during the deceleration to dash maneuver equals only 3.15 N. The results of the calculations are compared with those for other maneuvers, such as acceleration and deceleration and jump up and jump down maneuvers. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: gas bearings, helicopters, helicopter maneuvers, turbine engines
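The overload arithmetic above can be reproduced in a few lines. Note that the ~7.09 N component weight used in the usage line is back-calculated from the stated 3.15 N force and 0.44 coefficient; it is an assumption for illustration, not a figure given in the paper.

```python
G = 9.81  # standard gravity, m/s^2

def overload_coefficient(a: float) -> float:
    # Ratio of the maneuver acceleration to gravitational acceleration.
    return a / G

def axial_force(k: float, weight_n: float) -> float:
    # Axial-bearing force = overload coefficient * weight (in newtons)
    # of the gas generator components acting on that bearing.
    return k * weight_n

# Paper's values: a_max = 4.36 m/s^2 gives k ~ 0.44; with an assumed
# ~7.09 N of component weight, the axial force comes out at ~3.15 N.
k = overload_coefficient(4.36)
force = axial_force(k, 7.09)
```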
Procedia PDF Downloads 340
431 A Generation Outside: Afghan Refugees in Greece 2003-2016
Authors: Kristina Colovic, Mari Janikian, Nikolaos Takis, Fotini-Sonia Apergi
Abstract:
A considerable number of Afghan asylum seekers in Greece are still waiting for answers about their future and their status for personal, social, and societal advancement. Most have been trapped in a stalemate of continuously postponed or only temporarily progressed stages of integration into the EU/Greek asylum process. Limited quantitative research exists investigating the psychological effects of long-term displacement among Afghan refugees in Greece. The purpose of this study is to investigate factors that are associated with and predict psychological distress symptoms among this population. Data from a sample of native Afghan nationals (N > 70) living in Greece for approximately the last ten years will be collected from May to July 2016. Criteria for participation include being 18 years of age or older and having emigrated from Afghanistan to Greece from 2003 onwards (i.e., long-term refugees, part of the 'old system of asylum'). Snowball sampling will be used to recruit participants, as this is considered the most effective option when attempting to study refugee populations. Participants will complete self-report questionnaires consisting of the Afghan Symptom Checklist (ASCL), a culturally validated measure of psychological distress; the World Health Organization Quality of Life scale (WHOQOL-BREF); an adapted version of the Comprehensive Trauma Inventory-104 (CTI-104); and a modified Psychological Acculturation Scale. All instruments will be translated into Greek through forward- and back-translations by bilingual speakers of English and Greek, following WHO guidelines. A pilot study with 5 Afghan participants will take place to check for discrepancies in understanding and to further adapt the instruments as needed. Demographic data, including age, gender, year of arrival in Greece, and current asylum status, will be explored.
Three different types of analyses (descriptive statistics, bivariate correlations, and multivariate linear regression) will be used in this study. Descriptive findings for respondent demographics, psychological distress symptoms, traumatic life events, and quality of life will be reported. Zero-order correlations will assess the interrelationships among the demographic, traumatic life event, psychological distress, and quality of life variables. Lastly, a multivariate linear regression model will be estimated. The findings from the study will contribute to understanding the effects of acculturation, distress, and trauma on daily functioning for Afghans in Greece. The main implication of the current study will be to advocate for capacity building and to empower communities through effective program evaluation and design of mental health services for all refugee populations in Greece.
Keywords: Afghan refugees, evaluation, Greece, mental health, quality of life
Procedia PDF Downloads 288
430 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator
Authors: Ahmed Abdulrahman, Jalal Foroozesh
Abstract:
CO₂-based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: CO₂'s contribution to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of direct CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow and gradual cross-phase diffusion of CO₂ from carbonated water (CW) to the oil phase and CO₂ dispersion (within-phase diffusion and mechanical mixing), which affects the oil's physical properties and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. The material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to the oil within the grid cell. Longitudinal and transverse dispersion coefficients were also added to account for the CO₂ spatial distribution inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to model CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a huge impact on CO₂ spatial spreading (and hence on accurate prediction of CO₂ production), and the noticeable change in oil physical properties tends to increase the recovery factor.
A sensitivity analysis was done to investigate the effects of formation heterogeneity (porosity, permeability) and injection rate. It was found that formation heterogeneity tends to increase the CO₂ dispersion coefficients and that a low injection rate should be used during CWI.
Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross-phase CO₂ diffusion, within-phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation
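As a sketch of the dispersion treatment (not the authors' 2D fully implicit code), a 1D explicit upwind scheme with a velocity-dependent mechanical-dispersion term D = alpha * v can be written as follows; all parameter values in the usage line are illustrative.

```python
def advect_disperse(c, v, alpha, dx, dt, steps):
    """Explicit upwind finite-difference sketch of 1D CO2 transport.

    c     : list of cell concentrations (inlet cell c[0] held fixed)
    v     : interstitial velocity, assumed positive
    alpha : dispersivity; mechanical dispersion D = alpha * v
    """
    d = alpha * v
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            adv = -v * (c[i] - c[i - 1]) / dx               # upwind advection
            disp = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + disp)
        c = new
    return c

# e.g. a CW front entering a 50-cell column:
front = advect_disperse([1.0] + [0.0] * 49, v=1.0, alpha=0.1,
                        dx=1.0, dt=0.4, steps=10)
```

An explicit scheme like this requires v*dt/dx <= 1 and D*dt/dx^2 <= 0.5 for stability; the fully implicit formulation used in the paper avoids those restrictions.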
Procedia PDF Downloads 176
429 The Regulation of Reputational Information in the Sharing Economy
Authors: Emre Bayamlıoğlu
Abstract:
This paper aims to provide an account of the legal and regulative aspects of algorithmic reputation systems, with a special emphasis on the sharing economy (e.g., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely the host platform, the individual sharers/service providers, and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the governance and control of reputational information. It first analyzes the legal consequences of algorithmic filtering systems designed to detect undesired comments and asks how a delicate balance could be struck between competing interests such as freedom of speech, privacy, and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Indeed, many sharing economy businesses employ techniques of data mining and natural language processing to verify the consistency of feedback. Software agents referred to as "bots" are employed by users to "produce" fake reputation values. Such automated techniques are deceptive, with significant negative effects that undermine the trust upon which the reputational system is built. The fourth section explores concerns with regard to data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as a writing eligible for copyright protection. Algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers.
The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and provides an evaluation of the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems that address the legal implications raised above.
Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy
Procedia PDF Downloads 465
428 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) applied to low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects that differ in size, category, layout, and number. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different numbers of features are extracted per scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
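The scale-wise normalization and average pooling described above can be sketched as follows. This is a simplification: real Fisher Vectors would come from GMM gradient statistics, and the paper's exact normalization details (e.g., any power normalization) are not specified in the abstract.

```python
import math

def scale_wise_normalize(per_scale_fvs):
    # L2-normalize the Fisher vector from each scale so that scales
    # contributing different numbers of local features carry equal
    # weight, then average-pool into a single global representation.
    pooled = [0.0] * len(per_scale_fvs[0])
    for fv in per_scale_fvs:
        norm = math.sqrt(sum(x * x for x in fv)) or 1.0
        for j, x in enumerate(fv):
            pooled[j] += x / norm
    return [x / len(per_scale_fvs) for x in pooled]
```

The pooled vector would then be the input to the linear SVM classifier.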
Procedia PDF Downloads 331
427 Yield Loss in Maize Due to Stem Borers and Their Integrated Management
Authors: C. P. Mallapur, U. K. Hulihalli, D. N. Kambrekar
Abstract:
Maize (Zea mays L.), an important cereal crop worldwide, has diversified uses, including human consumption, animal feed, and industrial applications. A major constraint behind the low productivity of maize in India is undoubtedly insect pests, particularly two species of stem borers, Chilo partellus (Swinhoe) and Sesamia inferens (Walker). The stem borers cause varying levels of yield loss in different agro-climatic regions (25.7 to 80.4%), resulting in huge economic losses to farmers. Although these pests are rather difficult to manage, the present study sought to integrate various possible approaches for their sustainable management. Two field experiments were conducted separately during 2016-17 at the Main Agricultural Research Station, University of Agricultural Sciences, Dharwad, Karnataka, India. In the first experiment, six treatments were randomized in an RBD. Insect eggs at the pinhead stage (@ 40 eggs/plant) were stapled to the under surface of leaves on 15-20% of plants in each plot 15 days after sowing. The second experiment comprised nine treatments replicated thrice. A border crop of NB-21 grass was planted all around the plots in the specific treatments, while a cowpea intercrop (@ 6:1 row proportion) was sown along with the main crop; insecticidal sprays of chlorantraniliprole and nimbecidine were later applied on a need basis in the specific treatments. The results indicated that leaf injury and dead heart incidence were relatively higher in treatments T₂ and T₄, wherein no insect control measures were taken after the insect release (58.30 and 40.0% leaf injury; 33.42 and 25.74% dead hearts).
On the contrary, these treatments recorded higher stem tunneling (32.4 and 24.8%) and lower grain yields (17.49 and 26.79 q/ha) compared to the 29.04, 32.68, 40.93, and 46.38 q/ha recorded in treatments T₁, T₃, T₅, and T₆, respectively. A maximum yield loss of 28.89 percent was noticed in T₂, followed by 19.59 percent in T₄, where no sprays were imposed. The data from the integrated management trial revealed the lowest stem borer damage (19.28% leaf injury and 1.21% dead hearts) in T₅ (seed treatment with thiamethoxam 70FS @ 8 ml/kg seed + cowpea intercrop along with nimbecidine 0.03EC @ 5.0 ml/l and chlorantraniliprole 18.5SC @ 0.2 ml/l sprays). The next best treatment was T₆ (seed treatment + NB-21 border crop with nimbecidine and chlorantraniliprole sprays), with 21.3 and 1.99 percent leaf injury and dead heart incidence, respectively. These treatments also produced the highest grain yields (77.71 and 75.53 q/ha in T₅ and T₆, respectively) compared to the standard check T₁ (seed treatment + chlorantraniliprole spray), wherein 27.63 percent leaf injury and 3.68 percent dead hearts were noticed, with a grain yield of 60.14 q/ha. The stem borers can cause yield losses of up to 25-30 percent in maize, which can be well tackled by seed treatment with thiamethoxam 70FS @ 8 ml/kg seed and sowing the crop with cowpea as an intercrop (6:1 row proportion) or NB-21 grass as a border crop, followed by application of nimbecidine 0.03EC @ 5.0 ml/l and chlorantraniliprole 18.5SC @ 0.2 ml/l on a need basis.
Keywords: maize stem borers, Chilo partellus, Sesamia inferens, crop loss, integrated management
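The avoidable-loss percentages quoted above follow the standard formula sketched below; the yields in the usage line are hypothetical round numbers, not the trial's values.

```python
def percent_yield_loss(protected_yield: float, unprotected_yield: float) -> float:
    # Avoidable yield loss, expressed as a share of the yield
    # obtained under the protected (reference) treatment.
    return (protected_yield - unprotected_yield) / protected_yield * 100.0

# e.g., a hypothetical 50 q/ha protected plot vs. 40 q/ha unprotected plot
loss = percent_yield_loss(50.0, 40.0)  # 20.0 percent
```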
Procedia PDF Downloads 179
426 Experimental Evaluation of Contact Interface Stiffness and Damping to Sustain Transients and Resonances
Authors: Krystof Kryniski, Asa Kassman Rudolphi, Su Zhao, Per Lindholm
Abstract:
ABB offers a range of turbochargers for 500 kW to 80+ MW diesel and gas engines. These operate on ships, in power stations, in generator sets, in diesel locomotives, and in large off-highway vehicles. The units need to sustain harsh operating conditions: exposure to high speeds, high temperatures, and varying loads. They are expected to work at over-critical speeds, effectively damping any transients and encountered resonances. Components are often connected via friction joints, and the designs of those interfaces need to account for surface roughness, texture, pre-stress, etc. to resist fretting fatigue. Experience from the field contributed valuable input on component performance in harsh sea environments under high temperature, speed, and load conditions. A study of the tribological interactions of oxide formations provided insight into the dynamic activity occurring between the surfaces; oxidation was recognized as the dominant wear factor. Microscopic inspection of fatigue cracks on a turbine indicated insufficient damping and unrestrained structural stress leading to catastrophic failure if not prevented in time. The contact interface exhibits a strongly non-linear mechanism, and a piecewise approach was used to describe it. A set of samples representing combinations of materials, texture, and surface and heat treatment were tested on a friction rig under a range of loads, frequencies, and excitation amplitudes. A numerical technique was developed to extract the friction coefficient, tangential contact stiffness, and damping. The vast amount of experimental data was processed with the multi-harmonic balance (MHB) method to characterize the components subjected to periodic excitations. At each pre-defined excitation level, both force and displacement formed semi-elliptical hysteresis curves having the same area and secant as the actual ones.
By cross-correlating the in-phase and out-of-phase terms, respectively, it was possible to separate the elastic energy from the dissipation and derive the stiffness and damping characteristics.
Keywords: contact interface, fatigue, rotor-dynamics, torsional resonances
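The in-phase/out-of-phase separation can be sketched for the first harmonic. This is a minimal sketch assuming a sinusoidal displacement x(t) = X sin(ωt) and a linearized force F = kx + cẋ; the actual MHB processing in the paper handles multiple harmonics and the non-linear, piecewise contact law.

```python
import math

def stiffness_damping(forces, displacements, omega, dt):
    """Extract tangential stiffness k and viscous damping c from one
    steady-state cycle of force/displacement samples.

    The in-phase (sine) Fourier coefficient of the force gives k*X,
    the quadrature (cosine) coefficient gives c*omega*X.
    """
    n = len(forces)
    X = max(abs(x) for x in displacements)
    a = sum(f * math.sin(omega * i * dt) for i, f in enumerate(forces)) * 2 / n
    b = sum(f * math.cos(omega * i * dt) for i, f in enumerate(forces)) * 2 / n
    return a / X, b / (omega * X)  # (stiffness k, damping c)
```

The damping term corresponds to the area of the hysteresis ellipse (dissipated energy per cycle), while the stiffness corresponds to its secant slope, matching the equivalence stated above.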
Procedia PDF Downloads 375
425 Effects of Different Food Matrices on Viscosity and Protein Degradation during in vitro Digestion
Authors: Gulay Oncu Ince, Sibel Karakaya
Abstract:
Food is a worldwide concern. Among the factors that influence human health, food, nutrition, and lifestyle have been regarded as the most important, since they can be intervened upon. While one part of the world faces food shortages, the other faces chronic metabolic diseases arising from the overconsumption of food. Both situations can result in shorter life expectancy and represent a major global health problem. Hunger, satiety, and appetite sensations form a balance that ensures the operation of feeding behavior between food intake and energy consumption. Promoting satiety is one approach that is effective in ensuring weight control and avoiding overeating in the postprandial period. By manipulating the microstructure of food, macro- and micronutrient bioavailability may be increased or reduced. For the food industry, appearance, texture, taste, and structural properties, as well as the behavior of the food in the gastrointestinal tract after consumption, are becoming increasingly important; this behavior has also been the subject of several studies by the scientific community in recent years. Numerous studies have been published on changing the food matrix in order to increase the expected impacts. In this study, yogurts were enriched with caseinomacropeptide (CMP), whey protein (WP), CMP and sodium alginate (SA), and WP + SA in order to produce goat yogurts having different food matrices. SDS-PAGE profiles of the samples after in vitro digestion and the viscosities of the stomach digesta at different shear rates were determined. Energy values were 62.11 kcal/100 g, 70.27 kcal/100 g, 70.61 kcal/100 g, 71.20 kcal/100 g, and 71.67 kcal/100 g for the control, CMP-added, WP-added, WP + SA-added, and CMP + SA-added yogurts, respectively. The viscosity analysis showed that the control yogurt had the lowest viscosity value, followed by the CMP-added, WP-added, CMP + SA-added, and WP + SA-added yogurts, respectively.
The protein contents of the stomach and duodenal digesta of the samples, after being subjected to two different in vitro digestion methods, ranged between 5.34-5.91 mg protein/g sample and 16.93-19.75 mg protein/g sample, respectively. Viscosity measurements of the stomach digesta showed that the CMP + SA-added yogurt displayed the highest viscosity value in both in vitro digestion methods. There were differences between the protein profiles of the stomach and duodenal digesta obtained by the two different in vitro digestion methods (p<0.05).
Keywords: caseinomacropeptide, protein profile, whey protein, yogurt
Procedia PDF Downloads 489
424 Implementing Equitable Learning Experiences to Increase Environmental Awareness and Science Proficiency in Alabama's Schools and Communities
Authors: Carly Cummings, Maria Soledad Peresin
Abstract:
Alabama has a long history of racial injustice and unsatisfactory educational performance. In the 1870s, Jim Crow laws segregated public schools and disproportionately allocated funding and resources to white institutions across the South. Despite the Supreme Court ruling to integrate schools following Brown v. Board of Education in 1954, Alabama's school system continued to exhibit signs of segregation, compounded by "white flight" and the establishment of exclusive private schools, which still exist today. This discriminatory history has had a lasting impact on the state's education system, reflected in modern school demographics and achievement data. It is well known that Alabama struggles with educational performance, especially in science education, and on average, minority groups score the lowest in science proficiency. In Alabama, minority populations are concentrated in a region known as the Black Belt, which was once home to countless slave plantations and was the epicenter of the Civil Rights Movement. Today the Black Belt is characterized by a high density of woodlands and plays a significant role in Alabama's leading economic industry, forest products. Given the economic importance of forestry and agriculture to the state, environmental science proficiency is essential to its stability; however, it is neglected in the areas where it is needed most. To better understand the inequity of science education within Alabama, our study first investigates how geographic location, demographics, and school funding relate to science achievement scores using ArcGIS and Pearson's correlation coefficient. Additionally, our study explores the implementation of a relevant, problem-based, active learning lesson in schools. Relevant learning engages students by connecting material to their personal experiences; problem-based active learning involves real-world problem-solving through hands-on experiences.
Given Alabama's significant woodland coverage, educational materials on forest products were developed with consideration of their relevance to students, especially those located in the Black Belt. Furthermore, to incorporate problem solving and active learning, the lesson centered on students using forest products to solve environmental challenges such as water pollution, an increasing challenge within the state due to climate change. Pre- and post-assessment surveys were provided to teachers to measure the effectiveness of the lesson. In addition to pedagogical practices, community and mentorship programs are known to have a positive impact on educational achievement. To this end, our work examines the results of surveys measuring education professionals' attitudes toward a local mentorship group within the Black Belt and its potential to address environmental and science literacy. Additionally, our study presents survey results from participants who attended an educational community event, gauging its effectiveness in increasing environmental awareness and science proficiency. Our results demonstrate positive improvements in environmental awareness and science literacy with relevant pedagogy, mentorship, and community involvement. Implementing these practices can help provide equitable and inclusive learning environments and can better equip students with the skills and knowledge needed to bridge this historic educational gap within Alabama.
Keywords: equitable education, environmental science, environmental education, science education, racial injustice, sustainability, rural education
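The first analysis step above relates district-level variables to science achievement with Pearson's correlation coefficient. A plain implementation of that statistic is sketched below; the funding/score pairs are hypothetical stand-ins, since the actual district data are not reproduced in the abstract, and the spatial (ArcGIS) side of the analysis is outside this sketch.

```python
import math

def pearson_r(xs, ys):
    # Pearson's correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-district funding ($k/pupil) vs. science proficiency (%)
funding = [9.2, 10.1, 11.5, 8.7, 12.3]
scores = [28.0, 31.0, 40.0, 25.0, 44.0]
r = pearson_r(funding, scores)
```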
Procedia PDF Downloads 68
423 Changes in the Lives of Families Having a Child with Cancer
Authors: Ilknur Kahriman, Hacer Kobya Bulut, Birsel C. Demirbag
Abstract:
Introduction and Aim: One of the most challenging aspects of being the parent of a child diagnosed with cancer is balancing normal family life with the child's health needs and treatment requirements. Cancer occupies an important part of family life and takes precedence over other matters. Families mostly feel that everything in their lives has changed with the cancer diagnosis and are obliged to make a number of adjustments; their normal family life suddenly begins to include treatments, hospital appointments, and hospitalizations. This descriptive study was conducted to determine the changes in the lives of families who have a child with cancer. Methods: The study was carried out with 65 families having children diagnosed with cancer in the 0-17 age group at the outpatient pediatric oncology clinic and polyclinic of a university hospital in Trabzon. Data were collected by survey from August to November 2015. In the analysis of the data, frequencies, percentages, and the chi-square test were used. Findings: It was found that the average age of the mothers was 35.33 years, and most were primary school graduates (44.6%) and housewives (89.2%); the average age of the fathers was 39.30 years, and most were high school graduates (29.2%) and self-employed (43.8%). The majority of the sick children were boys, their average age was 7.74 years, and 77% had a diagnosis of acute lymphocytic leukemia (ALL). Of the mothers who had a child with cancer, 87.5% had increased fears in their lives, 84.4% had an increased workload at home, 82.8% had a more stressful life, and 82.8% felt physically tired. The mothers indicated that their healthy children could no longer do the social activities they used to do (56.5%), that they no longer fed their healthy children the foods they loved so that the sick child would not crave them (52.3%), and that their healthy children were angrier than before (53.2%).
As for the fathers, the fundamental changes were an increased workload at home (82.3%), a more stressful life (80.6%), and no longer being able to allocate time to the activities they had been interested in before (77.8%). There was no significant difference between the sick children's gender and the changes in their parents' lives. Communication between the mothers and their healthy children was determined to be positively affected in families in which the sick child's disease duration was under 12 months (χ² = 6.452, p = 0.011). Conclusion: This study showed that parents having a child with cancer had a greater workload at home, had more stressful lives, could not allocate time to social activities, had increased fears, and felt tired, while their healthy children became angrier and their social activities were reduced.
Keywords: child, cancer, changes in lives, family
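The chi-square comparison reported above can be reproduced with a small function for 2x2 contingency tables; the observed counts in the usage line are illustrative, since the study's raw cross-tabulation is not given in the abstract.

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]],
    computed from observed vs. expected frequencies (no continuity
    correction applied)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    obs = [[a, b], [c, d]]
    return sum((obs[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# e.g., hypothetical counts of affected/unaffected communication
# split by disease duration under vs. over 12 months:
chi2 = chi_square_2x2([[10, 20], [20, 10]])  # ~6.67
```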
Procedia PDF Downloads 224
422 Targeting APP IRE mRNA to Combat Amyloid-β Protein Expression in Alzheimer’s Disease
Authors: Mateen A Khan, Taj Mohammad, Md. Imtaiyaz Hassan
Abstract:
Alzheimer’s disease is characterized by the accumulation of amyloid beta peptide, a processing product cleaved from the amyloid precursor protein (APP). Iron increases the synthesis of amyloid beta peptides, which is why iron is present in the amyloid plaques of Alzheimer's disease patients. Iron misregulation in the brain is linked to overexpression of the APP protein, which is directly related to amyloid-β aggregation in Alzheimer’s disease. The APP 5'-UTR region encodes a functional iron-responsive element (IRE) stem-loop that represents a potential target for modulating amyloid production. Targeted regulation of APP gene expression through modulation of 5’-UTR sequence function represents a novel approach to the potential treatment of AD, because altering APP translation can both improve the protective brain iron balance and provide anti-amyloid efficacy. The molecular docking analysis of APP IRE RNA with eukaryotic translation initiation factors yielded several models exhibiting substantial binding affinity. The findings revealed that the interaction involved a set of functionally active residues within the binding sites of eIF4F. Notably, the APP IRE RNA and eIF4F interaction was stabilized by multiple hydrogen bonds between residues of APP IRE RNA and eIF4F. It was evident that APP IRE RNA exhibited a structural complementarity that fit tightly within the binding pockets of eIF4F. The simulation studies further revealed the stability of the complexes formed between the RNA and eIF4F, which is crucial for assessing the strength of these interactions and their subsequent roles in the pathophysiology of Alzheimer’s disease. In addition, MD simulations capture conformational changes in the IRE RNA and protein molecules during their interactions, illustrating the mechanism of interaction, conformational change, and unbinding events, and how these may affect aggregation propensity and subsequent therapeutic implications.
Our binding studies correlated well with the translation efficiency of APP mRNA. Overall, the outcome of this study suggests that genomic modification and/or inhibition of amyloid protein expression by targeting APP IRE RNA can be a viable strategy to identify potential therapeutic targets for AD and can subsequently be exploited for developing novel therapeutic approaches.
Keywords: Alzheimer's disease, protein-RNA interaction analysis, molecular docking simulations, conformational dynamics, binding stability, binding kinetics, protein synthesis
Procedia PDF Downloads 65
421 Methodology to Assess the Circularity of Industrial Processes
Authors: Bruna F. Oliveira, Teresa I. Gonçalves, Marcelo M. Sousa, Sandra M. Pimenta, Octávio F. Ramalho, José B. Cruz, Flávia V. Barbosa
Abstract:
The EU Circular Economy Action Plan, launched in 2020, is one of the major initiatives to promote the transition to a more sustainable industry. The circular economy is a popular concept used by many companies nowadays. Some industries are better suited to this reality than others, and the tannery industry is a sector that needs more attention due to its strong environmental impact, caused by its size, intensive resource consumption, the lack of recyclability and second use of its products, and the industrial effluents generated by its manufacturing processes. For these reasons, the zero-waste goal and the European objectives are far from being achieved. In this context, a need arises for an effective methodology to determine the level of circularity of tannery companies. Given the complexity of the circular economy concept, few factories have a sustainability specialist to assess the company’s circularity or the ability to implement circular strategies that could benefit the manufacturing processes. Although there are several methodologies to assess circularity in specific industrial sectors, there is no easy go-to methodology applied in factories aiming for cleaner production. Therefore, a straightforward methodology to assess the level of circularity, in this case of a tannery industry, is presented and discussed in this work, allowing any company to measure the impact of its activities. The methodology developed consists of calculating the Overall Circular Index (OCI) by evaluating the circularity of four key areas (energy, material, economy and social) in a specific factory. The index is a value between 0 and 1, where 0 means a linear economy and 1 a completely circular economy. Each key area has a sub-index, obtained through key performance indicators (KPIs) for each theme, and the OCI is the average of the four sub-indexes.
Some fieldwork in the appointed company was required in order to obtain all the necessary data. By having separate sub-indexes, one can observe which areas are more linear than others. Thus, it is possible to work on the most critical areas by implementing strategies to increase the OCI. After these strategies are implemented, the OCI is recalculated to check the improvements made and any other changes in the remaining sub-indexes. As such, the methodology works through continuous improvement, constantly reevaluating and improving the circularity of the factory. It is also flexible enough to be implemented in any industrial sector by adapting the KPIs. The methodology was implemented in a selected Portuguese small and medium-sized enterprise (SME) in the tannery industry and proved to be a relevant tool to measure the circularity level of the factory. It was observed that it makes it easier for non-specialists to evaluate circularity and identify possible solutions to increase its value, as well as to learn how one action can impact their environment. In the end, energy and environmental inefficiencies were identified and corrected, increasing the sustainability and circularity of the company. Through this work, important contributions were made, helping Portuguese SMEs achieve the European and UN 2030 sustainability goals.
Keywords: circular economy, circularity index, sustainability, tannery industry, zero-waste
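The index described above reduces to simple averaging once the KPIs are normalized. A minimal sketch, assuming each KPI has already been scaled to [0, 1]; the KPI names and values below are illustrative assumptions, not data from the assessed tannery.

```python
# Overall Circular Index (OCI): each key area's sub-index is the mean of
# its normalized KPIs, and the OCI is the average of the four sub-indexes
# (0 = fully linear economy, 1 = fully circular economy).

def sub_index(kpis):
    """Average a dict of KPIs already normalized to [0, 1]."""
    return sum(kpis.values()) / len(kpis)

# Illustrative KPIs only (hypothetical names and values).
areas = {
    "energy":   {"renewable_share": 0.30, "energy_recovered": 0.10},
    "material": {"recycled_input": 0.25, "waste_reused": 0.40},
    "economy":  {"circular_revenue": 0.15},
    "social":   {"training_coverage": 0.50},
}

sub_indexes = {name: sub_index(kpis) for name, kpis in areas.items()}
oci = sum(sub_indexes.values()) / len(sub_indexes)
print(sub_indexes, round(oci, 3))
```

Keeping the sub-indexes separate, as the methodology does, shows at a glance which area is most linear (here the hypothetical "economy" area) and therefore where improvement strategies should be targeted first.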
Procedia PDF Downloads 68
420 Healthcare Providers’ Perception Towards Utilization of Health Information Applications and Its Associated Factors in Healthcare Delivery in Health Facilities in Cape Coast Metropolis, Ghana
Authors: Richard Okyere Boadu, Godwin Adzakpah, Nathan Kumasenu Mensah, Kwame Adu Okyere Boadu, Jonathan Kissi, Christiana Dziyaba, Rosemary Bermaa Abrefa
Abstract:
Information and communication technology (ICT) has significantly advanced global healthcare, with electronic health (e-Health) applications improving health records and delivery. These innovations, including electronic health records, strengthen healthcare systems. This study investigates healthcare professionals' perceptions of health information applications and their associated factors in health facilities of the Cape Coast Metropolis, Ghana. Methods: We used a descriptive cross-sectional study design to collect data from 632 healthcare professionals (HCPs) in three purposively selected health facilities in the Cape Coast municipality of Ghana in July 2022. The Shapiro-Wilk test was used to check the normality of the dependent variables. Descriptive statistics were used to report means with corresponding standard deviations for continuous variables. Proportions were reported for categorical variables. Bivariate regression analysis was conducted to determine the factors influencing the Benefits of Information Technology (BoIT), Barriers to Information Technology Use (BITU), and Motives of Information Technology Use (MoITU) in healthcare delivery. Stata SE version 15 was used for the analysis. A p-value of less than 0.05 was considered statistically significant. Results: Healthcare professionals (HCPs) generally perceived moderate benefits (mean score (M) = 5.67) from information technology (IT) in healthcare. However, they slightly agreed that barriers like insufficient computers (M = 5.11), frequent system downtime (M = 5.09), low system performance (M = 5.04), and inadequate staff training (M = 4.88) hindered IT utilization. Respondents slightly agreed that training (M = 5.56), technical support (M = 5.46), and changes in work procedures (M = 5.10) motivated their IT use.
Bivariate regression analysis revealed significant influences of education, working experience, healthcare profession, and IT training on attitudes towards IT utilization in healthcare delivery (BoIT, BITU, and MoITU). Additionally, the age of healthcare providers, education, and working experience significantly influenced BITU. Ultimately, age, education, working experience, healthcare profession, and IT training significantly influenced MoITU in healthcare delivery. Conclusions: Healthcare professionals acknowledge moderate benefits of IT in healthcare but encounter barriers like inadequate resources and training. Motives for IT use include staff training and support. Bivariate regression analysis shows that education, working experience, profession, and IT training significantly influence attitudes toward IT adoption. Targeted interventions and policies can enhance IT utilization in the Cape Coast Metropolis, Ghana.
Keywords: health information application, utilization of information application, information technology use, healthcare
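A bivariate regression of the kind reported above fits a perception score against a single predictor at a time. The sketch below illustrates the idea on synthetic data (a hypothetical BoIT score on a 1-7 scale regressed on years of working experience); the numbers are illustrative assumptions, not the study's 632 respondents.

```python
import numpy as np

# Synthetic data: BoIT perception score (1-7 Likert-type scale) generated
# with a mild positive dependence on years of working experience.
rng = np.random.default_rng(0)
experience = rng.uniform(1, 30, size=200)                       # years
boit = 4.0 + 0.05 * experience + rng.normal(0, 0.3, size=200)   # score

# Simple (bivariate) least-squares fit: boit ~ intercept + slope*experience.
slope, intercept = np.polyfit(experience, boit, 1)
r = np.corrcoef(experience, boit)[0, 1]                         # correlation
print(f"BoIT = {intercept:.2f} + {slope:.3f} * experience, r = {r:.2f}")
```

In the study each predictor (age, education, profession, IT training) would be tested this way against each outcome scale, with the slope's p-value judged against the 0.05 threshold stated in the methods.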
Procedia PDF Downloads 66
419 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase Photovoltaic Power System Controlling and Modeling
Authors: Syed Masood Hussain
Abstract:
An effective control method, including system-level control and pulse width modulation, for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. A recent upsurge in the study of photovoltaic (PV) power generation has emerged, since PV systems directly convert solar radiation into electric power without harming the environment. However, the stochastic fluctuation of solar power is inconsistent with the desired stable power injected into the grid, owing to variations in solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels’ maximum power and feeding it into grids at unity power factor become the most important objectives. Contributions toward these goals have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating requirement has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. With added dc–dc converters, each HBI module becomes a two-stage inverter, and the many extra dc–dc converters not only increase the complexity of the power circuit and control and the system cost, but also decrease the efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems were proposed. They possess the advantages of both traditional CMI and Z-source topologies. In order to properly operate the ZS/qZS-CMI, power injection, independent control of the dc-link voltages, and pulse width modulation (PWM) are necessary.
The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, which is implemented without additional resources; 2) a grid-connected control for the qZS-CMI based PV system, where all the PV panel voltage references from their independent MPPTs are used to control the grid-tie current, together with a dual-loop dc-link peak voltage control.
Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter
Procedia PDF Downloads 547
418 The Boy Who Cried Wolf: North Korea Nuclear Test and Its Implication to the Regional Stability
Authors: Mark Wenyi Lai
Abstract:
The lethal power of nuclear warheads threatened the survival of the world for half of the 20th century. While most countries have renounced and stopped their development, one country is eager to produce and use them. Since 2006, Pyongyang has conducted six nuclear tests. The most recent, in September 2017, signaled North Korea’s military capability to project mass destruction through ICBMs (intercontinental ballistic missiles) over Seoul, Tokyo, Guam, Hawaii, Alaska, or probably the West Coast of the United States, with explosive energy ten times that of the atomic bombing of Hiroshima in 1945. This research paper adopted a time-series content analysis focusing on the related countries' responses to North Korea’s tests in 2006, 2009, 2013, and 2016. The preliminary hypotheses are: first, North Korea is determined to protect the regime by acquiring a triad nuclear capability; negotiations are mere means to this end. Second, South Korea is paralyzed by its ineffective domestic politics and unable to develop an independent strategy toward the North. Third, Japan has used the external threat to campaign for its rearmament plan, bringing instability into its foreign relations. Fourth, China finds herself in the strange position of defending a loyal buffer state while witnessing a fourth, dangerous neighboring country gaining entry into the nuclear club. Fifth, the United States has admitted that North Korea’s going nuclear is unstoppable; therefore, to keep regional stability in East Asia, the US relies on a new balance of power formed by everyone versus Pyongyang, even though the countries of East Asia actually have problems getting along with each other. Sixth, Russia has distanced herself from the North Korea row but benefitted by advancing her strategic importance in the Far East. Tracing the history of nuclear states, this research paper concludes that North Korea will head on to become a more confident country.
Regional stability will be restored once the related countries deal with the new fact and treat the Pyongyang regime with a new strategy. Gradual opening and economic reform are on the way for North Korea in the near future.
Keywords: nuclear test, North Korea, six-party talks, US foreign policy
Procedia PDF Downloads 281
417 Using Scilab® as New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)
Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni
Abstract:
Faced with the remarkable developments in the various segments of modern engineering, driven by increasing technological development, professionals in all educational areas need to overcome the difficulties faced by those who are starting their academic journey. Aiming to overcome these difficulties, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation, with a detailed explanation of the fundamental numerical solution by the finite difference method, using Scilab, free software that is easily accessible to any research center or university, anywhere, in developed and developing countries alike. It is known that Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs has faced difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is often treated only in postgraduate courses. This work aims to bring low-cost CFD into the teaching of transport phenomena at the undergraduate level, analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory: the partial differential equation governing the heat transfer problem, whose mastery is required of students; the discretization process, based on the principles of Taylor series expansion, which generates the system of equations; a convergence check using the Sassenfeld criterion; and finally the solution by the Gauss-Seidel method.
In this work we demonstrate both simple problems solved manually and more complex problems that required computer implementation, for which we use a small algorithm of less than 200 lines in Scilab® in a heat transfer study of a rectangular plate with a different prescribed temperature on each of its four sides, producing a two-dimensional solution with a colored graphical simulation. With the spread of computer technology, numerous programs have emerged that demand great programming skill from researchers. Considering that the ability to program for CFD is the main problem to be overcome, both by students and by researchers, we present in this article an example of the use of programs with a less complex interface, thus reducing the difficulty of producing graphical modeling and simulation for CFD and extending programming experience to undergraduates.
Keywords: numerical methods, finite difference method, heat transfer, Scilab
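The plate exercise described above can be sketched compactly. The paper's solver is written in Scilab; the minimal analogue below is in Python, with grid size, boundary temperatures, and tolerance chosen as illustrative assumptions rather than the paper's actual parameters.

```python
import numpy as np

# Steady-state 2-D heat conduction (Laplace equation) on a rectangular
# plate with a different fixed temperature on each side, discretized by
# central finite differences and solved by Gauss-Seidel iteration.
n = 20
T = np.zeros((n, n))
T[0, :], T[-1, :] = 100.0, 0.0        # top and bottom edge temperatures
T[:, 0], T[:, -1] = 75.0, 50.0        # left and right edge temperatures

tol, diff = 1e-5, 1.0
while diff > tol:
    diff = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # Each interior node relaxes to the average of its neighbours.
            new = 0.25 * (T[i+1, j] + T[i-1, j] + T[i, j+1] + T[i, j-1])
            diff = max(diff, abs(new - T[i, j]))
            T[i, j] = new

center = T[n // 2, n // 2]
print(f"centre temperature: {center:.2f}")
```

The converged field is what the Scilab program would render as a colored map; every interior temperature lies strictly between the coldest (0) and hottest (100) boundary, as the maximum principle for the Laplace equation requires.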
Procedia PDF Downloads 387
416 Investigating the Need to Align with and Adapt Sustainability of Cotton
Authors: Girija Jha
Abstract:
This paper investigates the need for cotton to integrate sustainability. The methodology used in the paper is secondary research to identify the various environmental implications of cotton as a textile material across its life cycle and to look at ways of minimizing its ecological footprint. Cotton is called ‘The Fabric of Our Lives’. History is replete with examples where this fabric was more than a fabric of lives: it was a miracle fabric, a symbol of India’s pride and of the social movement of Swaraj, Gandhiji’s clarion call to self-reliance. Cotton is grown in more than 90 countries across the globe on 2.5 percent of the world's arable land, with countries like China, India, and the United States accounting for almost three-fourths of global production. But cotton as a raw material has come under the scanner of sustainability experts for myriad reasons, a few of which are discussed here. It may take more than 20,000 liters of water to produce 1 kg of cotton. Cotton is primarily harvested from irrigated land, which leads to salinization and depletion of local water reservoirs, e.g., the drying up of the Aral Sea. Cotton is cultivated on 2.4% of the world’s total cropland but accounts for 24% of insecticide use and 11% of pesticide use, leading to health hazards and an alarmingly dangerous impact on the ecosystem. One of the proposed solutions to these problems was the genetically modified (GM) cotton crop; however, the use of GM cotton is still debatable and raises many ethical issues. The practice of mass production, increasing consumerism, and especially fast fashion have been major culprits in disrupting this delicate balance. Disposable or fast fashion is on the rise, and cotton, being one of the major material choices, adds to the problem. Denims, made of cotton, carry a strong fashion statement, and since washes are an integral part of their creation, they share a lot of the blame.
These are just a few of the problems. Today sustainability is the need of the hour, and major changes in the way we cultivate and process cotton are inevitable if it is to become a sustainable choice. The answer lies in adopting minimalism and boycotting fast fashion, in using Khadi, in saying no to washed denims and using selvedge denims, or in using better methods of finishing washed fabric so that the environment does not bleed blue. Truly, the answer lies in integrating state-of-the-art technology with age-old sustainable practices so that the synergy of the two may help us break out of this vicious circle.
Keywords: cotton, sustainability, denim, Khadi
Procedia PDF Downloads 156
415 Factors of Non-Conformity Behavior and the Emergence of a Ponzi Game in the Riba-Free (Interest-Free) Banking System of Iran
Authors: Amir Hossein Ghaffari Nejad, Forouhar Ferdowsi, Reza Mashhadi
Abstract:
In the interest-free banking system of Iran, the savings of society take the form of bank deposits, and banks, using Islamic contracts, allocate these resources to applicants for facilities and credit. Meanwhile, the central bank, with the aim of conducting monetary policy, sets the maximum interest rate on bank deposits according to macroeconomic requirements. But in recent years, the country's economic constraints, stagflation, and the institutional weaknesses of Iran's financial market have caused massive disturbances in the balance sheets of the banking system, resulting in a maturity mismatch between the banks' assets and liabilities and the emergence of a Ponzi game. As a consequence, interest rates in long-term bank deposit contracts came to be set in violation of the maximum rate fixed by the central bank. Under these conditions, newly mobilized resources were allocated to meet past commitments to old depositors; as a result, a significant part of the supply of funds leaked out of the credit cycle and a credit crunch emerged. The purpose of this study is to identify the most important factors behind this non-conforming banking behavior using data from 19 public and private banks of Iran. To this end, the causes of the banks' non-conforming behavior were investigated using the panel vector autoregression (PVAR) method for the period 2007-2015. Granger causality test results suggest that the returns of markets parallel to bank deposits, non-performing loans, and the high ratio of facilities to banks' deposits are all causes of the formation of non-conforming behavior.
Also, according to the results of the impulse response functions and variance decomposition, non-performing loans (NPL) and the ratio of facilities to deposits have the largest long-term effects and the highest contributions to explaining changes in banks' non-conforming behavior in determining the interest rate on deposits.
Keywords: non-conformity behavior, Ponzi game, panel vector autoregression, non-performing loans
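The Granger causality tests behind results like these compare a regression of a series on its own lags against one that also includes lags of a candidate cause. A minimal bivariate lag-1 sketch on synthetic data (x deliberately constructed to drive y); the series, lag order, and coefficients are illustrative assumptions, not the 19-bank panel.

```python
import numpy as np

# Synthetic series where lagged x genuinely helps predict y.
rng = np.random.default_rng(1)
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t-1] + 0.8 * x[t-1] + 0.2 * rng.normal()

Y = y[1:]
ones = np.ones(T - 1)
X_r = np.column_stack([ones, y[:-1]])          # restricted: own lag only
X_u = np.column_stack([ones, y[:-1], x[:-1]])  # unrestricted: plus lag of x

def ssr(X, Y):
    """Sum of squared residuals of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

ssr_r, ssr_u = ssr(X_r, Y), ssr(X_u, Y)
n_obs, k_u, q = len(Y), X_u.shape[1], 1        # q = restrictions tested
F = ((ssr_r - ssr_u) / q) / (ssr_u / (n_obs - k_u))
print(f"F = {F:.1f}")  # a large F rejects 'x does not Granger-cause y'
```

In the study's PVAR setting the same comparison is run with panel data and more lags, but the logic is identical: parallel-market returns or NPL "Granger-cause" the deposit-rate behavior when their lags significantly reduce the residual sum of squares.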
Procedia PDF Downloads 218