Search results for: adjustment orders
80 Experiences of Pediatric Cancer Patients and Their Families: A Focus Group Interview
Authors: Bu Kyung Park
Abstract:
Background: The survival rate of pediatric cancer patients has increased; thus, the need for long-term management and follow-up education after discharge continues to grow. Purpose: The purpose of this study was to explore the experiences of pediatric cancer patients and their families from first diagnosis to the return to social life. The ultimate goal was to assess which information and interventions pediatric cancer patients and their families required, so as to provide fundamental input for developing the educational content of a web-based intervention program for pediatric cancer patients. Research Approach: This study was based on a descriptive qualitative research design using semi-structured focus group interviews. Participants: Twelve pediatric cancer patients and 12 family members participated in a total of six focus group interview sessions. Methods: All interviews were audiotaped after obtaining participants’ approval. The recordings were transcribed, and qualitative content analysis using an inductive coding approach was performed on the transcriptions by three coders. Findings: Eighteen categories emerged from six main themes: 1) information needs, 2) support system, 3) barriers to treatment, 4) facilitators of treatment, 5) return to social life, and 6) healthcare system issues. Each theme contained both the pediatric cancer patients’ codes and their family members’ codes. Patients and family members had high information needs throughout the whole process of treatment, not only at first diagnosis but also after completion of treatment. Hospitals provided basic information on chemotherapy, medication, and various examinations; however, participants were more likely to rely on information passed from other patients and families by word of mouth. Participants’ information needs differed according to their treatment stage (e.g., first-admitted patients versus cancer survivors returning to social life). Even newly diagnosed patients worried about social adjustment after completion of all treatment, such as returning to school and managing diet and physical activity at home. Most family members had unpleasant experiences while admitted to hospitals and were concerned about healthcare system issues, such as medical error and patient safety. Conclusions: In conclusion, pediatric cancer patients and their family members wanted an information source that could provide tailored information based on their needs. Different information needs were identified for patients and family members depending on diagnosis, progress, and stage of treatment. Findings from this study will be used to develop a patient-centered online health intervention program for pediatric cancer patients. Pediatric cancer patients and their family members had a wide variety of educational needs and absorbed information from various sources; a web-based health intervention program is required to satisfy their inquiries and provide reliable information.
Keywords: focus group interview, family caregivers, pediatric cancer patients, qualitative content analysis
Procedia PDF Downloads 181
79 The Structuring of Economic of Brazilian Innovation and the Institutional Proposal to the Legal Management for Global Conformity to Treat the Technological Risks
Authors: Daniela Pellin, Wilson Engelmann
Abstract:
Brazil has sought to accelerate its development through technology and innovation in response to global influences, which it has absorbed into internal management practices. To this end, it enacted the Brazilian Law of Innovation 13.243/2016. However, because the Law overestimates economic aspects, its application will not take account of stakeholders or of technological risks, which receive no legal treatment. Economic exploitation and technological risks must be controlled by the limits of the democratic system in order to achieve better social development and help economic agents make decisions that conform with global directions. The research understands this to be a pressing problem, given the social particularities of the country: the North American Triple Helix Theory, consolidated in developed countries, has been imported literally, with negative consequences when applied in developing countries. Because of this symptomatic scenario, adjustments to the management of the Law are necessary, alongside social democratic interests, to increase the country's development. To this end, the Government will have to adopt certain conducts, promoting, side by side with universities, civil society and companies: informational transparency; the formation of partnerships; the creation of a Comfort Letter document to ensure the operation; the joint elaboration of a Manual of Good Practices; accountability; and data dissemination. The universities, likewise, must promote informational transparency, draw up partnership contracts, generate revenue, and develop information. In addition, civil society must analyze the proposals received and discuss them in order to issue opinions. Finally, companies must provide public and transparent information about investments, economic benefits, risks, and the innovations manufactured. As a general objective, the research intends to demonstrate that efficient deployment of the triple helix will be possible if the innovative decision-making process goes through this institutional logic. As a specific objective, the American influence must undergo modifications to better suit the economic-legal incentives to the development of the social system. The hypothesis points to an institutional model for application to the legal system, elaborated on the basis of emerging characteristics of the country, such that technological risks can be foreseen and global conformity achieved, with attention to the full development of society as proposed by the researchers. The method of approach is systemic-constructivist, with bibliographical review, data collection and analysis, and the construction of an institutional and democratic model for the management of the Law.
Keywords: development, governance of law, institutionalization, triple helix
Procedia PDF Downloads 140
78 Concentration of Droplets in a Transient Gas Flow
Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal
Abstract:
The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require the tracking of large numbers of droplets as in the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity as an expansion over the droplet response time, and the transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions from the two approaches were compared in the analysis of a dilute gas-droplet flow around an infinitely long, circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes 0.1, 0.2; Reynolds 10, 100), the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted high droplet concentrations in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method; a significant variation for an approach widely used in engineering applications. Based on these comparisons, the Osiptsov method gives a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. Preliminary results show good qualitative agreement between the predictions of the model and experimental data.
Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations
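To make the Fully Lagrangian bookkeeping concrete, here is a minimal sketch (not the authors' implementation): a droplet with Stokes-drag response time τ is tracked through an assumed two-dimensional stagnation-point carrier flow, chosen only because its velocity gradient is analytic, while the Jacobian J of the Eulerian-Lagrangian transformation is integrated alongside; the concentration ratio along the path line is then n/n₀ = 1/|det J|.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed carrier flow: 2D stagnation point u = (a*x, -a*y), with a constant
# velocity gradient; the paper's cylinder flow would replace these two items.
a, tau = 1.0, 0.1                          # strain rate; droplet response time

def carrier(x):
    return np.array([a * x[0], -a * x[1]])

grad_u = np.array([[a, 0.0], [0.0, -a]])   # du_i/dx_j (constant here)

def rhs(t, s):
    # State: position x (2), velocity v (2), Jacobian J (4), W = dJ/dt (4)
    x, v = s[0:2], s[2:4]
    J, W = s[4:8].reshape(2, 2), s[8:12].reshape(2, 2)
    dv = (carrier(x) - v) / tau            # Stokes drag on the droplet
    dW = (grad_u @ J - W) / tau            # sensitivity of the drag law
    return np.concatenate([v, dv, W.ravel(), dW.ravel()])

x0 = np.array([1.0, 1.0])                  # droplet released at fluid velocity
s0 = np.concatenate([x0, carrier(x0), np.eye(2).ravel(), grad_u.ravel()])
sol = solve_ivp(rhs, (0.0, 2.0), s0, rtol=1e-8)

J_end = sol.y[4:8, -1].reshape(2, 2)
print(f"n/n0 along this path line = {1.0 / abs(np.linalg.det(J_end)):.4f}")
```

The point of the construction is visible in the last line: one extra set of ODEs per droplet replaces the counting of many tracked droplets per cell.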
Procedia PDF Downloads 257
77 Industrial Waste Multi-Metal Ion Exchange
Authors: Thomas S. Abia II
Abstract:
Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was subsequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were transferred to system operating modifications following multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (an 83% baseline reduction). This milestone was achieved even though the average influent manganese to DMW increased from 1.0 ± 13.7 mg·L⁻¹ pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (a 110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (a 43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (a 300% baseline uptick). Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away is an analysis of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese
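As a quick arithmetic check, the quoted baseline percentages follow directly from the reported averages; the snippet below uses only the numbers stated above.

```python
def pct_change(pre, post):
    """Relative change of the post-pilot average against the pre-pilot baseline."""
    return (post - pre) / pre * 100.0

print(f"Mn output:   {pct_change(6.5, 1.1):+.0f}%")    # -83% (83% reduction)
print(f"Mn influent: {pct_change(1.0, 2.1):+.0f}%")    # +110% uptick
print(f"Cu influent: {pct_change(22.4, 32.1):+.0f}%")  # +43% increase
print(f"Cu output:   {pct_change(0.1, 0.4):+.0f}%")    # +300% uptick
```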
Procedia PDF Downloads 143
76 Assessing Children’s Probabilistic and Creative Thinking in a Non-formal Learning Context
Authors: Ana Breda, Catarina Cruz
Abstract:
Daily, we face unpredictable events, often attributed to chance, as there is no justification for their occurrence. Chance, understood as a source of uncertainty, is present in several aspects of human life, such as weather forecasts, dice rolling, and lotteries. Surprisingly, humans and some animals can quickly adjust their behavior to efficiently handle doubly stochastic processes (random events with two layers of randomness, like unpredictable weather affecting dice rolling). This adjustment ability suggests that the human brain has built-in mechanisms for perceiving, understanding, and responding to simple probabilities. It also explains why current trends in mathematics education include probability concepts in official curriculum programs, from the third year of primary education onwards. In the first years of schooling, children learn to use a specific vocabulary, with words such as never, always, rarely, perhaps, likely, and unlikely; these keywords are of crucial importance for their perception and understanding of probabilities. The development of probabilistic concepts comes from facts and cause-effect sequences resulting from the subject's actions, as well as from the notion of chance and intuitive estimates based on everyday experiences. As part of a junior summer school program at a Portuguese university, a non-formal learning experiment was carried out with 18 children in the 5th and 6th grades. The experience was designed to be implemented within the dynamic of a serious ice-breaking game, to assess the children's levels of probabilistic, critical, and creative thinking in understanding impossible, certain, equally probable, likely, and unlikely events, and to gain insight into how the non-formal learning context influenced their achievements. The criteria used to evaluate probabilistic thinking included the creative ability to conceive events in the specified categories, the ability to properly justify the categorization, the ability to critically assess the events classified by other children, and the ability to make predictions based on a given probability. The data analysis employs a qualitative, descriptive, and interpretative-methods approach based on students' written productions, audio recordings, and researchers' field notes. This methodology allowed us to conclude that such an approach is an appropriate and helpful formative assessment tool. The promising results of this initial exploratory study call for a future research study with children at these levels of education, from different regions, attending public or private schools, to validate and expand our findings.
Keywords: critical and creative thinking, non-formal mathematics learning, probabilistic thinking, serious game
Procedia PDF Downloads 27
75 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - and many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. To circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be an order of magnitude or more of speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet for example. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
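The Δ-ML bookkeeping described above - learn the difference, then add it back to the cheap result - is compact enough to sketch. The snippet below is a minimal illustration on synthetic data: a random-forest regressor stands in for the paper's GCN, and the descriptors, fidelity relationship, and noise level are all assumptions invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: X = descriptors, y_lo = cheap level of theory,
# y_hi = expensive level. A GCN would replace the regressor in the paper;
# the delta-learning bookkeeping is identical.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y_lo = X @ rng.normal(size=16)                       # "low-fidelity" output
y_hi = y_lo + 0.3 * np.tanh(X[:, 0]) + 0.05 * rng.normal(size=500)

train, test = slice(0, 400), slice(400, 500)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], y_hi[train] - y_lo[train])       # learn only the correction

y_pred = y_lo[test] + model.predict(X[test])         # low fidelity + correction
print(f"high-fidelity MAE: {np.mean(np.abs(y_pred - y_hi[test])):.4f}")
```

Because the correction is usually a smoother function than the property itself, far fewer high-fidelity points are needed than for learning the property directly, which is the economy the abstract describes.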
Procedia PDF Downloads 40
74 Application of a Submerged Anaerobic Osmotic Membrane Bioreactor Hybrid System for High-Strength Wastewater Treatment and Phosphorus Recovery
Authors: Ming-Yeh Lu, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu
Abstract:
Recently, anaerobic membrane bioreactors (AnMBRs), which combine an anaerobic biological treatment process with membrane filtration, have been widely utilized and present an attractive option for wastewater treatment and water reuse. A conventional AnMBR has several advantages, such as improved effluent quality, compact space usage, lower sludge yield, no aeration requirement, and energy production. However, the removal of nitrogen and phosphorus in the AnMBR permeate is negligible, which is its biggest disadvantage. In recent years, forward osmosis (FO) has emerged as a technology that utilizes osmotic pressure as the driving force to extract clean water without additional external pressure. The small pore size of the FO membrane can effectively improve the removal of nitrogen and phosphorus, so an anaerobic bioreactor with an FO membrane (AnOMBR) can retain the concentrated organic matter and nutrients. Phosphorus, moreover, is a non-renewable resource; owing to the high rejection of the FO membrane, a large amount of phosphorus could be recovered from the combination of AnMBR and FO. In this study, a novel submerged anaerobic osmotic membrane bioreactor integrated with periodic microfiltration (MF) extraction was developed and evaluated for simultaneous phosphorus and clean water recovery from wastewater. A laboratory-scale AnOMBR utilizing cellulose triacetate (CTA) membranes with an effective membrane area of 130 cm² was fully submerged in a 5.5 L bioreactor at 30-35°C. An active-layer-facing-feed-stream orientation was utilized to minimize fouling and scaling. Additionally, a peristaltic pump was used to circulate the draw solution (DS) at a cross-flow velocity of 0.7 cm/s, with magnesium sulphate (MgSO₄) solution used as the DS. The microfiltration membrane periodically extracted about 1 L of solution when the TDS reached 5 g/L, recovering phosphorus and simultaneously controlling salt accumulation in the bioreactor. As the experiment progressed, an average water flux of around 1.6 LMH was achieved. The AnOMBR process showed greater than 95% removal of soluble chemical oxygen demand (sCOD) and nearly 100% removal of total phosphorus, but only partial removal of ammonia; finally, an average methane production of 0.22 L/g sCOD was obtained. The AnOMBR system therefore periodically utilized MF extraction for phosphorus recovery with simultaneous pH adjustment. The overall performance demonstrates that the novel submerged AnOMBR system has potential for simultaneous wastewater treatment and resource recovery from wastewater; hence, this new system concept could replace the conventional AnMBR in the future.
Keywords: anaerobic treatment, forward osmosis, phosphorus recovery, membrane bioreactor
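A back-of-envelope check, using only the figures above, connects the reported flux to the extraction schedule: at 1.6 LMH through 130 cm² of membrane, daily permeate production is roughly half a litre, which sits plausibly alongside the ~1 L drawn off per MF extraction cycle (this comparison is our inference, not a claim from the abstract).

```python
flux_lmh = 1.6            # average water flux, L m^-2 h^-1 (reported)
area_m2 = 130e-4          # 130 cm^2 effective CTA membrane area (reported)
litres_per_day = flux_lmh * area_m2 * 24.0
print(f"~{litres_per_day:.2f} L/day of permeate")   # ~0.50 L/day
```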
Procedia PDF Downloads 270
73 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework, and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays, and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference one parameter at a time with a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post-model selection inference, asymptotic theory
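The quantile computation sketched above reduces to one multivariate normal rectangle probability plus a root-find. The snippet below shows the shape of that calculation, with scipy standing in for the R package "mvtnorm"; the mean vector and covariance are placeholders, since the real ones come from the paper's asymptotic derivation.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

# Placeholder asymptotic mean/covariance of the joint GIC for 3 candidate models
mu = np.array([0.0, 0.5, 1.0])
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.6],
                  [0.3, 0.6, 1.0]])

def cdf_min(q):
    # P(min_i X_i <= q) = 1 - P(all X_i > q); with Y = -X ~ N(-mu, Sigma),
    # P(all X_i > q) is a single MVN rectangle probability F_Y(-q, ..., -q).
    return 1.0 - multivariate_normal(mean=-mu, cov=Sigma).cdf(np.full(3, -q))

q95 = brentq(lambda q: cdf_min(q) - 0.95, -10.0, 10.0)
print(f"95% upper quantile of the minimum GIC: {q95:.3f}")
```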
Procedia PDF Downloads 89
72 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training
Authors: Carin Chuang, Kuan-Chou Chen
Abstract:
An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiency and decrease costs while helping improve productivity. Many organizations, including large, medium and small-sized companies, have adopted ERP systems over the past decades. Although an ERP system can bring competitive advantages to organizations, the lack of a proper training approach in ERP implementation is still a major concern. Organizations understand the importance of ERP training to adequately prepare managers and users. The low return on investment of ERP training, however, reflects how difficult it is for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system. Hence the need for profound change and innovation in ERP training, both in the workplace in industry and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users’ knowledge of business processes and hands-on skills in mastering an ERP system, and it can also serve as educational material for IS students in universities. The purpose of this study is to examine the use of ERP simulation games via the ERPsim system to train IS students in ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal, and the game runs on a real-life SAP (Systems Applications and Products) ERP system. The training uses the ERPsim system as the platform for Internet-based simulation games and is designed as a series of online student competitions during class. The competitions involve student teams, with the facilitation of the instructor, and put the students’ business skills to the test via intensive simulation games on a real-world SAP ERP system. The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors, and customers through sending and receiving orders, delivering products, and completing the entire cash-to-cash cycle. To learn a range of business skills, each student adopts an individual business role and makes business decisions around the products and business processes. Based on the training experiences gained from rounds of business simulations, the findings show that learners run a reduced risk of making mistakes, which helps them build self-confidence in problem-solving. In addition, learners’ reflections on their mistakes help identify the root causes of problems and further improve the efficiency of the training. ERP instructors teaching with this innovative approach report significant improvements in student evaluation, learner motivation, attendance, and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines for creating an effective learning environment and can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.
Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation
Procedia PDF Downloads 139
71 Piezotronic Effect on Electrical Characteristics of Zinc Oxide Varistors
Authors: Nadine Raidl, Benjamin Kaufmann, Michael Hofstätter, Peter Supancic
Abstract:
If polycrystalline ZnO is properly doped and sintered under very specific conditions, it shows unique electrical properties, which are indispensable for today’s electronics industries, where it is used as the number one overvoltage protection material. Below a critical voltage, the polycrystalline bulk exhibits high electrical resistance but suddenly becomes up to twelve orders of magnitude more conductive if this voltage limit is exceeded (i.e., the varistor effect). It is known that these peerless properties have their origin in the grain boundaries of the material. Electric charge is accumulated in the boundaries, causing a depletion layer in their vicinity and forming potential barriers (so-called Double Schottky Barriers, or DSBs) which are responsible for the highly non-linear conductivity. Since ZnO is a piezoelectric material, mechanical stresses induce polarisation charges that modify the DSB heights and, as a result, the global electrical characteristics (i.e., the piezotronic effect). In this work, a finite element method was used to simulate the stresses emerging on individual grains in the bulk. In addition, experimental efforts were made to test a coherent model that could explain this influence. Electron backscatter diffraction was used to identify grain orientations. With the help of wet chemical etching, grain polarization was determined. Micro lock-in infrared thermography (MLIRT) was applied to detect current paths through the material, and a micro 4-point probes method system (M4PPS) was employed to investigate current-voltage characteristics between single grains. Bulk samples were tested under uniaxial pressure. It was found that the conductivity can increase by up to three orders of magnitude with increasing stress. Through in-situ MLIRT, it could be shown that this effect is caused by the activation of additional current paths in the material. Further, compressive tests were performed on miniaturized samples with grain paths containing only one or two grain boundaries. The tests evinced both an increase in conductivity, as observed for the bulk, and a decrease in conductivity. This phenomenon has been predicted theoretically and can be explained by piezotronically induced surface charges that have an impact on the DSBs at the grain boundaries. Depending on grain orientation and stress direction, a DSB can be raised or lowered. The experiments also revealed that the conductivity within one single specimen can increase or decrease depending on the current direction. This novel finding indicates the existence of asymmetric Double Schottky Barriers, which was furthermore proved by complementary methods. MLIRT studies showed that the intensity of heat generation within individual current paths depends on the direction of the stimulating current. M4PPS was used to study the relationship between the I-V characteristics of single grain boundaries and grain orientation, and revealed asymmetric behavior for very specific orientation configurations. A new model for the Double Schottky Barrier, taking into account this natural asymmetry and explaining the experimental results, will be given.
Keywords: asymmetric Double Schottky Barrier, piezotronic, varistor, zinc oxide
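For readers unfamiliar with the double Schottky barrier picture invoked above, the standard depletion-approximation relations sketch how a piezo-induced polarization charge can raise or lower the barrier. The symbols below (interface trap density n_i, donor density N_d, permittivity εε₀, grain-boundary voltage V_gb) are not defined in the abstract, and these are textbook approximations, not the authors' new asymmetric model.

```latex
% Barrier height from interface charge, depletion approximation:
\phi_B = \frac{e\,n_i^{2}}{8\,\varepsilon\varepsilon_0 N_d}
% A piezo-polarization charge shifts the effective interface charge,
% raising or lowering the barrier with grain orientation and stress sign:
\phi_B' = \frac{e\,\bigl(n_i \pm n_{\mathrm{piezo}}\bigr)^{2}}{8\,\varepsilon\varepsilon_0 N_d}
% Thermionic emission over the barrier (low-bias form):
J = A^{*}T^{2}\exp\!\left(-\frac{e\phi_B}{k_B T}\right)
    \left[1-\exp\!\left(-\frac{eV_{gb}}{k_B T}\right)\right]
```

The exponential dependence of J on φ_B is what lets a modest stress-induced barrier shift produce the orders-of-magnitude conductivity changes reported above.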
Procedia PDF Downloads 267
70 An Efficient Process Analysis and Control Method for Tire Mixing Operation
Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park
Abstract:
Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing - mixing, component preparation, building, and curing - the mixing process is an essential and important step, because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role in the tire as a finished product. Meanwhile, scheduling the tire mixing process resembles the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. Despite its importance, however, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied; a generic version of this loop is sketched after this abstract. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend our current work by considering other performance measures, such as weighted makespan or processing times affected by aging or learning effects.
Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process
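The loop referenced above has the standard PSO shape. The sketch below is a generic skeleton only: a toy continuous objective stands in for the paper's makespan decoder (which would map each particle's compound sequence and machine assignments to a schedule), and the swarm size, inertia weight w, and acceleration coefficients c1, c2 are illustrative values, not the tuned ones.

```python
import numpy as np

rng = np.random.default_rng(42)

def makespan(x):
    # Placeholder objective; the paper's decoder would evaluate the schedule
    # implied by the particle, including sequence-dependent setup times.
    return float(np.sum(x ** 2))

n_particles, dim, iters = 30, 10, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia; cognitive/social coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([makespan(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel                                      # coordinate update
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val                     # keep personal bests
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()        # keep global best

print(f"best makespan found: {pbest_val.min():.6f}")
```

The error rate used for the MILP comparison would then be of the form (PSO makespan − MILP makespan) / MILP makespan.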
Procedia PDF Downloads 265
69 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
Evolutionary processes are not linear. Long periods of quiet and slow development give way to rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common credence, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerated advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not the survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates may summarize the proposed hypothesis: biological evolution as a natural life origin and development is a reality. Evolution is a coordinated and controlled process. One of evolution’s main development vectors is the growing computational complexity of living organisms and the biosphere’s intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere’s global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with an accelerated evolutionary dynamic. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere’s computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of current evolution theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.
Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 164
68 Hydrogeomatic System for the Economic Evaluation of Damage by Flooding in Mexico
Authors: Alondra Balbuena Medina, Carlos Diaz Delgado, Aleida Yadira Vilchis Fránces
Abstract:
In Mexico, news is disseminated each year about the ravages of floods: the total loss of housing; damage to the fields; the increase in food costs derived from lost harvests; health problems such as skin infections; and social problems such as delinquency, damage to educational institutions, and harm to the population in general. Flooding is a consequence of heavy rains, tropical storms, and/or hurricanes that generate excess water in drainage systems exceeding their capacity. In urban areas, heavy rains can be one of the main factors causing flooding, alongside excessive precipitation, dam breakage, and human activities, for example excessive garbage in the strainers. In agricultural areas, floods can affect large areas of cultivation. It should be mentioned that for both kinds of areas, one of the significant impacts of floods is that they can permanently affect the livelihoods of many families and cause damage, for example, to their workplaces, such as farmland, commercial or industrial areas, and places where services are provided. In recent years, Information and Communication Technologies (ICT) have undergone accelerated development, reflected in the exponential growth and evolution of innovation, giving as a result the daily generation of new technologies, updates, and applications. Innovation in the development of Information Technology applications has had an impact on all areas of human activity. It influences all orders of individuals’ lives, reconfiguring the way the world is perceived and analyzed, for instance in interrelating with people, as individuals and as a society, in the economic, political, social, cultural, educational, and environmental spheres. The present work therefore describes the creation of a system for calculating flood costs for housing areas, retail establishments, and agricultural areas of the Mexican Republic, based on the use and application of geomatic tools, and useful to the public, educational, and private sectors. To analyze hydrometeorological impacts and build on the obtained results, the geoinformatic tool was constructed from two different points of view: the geoinformatic one (design and development of GIS software) and the methodology of flood damage validation, so as to integrate a tool that provides the user with a monetary estimate of the effects caused by floods. With information from the period 2000-2014, the functionality of the application was corroborated. For the years 2000 to 2009, only the analysis of agricultural and housing areas was carried out; for commercial establishments, information from the period 2010-2014 was incorporated. The method proposed in this research project is a fundamental contribution to society, in addition to the tool itself. It can therefore be summarized that problems of the physical-geographical environment, conceived from the point of view of spatial analysis, allow different solution alternatives to be offered and also open paths for academia and research.
Keywords: floods, technological innovation, monetary estimation, spatial analysis
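At its core, the monetary estimate such a tool produces is a sector-by-sector aggregation. The toy sketch below shows only the shape of that calculation; the sectors, affected-unit counts, and unit costs are invented placeholders, not the calibrated values validated in the study.

```python
# (sector, affected units, cost per unit in local currency) - placeholders
damage_records = [
    ("housing",     120, 350.0),
    ("commerce",     40, 900.0),
    ("agriculture", 300,  80.0),
]
total = sum(units * unit_cost for _, units, unit_cost in damage_records)
for sector, units, unit_cost in damage_records:
    print(f"{sector:<12} {units * unit_cost:>10,.0f}")
print(f"{'total':<12} {total:>10,.0f}")
```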
Procedia PDF Downloads 224
67 Choking among Babies, Toddlers and Children with Special Needs: A Review of Mechanisms, Implications, Incidence, and Recommendations of Professional Prevention Guidelines
Authors: Ella Abaev, Shany Segal, Miri Gabay
Abstract:
Background: Choking is a blockage of the airways that prevents efficient breathing and air flow to the lungs. Choking may be partial or full and is an emergency situation; complete or prolonged choking leads to apnea and a lack of oxygen in the tissues of the body and brain, and can cause death. There are three mechanisms of choking: obstruction of the internal respiratory tract by food or object aspiration; any material that blocks or covers the external air passages; and external pressure on the neck or entrapment between objects. Children's airways are narrower than those of adults, so the risk of choking due to the aspiration of food and other foreign bodies into the lungs is greater. The Child Development Center at Safra Children’s Hospital, Tel Hashomer, Israel, treats infants, toddlers, and children aged 0-18 years with various developmental disabilities. Due to the increase in reports of ‘almost an event’ of choking in the past year, and the serious consequences of a choking event, it was decided to give emphasis to the issue. Incidence and methods: The number of reports of an ‘almost an event’ or an actual choking event at the center during the years 2013-2018 was examined, and thorough research was conducted on the subject in order to build a prevention program. Findings: Between 2013 and 2018, the center reported about ten cases of ‘almost choking events’; in the middle of 2018 alone, three such cases were reported. Objective: Providing knowledge leads to raised awareness, a change of perception, a change in behavior, and prevention. The center employs more than 130 staff members from various sectors, so promoting the quality and safety of treatment is the work of multi-professional teams. Staff familiarity with risk factors, prevention guidelines, identification of choking signs, and treatment is most important and significant in determining the outcome of a choking event. Conclusions and recommendations: In-depth research was carried out in cooperation with the Risk Management Unit on the subject of choking, covering definitions, mechanisms, risk factors, treatment methods, and extensive recommendations for prevention (e.g., using treatment and stimulation accessories bearing standards association stamps, and adjusting the type of food and the way it is served to match the child's age and swallowing ability). The expected stages of development, with emphasis on the population of children with special needs, were taken into account. The research findings will be disseminated to staff and the parents of patients through professional publications and lectures, with the expectation of decreasing the number of choking events in the coming years.
Keywords: children with special needs, choking, educational system, prevention guidelines
Procedia PDF Downloads 179
66 Nature of Body Image Distortion in Eating Disorders
Authors: Katri K. Cornelissen, Lise Gulli Brokjob, Kristofor McCarty, Jiri Gumancik, Martin J. Tovee, Piers L. Cornelissen
Abstract:
Recent research has shown that body size estimation by healthy women is driven by independent attitudinal and perceptual components. The attitudinal component represents psychological concerns about the body, coupled to low self-esteem and a tendency towards depressive symptomatology, leading to over-estimation of body size independent of the Body Mass Index (BMI) someone actually has. The perceptual component is a normal bias known as contraction bias, which, for bodies, is dependent on actual BMI: women with a BMI less than the population norm tend to overestimate their size, while women with a BMI greater than the population norm tend to underestimate their size; women whose BMI is close to the population mean are most accurate. This is indexed by a regression of estimated BMI on actual BMI with a slope less than one. It is well established that body dissatisfaction, i.e. an attitudinal distortion, leads to body size overestimation in eating disordered individuals. However, debate persists as to whether women with eating disorders may also suffer a perceptual body distortion. The current study therefore asked whether women with eating disorders exhibit the normal contraction bias when they estimate their own body size. If they do not, this would suggest differences in the way that women with eating disorders process the perceptual aspects of body shape and size in comparison to healthy controls. 100 healthy controls and 33 women with a history of eating disorders were recruited. Critically, it was ensured that both groups of participants represented comparable and adequate ranges of actual BMI (e.g. ~18 to ~40). Of those with eating disorders, 19 had a history of anorexia nervosa, 6 bulimia nervosa, and 8 OSFED. 87.5% of the women with a history of eating disorders self-reported that they were either recovered or recovering, and 89.7% of them self-reported one or more instances of relapse. The mean time elapsed since first diagnosis was 5 years, and on average participants had experienced two relapses. Participants were asked to complete a number of psychometric measures (EDE-Q, BSQ, RSE, BDI) to establish the attitudinal component of their body image, as well as their tendency to internalize socio-cultural body ideals. Additionally, participants completed a method-of-adjustment psychophysical task, using photorealistic avatars calibrated for BMI, to provide an estimate of their own body size and shape. The data from the healthy controls replicate previous findings, revealing independent contributions to body size estimation from both attitudinal and perceptual (i.e. contraction bias) body image components, as described above. For the eating disorder group, once the adequacy of their actual BMI range was established, a regression of estimated BMI on actual BMI had a slope greater than 1, significantly different from that of the controls. This suggests that (some) eating disordered individuals process the perceptual aspects of body image differently from healthy controls. It is therefore necessary to develop interventions specific to the perceptual processing of body shape and size for the management of (some) individuals with eating disorders.
Keywords: body image distortion, perception, recovery, relapse, BMI, eating disorders
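The slope criterion is easy to picture with synthetic data: a slope below 1 in the regression of estimated on actual BMI reproduces the healthy contraction bias, while a slope above 1 mirrors the eating-disorder finding. The slopes (0.7 and 1.3) and noise level below are invented for illustration, not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
actual = rng.uniform(18, 40, 100)   # actual BMI range comparable to ~18-40
pop_mean = 25.0                     # assumed population-mean BMI

# Estimates regress toward (slope < 1) or away from (slope > 1) the mean
controls = pop_mean + 0.7 * (actual - pop_mean) + rng.normal(0, 1.5, 100)
ed_group = pop_mean + 1.3 * (actual - pop_mean) + rng.normal(0, 1.5, 100)

for label, est in [("controls", controls), ("ED group", ed_group)]:
    slope, intercept = np.polyfit(actual, est, 1)
    print(f"{label}: estimated BMI = {slope:.2f} x actual + {intercept:.1f}")
```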
Procedia PDF Downloads 6865 Different Types of Bismuth Selenide Nanostructures for Targeted Applications: Synthesis and Properties
Authors: Jana Andzane, Gunta Kunakova, Margarita Baitimirova, Mikelis Marnauza, Floriana Lombardi, Donats Erts
Abstract:
Bismuth selenide (Bi₂Se₃) is known as a narrow band gap semiconductor with pronounced thermoelectric (TE) and topological insulator (TI) properties. Unique TI properties offer exciting possibilities for fundamental research, such as observing the exciton condensate and Majorana fermions, as well as practical applications in spintronics and quantum information. In turn, the TE properties of this material can be applied to a wide range of thermoelectric applications, as well as to broadband photodetectors and near-infrared sensors. Nanostructuring of this material results in improved TI properties, due to suppression of the bulk conductivity, and enhanced TE properties, because of increased phonon scattering at the nanoscale grains and interfaces. Regarding TE properties, the crystallographic growth direction, as well as the orientation of the nanostructures relative to the growth substrate, plays a significant role in improving the TE performance of the nanostructured material. For instance, Bi₂Se₃ layers consisting of randomly oriented nanostructures, and/or of a combination of these with planar nanostructures, show significantly enhanced TE properties in comparison with bulk material and purely planar Bi₂Se₃ nanostructures. In this work, a catalyst-free vapour-solid deposition technique was applied for the controlled growth of different types of Bi₂Se₃ nanostructures and continuous nanostructured layers for targeted applications. For example, separated Bi₂Se₃ nanoplates, nanobelts and nanowires can be used for investigations of TI properties, while layers consisting of merged planar and/or randomly oriented Bi₂Se₃ nanostructures are useful for applications in heat-to-power conversion devices and infrared detectors. The vapour-solid deposition was carried out using a quartz tube furnace (MTI Corp) equipped with an inert gas supply and a pressure/temperature control system. Bi₂Se₃ nanostructures and nanostructured layers of the desired type were obtained by adjusting the synthesis parameters (process temperature, deposition time, pressure, carrier gas flow) and selecting the deposition substrate (glass, quartz, mica, indium-tin-oxide, graphene and carbon nanotubes). The morphology, structure and composition of the obtained Bi₂Se₃ nanostructures and nanostructured layers were inspected using SEM, AFM, EDX and HRTEM techniques, as well as a home-built experimental setup for thermoelectric measurements. It was found that introducing a temporary carrier gas flow into the process tube during synthesis, together with the choice of deposition substrate, significantly influences the nanostructure formation mechanism. The electrical, thermoelectric, and topological insulator properties of the different types of deposited Bi₂Se₃ nanostructures and nanostructured coatings are characterized as a function of thickness and discussed.
Keywords: bismuth selenide, nanostructures, topological insulator, vapour-solid deposition
Procedia PDF Downloads 23164 Audit of Intraoperative Ventilation Strategy in Prolonged Abdominal Surgery
Authors: Prabir Patel, Eugene Ming Han Lim
Abstract:
Introduction: Current literature shows that postoperative pulmonary complications following abdominal surgery may be reduced by using lower than conventional tidal volumes intraoperatively, together with moderate levels of positive end expiratory pressure (PEEP). Recent studies demonstrated a significant reduction in major complications in elective abdominal surgery through the use of lower tidal volumes (6-8 ml/kg predicted body weight), PEEP of 5 cmH2O, and recruitment manoeuvres, compared to higher ‘conventional’ volumes (10-12 ml/kg PBW) without lung recruitment. Our objective was to retrospectively audit current practice for patients undergoing major abdominal surgery in Sir Charles Gairdner Hospital. Methods: Patients over 18 undergoing elective general surgery lasting more than 3 hours and intubated for the duration of the procedure were included in this audit. Data were collected over a 6 month period. Patients who had hepatic surgery, procedures necessitating one-lung ventilation, or a documented history of pulmonary or intracranial hypertension were excluded. Results: 58 suitable patients were identified and notes were available for 54 patients. Key findings: Average peak airway pressure was 21 cmH2O (±4); peak airway pressure was less than 30 cmH2O in all patients, and less than 25 cmH2O in 80% of cases. PEEP was used in 81% of cases; where PEEP was used, 75% used PEEP of 5 cmH2O or more. Average tidal volume per actual body weight was 7.1 ml/kg (±1.6). Average tidal volume per predicted body weight (PBW) was 8.8 ml/kg (±1.5). Tidal volume was less than 10 ml/kg PBW in 90% of cases and within 6-8 ml/kg PBW in 40% of cases. There was no recorded use of recruitment manoeuvres in any case. Conclusions: In the vast majority of patients undergoing prolonged abdominal surgery, a lung protective strategy using moderate levels of PEEP, peak airway pressures of less than 30 cmH2O, and tidal volumes of less than 10 ml/kg PBW was utilised. A recent randomised controlled trial demonstrated benefit from utilising even lower volumes (6-8 ml/kg), based on findings in critical care patients, but this was compared to volumes of 10-12 ml/kg. Volumes of 6-8 ml/kg PBW were utilised in 40% of cases in this audit. Although theoretically beneficial, the clinical benefit of volumes lower than current practice in this institution remains to be seen. The incidence of pulmonary complications was much lower than in the cited studies, and a larger data set would be required to investigate any benefit from lower tidal volume ventilation. The volumes used are comparable to results from published local and international data, but PEEP utilisation was higher in this audit. Strategies that may be implemented to ensure and maintain best practice include pre-operative recording of predicted body weight, adjustment of default ventilator settings, and education/updates on current evidence.
Keywords: anaesthesia, intraoperative ventilation, PEEP, tidal volume
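For reference, predicted body weight in such audits is conventionally computed with the ARDSNet formula (not restated in the abstract), from which the 6-8 ml/kg PBW target range follows directly; the height used below is just an example.

```python
def pbw_kg(height_cm: float, male: bool) -> float:
    """ARDSNet PBW: 50 kg (men) / 45.5 kg (women) + 0.91 kg per cm over 152.4 cm."""
    return (50.0 if male else 45.5) + 0.91 * (height_cm - 152.4)

pbw = pbw_kg(170.0, male=True)                 # ~66 kg for a 170 cm man
print(f"PBW = {pbw:.1f} kg -> target VT {6 * pbw:.0f}-{8 * pbw:.0f} ml")
```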
Procedia PDF Downloads 76563 DeepNIC a Method to Transform Each Tabular Variable into an Independant Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL be the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2x2 matrix using their correlation proximity; in doing so, one obtains an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which disobeys Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solution. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super parameters used in the Neurops. By varying these 2 super parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine the 10 NICs with the functions AND, OR, and XOR; the total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
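The rendering step - probabilities to grey levels, one image per variable - can be sketched independently of the ROP/NIC machinery. The snippet below uses random placeholder probability maps and deliberately small, illustrative shapes (the real images are at least 1166x1167 pixels and encode the 10 NICs together with their logical combinations).

```python
import numpy as np

rng = np.random.default_rng(0)
n_nics, grid = 10, 32                        # 10 criteria, toy 32x32 map each
nic_maps = rng.random((n_nics, grid, grid))  # placeholder NIC probabilities

# Tile the 10 maps into a 2x5 mosaic and convert probabilities to grey levels
rows = [np.concatenate(nic_maps[i:i + 5], axis=1) for i in (0, 5)]
image = (np.concatenate(rows, axis=0) * 255).astype(np.uint8)
print(image.shape)                           # (64, 160): one image per variable
```

A stack of such per-variable images is then what a basic CNN would be trained on for supervised classification.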
Procedia PDF Downloads 125
62 The Symbolic Power of the IMF: Looking through Argentina’s New Period of Indebtedness
Authors: German Ricci
Abstract:
The research aims to analyse the symbolic power of the International Monetary Fund (IMF) in its relationship with a borrowing country, drawing upon Pierre Bourdieu's field theory. This theory of power, typical of constructivist structuralism, has seldom been used in international relations; selecting this perspective therefore offers a new understanding of how the IMF's power operates and is structured. The IMF conducts periodic economic reviews in which the staff evaluates the government's performance. It also offers 'last resort' loans when private external credit is not accessible. This relationship generates great expectations among financial agents because the IMF's statements indicate the capacity of the nation-state to meet its payment obligations (or not). Therefore, it is argued that the IMF is a legitimate actor for financial agents concerned about a government facing an economic crisis, both for the effects of its immediate economic contribution through loans and for its promotion of adjustment programs, which help guarantee the payment of external debt. This legitimacy implies a symbolic power relationship in addition to the already known economic power relationship. Obtaining the IMF's consent implies that the government partially puts its political-economic decisions into play, since monetary policy must be agreed upon with the Fund. This has consequences at the local level. First, it implies that the debtor state must establish a daily relationship with the Fund; this everyday interaction influences how officials and policymakers internalize the meaning of political management. On the other hand, if the government obtains the IMF's seal of approval, the state will again be in a position to re-enter the financial market and take on new debt to service its external debt. This means that private creditors increase their chances of collecting the debt and, again, grant credit. Thus, it is argued that the borrowing country submits to the relationship with the IMF in search of the latter's economic and symbolic capital. Access to this symbolic capital has objective and subjective repercussions at the national level that tend to reproduce the relevance of the financial market and legitimize the IMF's intervention during economic crises. The paper has Argentina as its case study, given its historical relationship with the IMF and the relevance of the current indebtedness period, which remains largely unexplored. Argentina's economy is characterized by recurrent financial crises, and it is the country to which the Fund has lent the most in its entire history, more than three times as much as the second, Egypt. In addition, Argentina is currently the country that owes the most to the Fund, after receiving the largest loan ever granted by the IMF in 2018 and a new agreement in 2022. While the historically strong association with the Fund culminated in the most acute economic and social crisis in the country's contemporary history, producing an unprecedented political and institutional crisis in 2001, Argentina still recognized the IMF as the only way out during economic crises. Keywords: IMF, field theory, symbolic power, Argentina, Bourdieu
Procedia PDF Downloads 71
61 A Formal Microlectic Framework for Biological Circularchy
Authors: Ellis D. Cooper
Abstract:
"Circularchy" is intended as an adjustable formal framework with enough expressive power to articulate biological theory about Earthly Life, in the sense of multi-scale biological autonomy constrained by non-equilibrium thermodynamics. "Formal framework" means, specifically, a multi-sorted first-order theory with equality (for each sort). Philosophically, such a theory is one kind of "microlect," which means a "way of speaking" (or, more generally, a "way of behaving") for overtly expressing a "mental model" of some "referent." Other kinds of microlect include "natural microlect," "diagrammatic microlect," and "behavioral microlect," with examples such as "political theory," "Euclidean geometry," and "dance choreography," respectively. These are all describable in terms of a vocabulary conforming to a grammar. As aspects of human culture, they are reminiscent of Ernst Cassirer's idea of "symbolic form"; as vocabularies, they are akin to Richard Rorty's idea of a "final vocabulary" for expressing a mental model of one's life. A formal microlect is presented by stipulating sorts, variables, calculations, predicates, and postulates. Calculations (a.k.a. "terms") may be composed to form more complicated calculations; predicates (a.k.a. "relations") may be logically combined to form more complicated predicates; and statements (a.k.a. "sentences") are grammatically correct expressions which are true or false. Conclusions are statements derived, using logical rules of deduction, from postulates, other assumed statements, or previously derived conclusions. A circularchy is a formal microlect constituted by two or more sub-microlects, each with its distinct stipulations of sorts, variables, calculations, predicates, and postulates. Within a sub-microlect, some postulates or conclusions are equations, which are statements that declare the equality of specified calculations. An equational bond between an equation in one sub-microlect and an equation in either the same or another sub-microlect is a predicate that declares the equality of symbols occurring in a side of one equation with symbols occurring in a side of the other. Briefly, a circularchy is a network of equational bonds between sub-microlects. A circularchy is solvable if there exist solutions for all equations that satisfy all equational bonds. If a circularchy is not solvable, then a challenge would be to discover the obstruction to solvability and conjecture what adjustments might remove it. Adjustment means changes in the stipulated ingredients (sorts, etc.) of sub-microlects, changes in the equational bonds between sub-microlects, or the introduction of new sub-microlects and new equational bonds. A circularchy is modular insofar as each sub-microlect is a node in a network of equational bonds. Solvability of a circularchy may be conjectured; efforts to prove solvability may be thwarted by a counter-example or may lead to the construction of a solution. An automated proof assistant would likely be necessary for investigating a substantial circularchy, such as one purported to represent Earthly Life. Such investigations (chains of statements) would be concurrent with, and no substitute for, simulations (chains of numbers). Keywords: autonomy, first-order theory, mathematics, thermodynamics
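As a toy illustration of the definitions above (my own construction, not the author's formalism), the following Python/SymPy sketch encodes two sub-microlects as lists of equations, links them with equational bonds that identify symbols across the sub-microlects, and checks solvability by solving the combined system.

```python
# Toy "circularchy": two sub-microlects of equations joined by equational
# bonds; solvable iff the combined system has a solution. Illustrative only.
import sympy as sp

x, y, u, v = sp.symbols("x y u v")

# Sub-microlect A: a balance-style equation (contents invented)
eqs_A = [sp.Eq(x + y, 10)]
# Sub-microlect B: a constraint-style equation (contents invented)
eqs_B = [sp.Eq(u - v, 2)]
# Equational bonds: identify symbols across the two sub-microlects
bonds = [sp.Eq(y, u), sp.Eq(x, v)]

solution = sp.solve(eqs_A + eqs_B + bonds, [x, y, u, v], dict=True)
print(solution or "no solution: an obstruction to solvability")
# -> [{x: 4, y: 6, u: 6, v: 4}], so this circularchy is solvable
```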
Procedia PDF Downloads 220
60 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal amounts of waste. Proper management of coffee waste can be achieved by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. Research on the recovery of coffee waste has therefore become increasingly relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste stream of the coffee industry. The fact that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture, and, especially, have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production via the transesterification process. However, the conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scaling up the process and ensuring more environment-friendly production. From this perspective, the aim of this work was to statistically identify an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated using a 2³ central composite rotatable design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. Validation of the model by analysis of variance (ANOVA) showed a good fit for the results obtained at a 95% confidence level, and the plot of predicted vs. experimental values confirmed the satisfactory correlation of the model results. In addition, the optimum experimental conditions were identified from the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent; under these conditions, the oil yield was >9% in all cases. The results confirmed that an ultrasound bath is a more economical, green, and efficient way of extracting oil compared to the Soxhlet method. Keywords: coffee waste, optimization, oil yield, statistical planning
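For readers unfamiliar with the design, the following Python sketch shows the generic shape of a 2³ CCRD analysis: build the coded design (factorial, axial, and center points), then fit the quadratic response-surface model by ordinary least squares. The data are simulated, and the study's actual measurements and software (STATISTICA 7) are not reproduced here.

```python
# Minimal sketch (simulated data, not the study's) of fitting the quadratic
# response-surface model behind a 2^3 central composite rotatable design:
# y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj).
import numpy as np
import statsmodels.api as sm
from itertools import product

alpha = 2 ** (3 / 4)  # rotatable axial distance for 3 factors (~1.682)
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)
axial = np.vstack([a * np.eye(3)[i] for i in range(3) for a in (-alpha, alpha)])
center = np.zeros((6, 3))  # replicated center points
X = np.vstack([factorial, axial, center])  # coded levels of T, time, ratio

def quad_terms(X):
    """Linear, squared, and interaction columns for the quadratic model."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

rng = np.random.default_rng(1)
# Invented oil-yield response around 9% with some curvature and noise
y = 9 + 0.5 * X[:, 0] + 0.8 * X[:, 1] - 0.4 * X[:, 1]**2 + rng.normal(0, 0.2, len(X))

model = sm.OLS(y, sm.add_constant(quad_terms(X))).fit()
print(model.summary())  # ANOVA-style fit statistics, R^2, p-values
```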
Procedia PDF Downloads 119
59 Examining the Independent Effects of Early Exposure to Game Consoles and Parent-Child Activities on Psychosocial Development
Authors: Rosa S. Wong, Keith T. S. Tung, Frederick K. Ho, Winnie W. Y. Tso, King-wa Fu, Nirmala Rao, Patrick Ip
Abstract:
As technology advances, exposures in early childhood are no longer confined to stimulation in the surrounding physical environment; children nowadays are also subject to influences from the digital world. In particular, early access to game consoles can pose risks to child development, especially when the games are not developmentally appropriate for young children: overstimulation is possible and could impair brain development. On the other hand, recreational parent-child activities, including outdoor activities and visits to museums, require the child to interact with parents, which is beneficial for developing adaptive emotion regulation and social skills. Given the differences between these two types of exposure, this study investigated and compared the independent effects of early exposure to a game console and early play-based parent-child activities on children's long-term psychosocial outcomes. This study used data from a subset of children (n=304, 142 male and 162 female) in a longitudinal cohort study of the long-term impact of family socioeconomic status on child development. In 2012/13, we recruited a group of children at Kindergarten 3 (K3), randomly sampled from Hong Kong local kindergartens, and collected data on their duration of exposure to game consoles and recreational parent-child activities at that time. In 2018/19, we re-surveyed the parents of these children, by then Form 1 (F1) students (ages 11 to 13 years) in secondary schools, and asked them to rate their children's psychosocial problems in F1. Linear regressions were conducted to examine the associations between early exposures and adolescent psychosocial problems, with and without adjustment for child gender and K3 family socioeconomic status. On average, K3 children spent about 42 minutes on a game console every day and had 2-3 recreational activities with their parents every week. Univariate analyses showed that more time spent on game consoles at K3 was associated with more psychosocial difficulties in F1, particularly externalizing problems. The effect of early exposure to game consoles on externalizing behavior remained significant (B=0.59, 95%CI: 0.15 to 1.03, p=0.009) after adjusting for recreational parent-child activities and child gender. For recreational parent-child activities at K3, the effect on overall psychosocial difficulties became non-significant after adjusting for early exposure to game consoles and child gender; however, a significant protective effect on externalizing problems (B=-0.65, 95%CI: -1.23 to -0.07, p=0.028) persisted after adjusting for the confounders. Early exposure to game consoles has a negative impact on children's psychosocial health, whereas play-based parent-child activities foster positive psychosocial outcomes. More effort should be directed to communicating the risks and benefits of these activities and urging parents and caregivers to replace child-alone screen time with parent-child play time in the daily routine. Keywords: early childhood, electronic device, parenting, psychosocial wellbeing
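A hedged sketch of the kind of adjusted linear regression reported follows. The data are simulated; the variable names are invented, and the effect sizes are loosely seeded from the coefficients quoted above, not from the study's dataset.

```python
# Illustrative sketch: externalizing score ~ console time, adjusting for
# parent-child activities and child gender (statsmodels OLS, simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 304
df = pd.DataFrame({
    "console_hrs": rng.gamma(2.0, 0.35, n),   # daily console use at K3 (~0.7 h mean)
    "pc_activities": rng.poisson(2.5, n),     # weekly parent-child activities
    "male": rng.integers(0, 2, n),
})
df["externalizing"] = (0.59 * df.console_hrs - 0.65 * df.pc_activities
                       + 0.3 * df.male + rng.normal(0, 2, n))

fit = smf.ols("externalizing ~ console_hrs + pc_activities + male", df).fit()
print(fit.params, fit.conf_int(), sep="\n")  # B estimates with 95% CIs
```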
Procedia PDF Downloads 167
58 Association of Depression with Physical Inactivity and Time Watching Television: A Cross-Sectional Study with the Brazilian Population, PNS 2013
Authors: Margareth Guimaraes Lima, Marilisa Berti A. Barros, Deborah Carvalho Malta
Abstract:
The relationship between physical activity (PA) and depression has been investigated in both observational and clinical studies: PA can be part of treatments for depression; physical inactivity (PI) may contribute to increased depressive symptoms; and, on the other hand, emotional problems can decrease PA. The aim of this study was to analyze the association of leisure-time and transportation physical inactivity and time watching television (TV) with depression (minor and major), evaluated with the Patient Health Questionnaire (PHQ-9). The association was also analyzed by gender. This is a cross-sectional study. Data were obtained from the National Health Survey 2013 (PNS), performed with a representative sample of the Brazilian adult population in 2013. The PNS collected information from 60,202 individuals aged 18 years or more. The variables analyzed were: leisure-time physical inactivity (LTPI), classifying as inactive or insufficiently active (categories combined for analysis) those who did not perform a minimum of 150 or 75 minutes per week of moderate or vigorous leisure-time PA, respectively; transportation physical inactivity (TPI), for individuals who did not reach 150 minutes per week travelling by bicycle or on foot to work or other activities; and daily time watching TV > 5 hours. The principal independent variable was depression, identified by the PHQ-9. Individuals were classified as having major depression if they had > 5 symptoms on more than seven days, with at least one of the symptoms being "depressive mood" or "lack of interest or pleasure"; the others had minor depression. The variables used for adjustment were gender, age, schooling, and chronic disease. The prevalences of LTPI, TPI, and long TV time were estimated according to depression, and differences were tested with the Chi-square test. Adjusted prevalence ratios were estimated using multiple Poisson regression models, and the analyses were also stratified by gender. The mean age of the studied population was 42.9 years (95% CI: 42.6-43.2) and 52.9% were women. 77.5% and 68.1% were inactive or insufficiently active in leisure and transportation, respectively, and 13.3% spent > 5 hours a day watching TV. 6% and 4.1% of the Brazilian population were diagnosed with minor or major depression, respectively. LTPI prevalence was 5% and 9% higher among individuals with minor and major depression, respectively, compared with no depression. The prevalence of TPI was 7% higher in those with major depression. For long time watching TV, the prevalence was 45% and 74% higher among those with minor and major depression, respectively. In gender-stratified analyses, the associations were stronger in men than in women, and TPI was not associated with depression in women. The study detected higher prevalences of leisure-time physical inactivity and, especially, of time spent watching TV among individuals with major and minor depression, after adjusting for a number of potential confounding factors. TPI was associated only with major depression and only among men. Considering the cross-sectional design of the research, these associations point to the importance of managing mental health problems in the population in order to increase PA and reduce sedentary lifestyles; on the other hand, the study highlights the need for interventions encouraging people with depression to practice PA, including for transportation. Keywords: depression, physical activity, PHQ-9, sedentary lifestyle
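The adjusted prevalence ratios described can be estimated with a Poisson model and robust standard errors, a common approach for binary outcomes in cross-sectional data. The following Python sketch shows that approach on simulated data; the variable names are illustrative rather than the PNS codes, and this simple version ignores the survey's complex sampling design.

```python
# Sketch (simulated data): adjusted prevalence ratios via Poisson regression
# with robust (HC1) standard errors; exp(coef) = prevalence ratio.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "major_depression": rng.binomial(1, 0.041, n),
    "male": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
})
# Binary outcome: leisure-time physical inactivity, slightly more prevalent
# among those with major depression (invented effect for the example)
p = 0.775 * (1 + 0.09 * df.major_depression)
df["lt_inactive"] = rng.binomial(1, np.clip(p, 0, 1))

fit = smf.glm("lt_inactive ~ major_depression + male + age", df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))  # exponentiated coefficients = prevalence ratios
```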
Procedia PDF Downloads 156
57 Adaptability in Older People: A Mixed Methods Approach
Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger
Abstract:
Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living in a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is also a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes within their daily routine? How long does it take older people to adapt? The study was designed as a convergent parallel mixed-methods intervention study, carried out over a four-month period within seven wards of a long-term care hospital. As a planned intervention, a change of meal times was established. The residents were surveyed with qualitative interviews, quantitative questionnaires, and diaries before, during, and after the intervention. In addition, a survey of the nursing staff was carried out to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the involved residents was 82 years, and the average length of stay was 45 months. Adaptation to new situations did not cause problems for the older residents: 47% of the residents stated that their everyday life had not changed with the change of meal times, 24% answered "neither nor", and only 18% responded that their daily life had changed considerably due to the changeover. The diaries of the residents, kept over the entire period of investigation, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, there was little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase) in the cross-tabulation, and the subjective sleep quality of the residents was not affected. The nursing staff pointed out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or on moving into a long-term care facility; it takes only a few days to get used to new situations, which the nursing staff confirmed. There are, however, determinants, such as health status, that might make adjustment to new situations more difficult. Regarding limitations, the small sample size of the quantitative data collection must be emphasized, as must the question of how well the quantitative and qualitative samples represent the total population, since only residents without cognitive impairment from selected units participated, while the majority of residents have cognitive impairments. It is also important to discuss whether, and how well, the diary method is suitable for examining the daily structure of older people. Keywords: adaptability, intervention study, mixed methods, nursing home residents
Procedia PDF Downloads 148
56 Lessons Learnt from Tutors’ Perspectives on Online Tutorial Policies in an Open and Distance Education Institution
Authors: Durri Andriani, Irsan Tahar, Lilian Sarah Hiariey
Abstract:
Every institution has to develop, implement, and control its policies to ensure its effectiveness. In doing so, all related stakeholders have to be involved to maximize the benefits of the policies and minimize potential constraints and resistance. Open and distance education (ODE) institutions are no different. As education institutions, ODE institutions have to focus their attention on fulfilling the academic needs of their students through open and distance measures, one of which is a quality learning support system. Significant stakeholders in the learning support system are tutors, since they are the ones who communicate directly with students. Tutors are commonly seen as objects whose main responsibility is limited to implementing policies decided by management in ODE institutions. Nonetheless, tutors' perceptions of tutorials are believed to influence their performance in facilitating learning support. It is therefore important to analyze tutors' perceptions of various aspects of learning support. This paper presents an analysis of tutors' perceptions of tutorial policies in an ODE institution, using the Policy Analysis Framework (PAF) modified by King, Nugent, Russell, and Lacy. The focus of this paper is on online tutors, those who provide tutorials via the Internet; online tutors were chosen to stress the increasingly important use of the Internet in ODE systems. The research was conducted at Universitas Terbuka (UT), Indonesia. UT was purposely selected because of the large number (1,234) of courses it offers and its large coverage area (6,000 inhabited islands). This places UT in a unique position, where the learning support system has, to some extent, to be standardized while at the same time catering to the needs of different courses in different places for students with different backgrounds. All 598 listed online tutors were sent the research questionnaires; around 20% of the email addresses could not be reached. Tutors were asked to fill out open-ended questionnaires on their perceptions of the definition of an online tutorial, the roles of tutors and students in online tutorials, requirements for online tutors, learning materials, and student evaluation in online tutorials. The data were gathered from the 40 online tutors who sent back filled-out questionnaires and were analyzed qualitatively using content analysis. The results showed that using PAF as an entry point, with learning support services as the policy area and delivery of learning materials as the issue at UT, provided new insights into the aspects that need to be considered in formulating policies on online tutorials and on learning support services more broadly. Involving tutors as a source of information proved to be productive. In general, tutors had a clear understanding of the definition of online tutorials, the roles of tutors and of students, and the requirements for tutors. Tutors simply need to be more involved in policy formulation, since they can provide data on students and on problems faced in online tutorials. However, tutors suggested adjustments to student evaluation, which, according to them, focuses too much on administrative aspects and is subjective. Keywords: distance education, on-line tutorial, tutorial policy, tutors’ perspectives
Procedia PDF Downloads 253
55 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy
Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai
Abstract:
Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential under circumstances of insufficient structure-function knowledge. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedures such as transfection are needed; the open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, the capabilities of current screening methods have limitations. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label's binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turn-around time of 30 minutes per sample by HPLC limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is in high demand. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations; these characteristic vibrational signals are generated by the different vibrational modes of chemical bonds. With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, CARS can magnify signal levels by several orders of magnitude compared with spontaneous Raman systems and therefore has the potential to evaluate chemical concentrations rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated by the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH, and this measurement result was consistent with the result of a conventional method, UV-Vis. CARS is expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time. Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy
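Since the CARS signal intensity scales with NADH concentration, quantification reduces to a linear calibration step. The following Python sketch illustrates that step with invented numbers; it is not the authors' processing pipeline, and the calibration values and kinetics are synthetic.

```python
# Illustrative sketch (synthetic numbers): quantifying NADH from the CARS
# band near 1660 cm^-1 via a linear calibration, then following the alcohol
# dehydrogenase (ADH) reaction time course at one readout per 0.33 s.
import numpy as np

# Calibration: known NADH standards (mM) vs integrated peak intensity (a.u.)
conc_std = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
signal_std = np.array([0.02, 0.26, 0.51, 1.01, 2.03])
slope, intercept = np.polyfit(conc_std, signal_std, 1)

def signal_to_conc(signal: np.ndarray) -> np.ndarray:
    """Invert the linear calibration to get NADH concentration in mM."""
    return (signal - intercept) / slope

# Synthetic time course: NADH builds up as ethanol is oxidized
t = np.arange(0, 30, 0.33)
observed = 10.0 * (1 - np.exp(-t / 8)) * slope + intercept
print(signal_to_conc(observed)[:5])  # first few NADH concentrations, mM
```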
Procedia PDF Downloads 141
54 Real and Symbolic in Poetics of Multiplied Screens and Images
Authors: Kristina Horvat Blazinovic
Abstract:
In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist through various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space, or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition, i.e., a hidden, latent term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated; something else is repeated through them as well, even if, in some cases, it is the very idea of repetition that is repeated. This paper examines serial images and single-channel or multi-channel artworks in the field of video/film art and video installations, which in one way or another imply the concepts of repetition and multiplication: moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic refer in part to the Lacanian registers, the Imaginary - Symbolic - Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette, Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication-repetition, not in the sense of "copying" or "repeating" reality or an original, but of repeated repetitions of the simulacrum. The referential works of art are often connected by the theme of the traumatic: repetitions of images and situations are a response to the traumatic (experience), repetition itself being a symptom of trauma; on the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it. Reflections on repetition as a temporal and spatial phenomenon run through the chapters that link philosophical considerations of space, time, and experienced temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as "quality of quantity." Video works intended to be displayed as a video loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate itself in time; wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations of the occurrence and effects of temporal and spatial intervals as places and moments "between": points of connection and separation, of continuity and stopping, with reference to the "interval theory" of Soviet filmmaker Dziga Vertov. The scale of opportunities that can be explored in interval mode is wide: intervals shape the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states. Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time
Procedia PDF Downloads 173
53 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface
Authors: Renata Gerhardt, Detlev Belder
Abstract:
Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, given that it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, possibilities are investigated for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules and provides no structural information. Another often applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is applicable only as end-of-the-line detection, because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers; these droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics that operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs for coupling chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are also ideal vessels for the insertion of silver colloids, exploiting the surface enhancement effect and improving the sensitivity of detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics, employing surface-enhanced Raman spectroscopy as the means of detection. Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS
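One design quantity when tuning eluent and oil flow rates is the droplet volume implied by a given generation frequency. The identity V = Q_dispersed / f is generic to segmented flow; the flow-rate and frequency values in the Python sketch below are assumptions for illustration, not values from the study.

```python
# Back-of-envelope helper (my own illustration, not from the paper): relating
# eluent (dispersed-phase) flow rate, droplet generation frequency, and
# droplet volume in a segmented-flow interface: V_droplet = Q_dispersed / f.
Q_eluent_ul_min = 2.0   # assumed eluent flow rate, uL/min
f_droplets_hz = 5.0     # assumed droplet generation frequency, droplets/s

q_ul_s = Q_eluent_ul_min / 60.0
v_droplet_nl = q_ul_s / f_droplets_hz * 1000.0  # convert uL to nL
print(f"droplet volume ~ {v_droplet_nl:.1f} nL")  # ~6.7 nL for these values
```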
Procedia PDF Downloads 245
52 Nursery Treatments May Improve Restoration Outcomes by Reducing Seedling Transplant Shock
Authors: Douglas E. Mainhart, Alejandro Fierro-Cabo, Bradley Christoffersen, Charlotte Reemts
Abstract:
Semi-arid ecosystems across the globe have faced land conversion for agriculture and resource-extraction activities, posing a threat to the important ecosystem services they provide. Revegetation-centered restoration efforts in these regions face low success rates because limited soil water availability and high temperatures lead to elevated seedling mortality after planting. Typical methods to alleviate these stresses require costly post-planting interventions aimed at improving soil moisture status. We set out to evaluate the efficacy of in-nursery treatments in addressing transplant shock. Four native Tamaulipan thornscrub species were compared. Three treatments were applied: elevated CO2 and drought hardening (four-week exposure each), and an antitranspirant foliar spray (the day prior to planting). Our goal was to answer two primary questions: (1) Do the treatments improve survival and growth of seedlings in the early period post-planting? (2) If so, what underlying physiological changes are associated with this improved performance? To this end, we measured leaf gas exchange (stomatal conductance, light-saturated photosynthetic rate, water-use efficiency), leaf morphology (specific leaf area), and osmolality before treatment and upon its conclusion. A subset of seedlings from all treatments has been planted and will be monitored in the coming months for in-field survival and growth. First-month field survival was high in all treatment groups (>85%) due to ample rainfall following planting. Growth data were unreliable due to high herbivory (68% of all sampled plants). While elevated CO2 had infrequent or no detectable influence on leaf gas exchange, drought hardening reduced stomatal conductance in three of the four species measured without negatively impacting photosynthesis. Both elevated CO2 and drought hardening raised leaf osmolality in two species. Antitranspirant application significantly reduced conductance in all species for up to four days and reduced photosynthesis in two species; antitranspirants also increased the variability of water-use efficiency compared to controls. Collectively, these results suggest that antitranspirants and drought hardening are viable treatments for reducing short-term water loss during the transplant-shock period. Elevated CO2, while not effective at reducing water loss, may be useful for promoting more favorable water status via osmotic adjustment. These practices could improve restoration outcomes in Tamaulipan thornscrub and other semi-arid systems. Further research should focus on evaluating combinations of these treatments and their species-specific viability. Keywords: conservation, drought conditioning, semi-arid restoration, plant physiology
Procedia PDF Downloads 86
51 The Political Economy of Media Privatisation in Egypt: State Mechanisms and Continued Control
Authors: Mohamed Elmeshad
Abstract:
During the mid-1990s, Egypt became obliged to implement the Economic Reform and Structural Adjustment Program, which included broad economic liberalization, expansion of the private sector, and a contraction in government spending. This coincided with attempts to appear more democratic and open to liberalizing public space and discourse. At the same time, economic pressures and the proliferation of social media access and activism led to increased pressure to open up the mediascape and remove it from the clutches of the government, which had monopolized print and broadcast mass media for over four decades by that point. However, the mechanisms that governed the privatization of mass media allowed for sustained government control, even through the prism of ostensibly privately owned newspapers and television stations. These mechanisms involve barriers to entry from financial and security perspectives, as well as control over operational capacities of distribution and access to the means of production. The power dynamics between mass media establishments and the state were moulded during this period in a novel way, as were power dynamics within media establishments, and changes in the country's political economy itself mirrored these developments. This paper examines these dynamics and sheds light on the political economy of Egypt's newly privatized mass media, particularly in the early 2000s. Methodology: This study relies on semi-structured interviews with individuals involved with these changes from the perspective of the media organizations. It also maps out the process of media privatization by looking at the administrative, operative, and legislative institutions and contexts, in order to draw conclusions on methods of control and the role of the state during the process of privatization. Finally, a brief discourse analysis conveys how these factors ultimately shaped media output. Findings and conclusion: The development of Egyptian private, "independent" media mirrored the trajectory of transitions in the country's political economy. Liberalization of the economy meant that a growing class of business owners would explore the opportunities such new markets offered. However, the regime's attempts to control access to certain forms of capital, especially in sectors such as the media, shaped the structure of print and broadcast media, as well as the institutions governing them. Like the process of liberalisation itself, much of the regime's manoeuvring with regard to media privatization served, haphazardly, to indirectly expand the regime's and its ruling party's ability to retain influence while creating a believable facade of openness. In this paper, we attempt to uncover these mechanisms and analyse our findings in ways that explain how the manifestations prevalent in the context of a privatizing media space in a transitional Egypt provide evidence of both the intentions of this transition and the ways in which it was being held back. Keywords: business, mass media, political economy, power, privatisation
Procedia PDF Downloads 227