Search results for: chain code normalization

651 Improvement of Environment and Climate Change Canada’s GEM-Hydro Streamflow Forecasting System

Authors: Etienne Gaborit, Dorothy Durnford, Daniel Deacu, Marco Carrera, Nathalie Gauthier, Camille Garnaud, Vincent Fortin

Abstract:

A new experimental streamflow forecasting system was recently implemented at Environment and Climate Change Canada’s (ECCC) Canadian Centre for Meteorological and Environmental Prediction (CCMEP). It relies on CaLDAS (Canadian Land Data Assimilation System) for the assimilation of surface variables and on a surface prediction system that feeds a routing component. The surface energy and water budgets are simulated with the SVS (Soil, Vegetation, and Snow) land-surface scheme (LSS) at 2.5-km grid spacing over Canada. The routing component is based on the Watroute routing scheme at 1-km grid spacing for the Great Lakes and Nelson River watersheds. The system is run in two distinct phases: an analysis phase and a forecast phase. During the analysis phase, CaLDAS outputs are used to force the routing system, which performs streamflow assimilation. In forecast mode, the surface component is forced with the Canadian GEM atmospheric forecasts and is initialized with a CaLDAS analysis. Streamflow performance of this new system is presented for 2019 and compared to ECCC’s current operational streamflow forecasting system, which differs from the new experimental system in many respects. The new streamflow forecasts are also compared to persistence. Overall, the new streamflow forecasting system presents promising results, highlighting the need for an elaborate assimilation phase before performing the forecasts. However, the system is still experimental and is continuously being improved. Some major recent improvements are presented here and include, for example, the assimilation of snow cover data from remote sensing, a backward propagation of assimilated flow observations, a new numerical scheme for the routing component, and a new reservoir model.

Keywords: assimilation system, distributed physical model, offline hydro-meteorological chain, short-term streamflow forecasts

Procedia PDF Downloads 123
650 Safeguarding Product Quality through Pre-Qualification of Material Manufacturers: A Ship and Offshore Classification Society's Perspective

Authors: Sastry Y. Kandukuri, Isak Andersen

Abstract:

Despite recent advances in the manufacturing sector, quality issues remain a frequent occurrence and can result in fatal accidents, equipment downtime, and loss of life. Adequate quality is of high importance in high-risk industries such as sea-going vessels and offshore installations, in which third-party quality assurance and product control play an essential role in ensuring the manufacturing quality of critical components. Classification societies play a vital role in mitigating risk in these industries by making sure that all stakeholders, i.e., manufacturers, builders, and end users, are provided with adequate rules and standards that effectively ensure components are produced at a level of quality appropriate to the area of application and the risk of failure. Quality issues have also been linked to a lack of competence or negligence of stakeholders in the supply value chain. However, continued actions and regulatory reforms through the modernization of rules and requirements have provided additional tools for purchasers and manufacturers to confront these issues. Included among these tools are updated ‘approval of manufacturer’ class programs aimed at developing and implementing a set of standardized manufacturing quality metrics for use by the manufacturer and verified by the classification society. The establishment and collection of manufacturing and testing requirements described in these programs could provide various stakeholders, from industry to vessel owners, with greater insight into the state of quality at a given manufacturing facility, and allow stakeholders to better anticipate and address quality issues while simultaneously reducing unnecessary failures that are costly to the industry. The publication introduces, explains, and discusses the critical manufacturing and testing requirements set in a leading class society’s approval of manufacturer regime, the rationale behind them, and some case studies.

Keywords: classification society, manufacturing, materials processing, materials testing, quality control

Procedia PDF Downloads 343
649 Relationship of Epidermal Growth Factor Receptor Gene Mutations and Serum Levels of Ligands in Non-Small Cell Lung Carcinoma Patients

Authors: Abdolamir Allameh, Seyyed Mortaza Haghgoo, Adnan Khosravi, Esmaeil Mortaz, Mihan Pourabdollah-Toutkaboni, Sharareh Seifi

Abstract:

Non-small cell lung carcinoma (NSCLC) is associated with a number of gene mutations in the epidermal growth factor receptor (EGFR). The prognostic significance of mutations in exons 19 and 21, together with serum levels of EGFR, amphiregulin (AR), and transforming growth factor-alpha (TGF-α), is implicated in diagnosis and treatment. The aim of this study was to examine the relationship of EGFR mutations in selected exons with the levels of the relevant ligands in serum samples of NSCLC patients. For this, a group of NSCLC patients (n=98; M/F: 75/23; mean age 59±10.5 years) referred to the hospital for lung surgery was enrolled. A blood specimen was collected from each patient, and formalin-fixed, paraffin-embedded tissues were processed for DNA extraction. Gene mutations in exons 19 and 21 were detected by direct sequencing following DNA amplification by polymerase chain reaction (PCR). Serum levels of EGFR, AR, and TGF-α were measured by ELISA. The results of our study show that EGFR mutations were present in 37% of Iranian NSCLC patients. The most frequently identified mutations were deletions in exon 19 (72.2%) and substitutions in exon 21 (27.8%). The most frequently identified alteration, which is considered a rare mutation, was the E872K mutation in exon 21, found in 90% (9 of 10) of these cases. EGFR mutations detected in exon 21 were significantly (P<0.05) correlated with the serum levels of EGFR and TGF-α. Furthermore, it was found that increased serum AR (>3 pg/ml) and TGF-α (>10.5 pg/ml) were associated with shorter overall survival (P<0.05). The results clearly showed a close relationship between EGFR mutations and serum EGFR and serum TGF-α. Increased serum EGFR was associated with TGF-α and AR and linked to poor prognosis of NSCLC. These findings have implications for clinical decision-making related to EGFR tyrosine kinase inhibitors (TKIs).

Keywords: lung cancer, Iranian patients, epidermal growth factor, mutation, prognosis

Procedia PDF Downloads 71
648 Synthesis and Characterization of Anti-Psychotic Drug-Based DNA Aptamers

Authors: Shringika Soni, Utkarsh Jain, Nidhi Chauhan

Abstract:

Aptamers are recently developed artificial oligonucleotides, typically ~80-100 bp long, that have demonstrated applications not only in therapeutics but also, extensively, in diagnostics and sensing for the detection of different biomarkers and drugs. Synthesizing aptamers against proteins or genomic templates is comparatively feasible in the laboratory, but aptamers against drugs or other chemical targets require careful specification, optimization, and validation. All selection, amplification, and characterization steps of the end product have to be optimized, which is extremely time-consuming. Therefore, we performed asymmetric PCR (polymerase chain reaction) to synthesize a random oligonucleotide pool and used it in systematic evolution of ligands by exponential enrichment (SELEX) to generate aptamers against anti-psychotic drugs. Anti-psychotic drugs are major tranquilizers used to control psychosis and support proper cognitive function. Although their medical use is limited, their misuse may lead to severe medical conditions such as addiction and can promote crime, with social and economic impacts. In this work, we applied the in-vitro SELEX method to synthesize ssDNA aptamers against anti-psychotic drugs (in this case, the ‘target’). The study was performed in three stages. The first stage included synthesis of the random oligonucleotide pool via asymmetric PCR, where the end product was analyzed by electrophoresis and purified for further stages. The purified oligonucleotide pool was incubated in SELEX buffer, and partitioning was performed in the next stage to obtain target-specific aptamers. The isolated oligonucleotides were characterized and quantified after each round of partitioning, and significant results were obtained. After the repetitive partitioning and amplification of target-specific oligonucleotides, the final stage included sequencing of the end product. The confirmed sequences specific to the anti-psychotic drugs will be further used in diagnostic applications in clinical and forensic settings.

Keywords: anti-psychotic drugs, aptamer, biosensor, ssDNA, SELEX

Procedia PDF Downloads 127
647 Adsorption of Chlorinated Pesticides in Drinking Water by Carbon Nanotubes

Authors: Hacer Sule Gonul, Vedat Uyak

Abstract:

Intensive use of pesticides in agricultural activity causes these compounds to mix into water sources via surface runoff. Especially after the 1970s, a number of limitations were imposed on the use of chlorinated pesticides that carry a carcinogenic risk, and regulatory limits were established. The discharge of these chlorinated pesticides into water resources, their transport in the aquatic and terrestrial environment, and their accumulation in the human body through the food chain raise serious health concerns. Carbon nanotubes (CNTs) have attracted considerable attention because of their excellent mechanical, electrical, and environmental characteristics. Due to their highly hydrophobic surfaces, CNT particles play a critical role in the removal of contaminants such as natural organic matter, pesticides, and phenolic compounds from water sources. Health concerns associated with chlorinated pesticides require the removal of such contaminants from the aquatic environment. Although the use of aldrin and atrazine has been restricted in our country, illegal entry and the widespread use of such chemicals in agricultural areas increase the concentrations of these chemicals in the water supply. In this study, chlorinated pesticides such as aldrin and atrazine were removed from drinking water by a carbon nanotube adsorption method. Two different types of CNTs were used: single-walled (SWCNT) and multi-walled (MWCNT) carbon nanotubes. Adsorption isotherms were determined, and the parameters affecting the adsorption of chlorinated pesticides in water were considered: pH, contact time, CNT type, CNT dose, and initial pesticide concentration. Under neutral pH conditions, the adsorption capacities obtained with MWCNTs for atrazine and aldrin were determined as 2.24 µg/mg and 3.84 µg/mg, respectively. The adsorption capacities determined for SWCNTs for aldrin and atrazine were 3.91 µg/mg and 3.92 µg/mg. Overall, SWCNT particles provided the superior performance in removing each type of pesticide.
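
As a sketch of how such batch-adsorption data are commonly analyzed, the snippet below fits Langmuir and Freundlich isotherms to hypothetical equilibrium data with SciPy; the concentration values, initial guesses, and fitted parameters are illustrative assumptions, not results from this study.

# Hedged sketch: fitting Langmuir and Freundlich isotherms to hypothetical
# batch-adsorption data (Ce in ug/L, qe in ug/mg); all values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    # qe = q_max * K_L * Ce / (1 + K_L * Ce)
    return q_max * k_l * ce / (1.0 + k_l * ce)

def freundlich(ce, k_f, n):
    # qe = K_F * Ce^(1/n)
    return k_f * ce ** (1.0 / n)

ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium concentrations (ug/L), hypothetical
qe = np.array([1.1, 1.8, 2.6, 3.3, 3.8])       # adsorbed amounts (ug/mg), hypothetical

lang_p, _ = curve_fit(langmuir, ce, qe, p0=[4.0, 0.05])
freu_p, _ = curve_fit(freundlich, ce, qe, p0=[0.5, 2.0])
print("Langmuir q_max=%.2f ug/mg, K_L=%.3f" % tuple(lang_p))
print("Freundlich K_F=%.2f, n=%.2f" % tuple(freu_p))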

Keywords: pesticide, drinking water, carbon nanotube, adsorption

Procedia PDF Downloads 160
646 The Effect of Subsurface Dam on Saltwater Intrusion in Heterogeneous Coastal Aquifers

Authors: Antoifi Abdoulhalik, Ashraf Ahmed

Abstract:

Saltwater intrusion (SWI) in coastal aquifers has become a growing threat for many countries around the world. While various control measures have been suggested to mitigate SWI, the construction of subsurface physical barriers remains one of the most effective solutions to this problem. In this work, we used laboratory experiments and numerical simulations to investigate the effectiveness of subsurface dams in heterogeneous layered coastal aquifers with different layering patterns. Four different cases were investigated, including a homogeneous case (case H) and three heterogeneous cases in which a low-permeability (K) layer was set in the top part of the system (case LH), in the middle part of the system (case HLH), or in the bottom part of the system (case HL). An automated image analysis technique was implemented to quantify the main SWI parameters at high spatial and temporal resolution. The method also provides transient salt concentration maps, allowing for the first time clear visualization of the spillage of saline water over the dam (advancing wedge condition) as well as the flushing of residual saline water from the freshwater area (receding wedge condition). The SEAWAT code was adopted for the numerical simulations. The results show that the presence of an overlying layer of low permeability enhanced the ability of the dam to retain the saline water. In such conditions, the rate of saline water spillage and the inland extension may be considerably reduced. Conversely, the presence of an underlying low-K layer led to a faster increase of the saltwater volume on the seaward side of the wall, thereby considerably facilitating the spillage. The results showed that a complete removal of the residual saline water eventually occurred in all the investigated scenarios, with the rate of removal strongly affected by the hydraulic conductivity of the lower part of the aquifer. The data showed that the addition of the underlying low-K layer in case HL caused the complete flushing to take almost twice as long as in the homogeneous scenario.
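
As an illustration of the kind of quantity an automated image analysis routine can extract from such experiments, the sketch below estimates the saltwater wedge toe length from a 2-D normalized concentration map; the 0.5 isochlor threshold, grid spacing, and synthetic map are assumptions for illustration, not the authors' implementation.

# Hedged sketch (not the authors' code): estimating the saltwater wedge toe length
# from a 2-D normalized concentration map, assuming the 0.5 isochlor marks the wedge.
import numpy as np

def toe_length(conc_map, dx, threshold=0.5):
    """conc_map: 2-D array (rows = depth, cols = distance from the seaside boundary),
    values normalized 0-1; dx: horizontal grid spacing in cm."""
    bottom_row = conc_map[-1, :]                   # concentrations along the aquifer bottom
    saline = np.where(bottom_row >= threshold)[0]  # cells occupied by saline water
    return saline.size * dx if saline.size else 0.0

# Illustrative synthetic map: saline water intrudes over the first 12 columns at the bottom.
demo = np.zeros((20, 60))
demo[-5:, :12] = 1.0
print(toe_length(demo, dx=0.5), "cm")  # -> 6.0 cm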

Keywords: heterogeneous coastal aquifers, laboratory experiments, physical barriers, seawater intrusion control

Procedia PDF Downloads 241
645 A Next Generation Multi-Scale Modeling Theatre for in silico Oncology

Authors: Safee Chaudhary, Mahnoor Naseer Gondal, Hira Anees Awan, Abdul Rehman, Ammar Arif, Risham Hussain, Huma Khawar, Zainab Arshad, Muhammad Faizyab Ali Chaudhary, Waleed Ahmed, Muhammad Umer Sultan, Bibi Amina, Salaar Khan, Muhammad Moaz Ahmad, Osama Shiraz Shah, Hadia Hameed, Muhammad Farooq Ahmad Butt, Muhammad Ahmad, Sameer Ahmed, Fayyaz Ahmed, Omer Ishaq, Waqar Nabi, Wim Vanderbauwhede, Bilal Wajid, Huma Shehwana, Muhammad Tariq, Amir Faisal

Abstract:

Cancer is a manifestation of multifactorial deregulations in biomolecular pathways. These deregulations arise from the complex multi-scale interplay between cellular and extracellular factors. Such multifactorial aberrations at the gene, protein, and extracellular scales need to be investigated systematically towards decoding the underlying mechanisms and orchestrating therapeutic interventions for patient treatment. In this work, we propose ‘TISON’, a next-generation web-based multiscale modeling platform for clinical systems oncology. TISON’s unique modeling abstraction allows a seamless coupling of information from biomolecular networks, cell decision circuits, extracellular environments, and tissue geometries. The platform can undertake multiscale sensitivity analysis towards in silico biomarker identification and drug evaluation on cellular phenotypes in user-defined tissue geometries. Furthermore, the integration of cancer expression databases such as The Cancer Genome Atlas (TCGA) and the Human Protein Atlas (HPA) facilitates the development of personalized therapeutics. TISON is the next evolution of multiscale cancer modeling and simulation platforms and provides a ‘zero-code’ model development, simulation, and analysis environment for application in clinical settings.

Keywords: systems oncology, cancer systems biology, cancer therapeutics, personalized therapeutics, cancer modelling

Procedia PDF Downloads 210
644 Effectiveness of Gamified Virtual Physiotherapy for Patients with Shoulder Problems

Authors: A. Barratt, M. H. Granat, S. Buttress, B. Roy

Abstract:

Introduction: Physiotherapy is an essential part of the treatment of patients with shoulder problems. The focus of treatment is usually centred on addressing specific physiotherapy goals, ultimately resulting in improvement in pain and function. This study investigates whether computerised physiotherapy using gamification principles is as effective as standard physiotherapy. Methods: Physiotherapy exergames were created using a combination of commercially available hardware, the Microsoft Kinect, and bespoke software. The exergames used were validated by mapping them to physiotherapy goals, which included strength, range of movement, control, speed, and activation of the kinetic chain. A multicentre, randomised, prospective controlled trial investigated the use of exergames in patients with shoulder impingement syndrome who had undergone arthroscopic subacromial decompression surgery. The intervention group was provided with the automated sensor-based technology, allowing them to perform exergames and track their rehabilitation progress. The control group was treated with standard physiotherapy protocols. Outcomes from different domains were used to compare the groups. An important metric was the assessment of shoulder range of movement pre- and post-operatively. The range of movement data included abduction, forward flexion, and external rotation, which were measured by the software pre-operatively and at 6 weeks and 12 weeks post-operatively. Results: Both groups show significant improvement from pre-operative to 12 weeks in elevation in the forward flexion and abduction planes. Results for abduction showed an improvement for the interventional group (p < 0.015) as well as for the control group (p < 0.003). Forward flexion also improved for the interventional group (p < 0.0201) and the control group (p < 0.004). There was, however, no significant difference between the groups at 12 weeks for abduction (p = 0.118067), forward flexion (p = 0.189755), or external rotation (p = 0.346967). Conclusion: Exergames may be used as an alternative to standard physiotherapy regimes; however, further analysis is required focusing on patient engagement.
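
To illustrate how a sensor such as the Kinect can yield a range-of-movement measure, the sketch below computes a shoulder-elevation-type angle from three tracked joint positions; the joint names, coordinates, and angle definition are illustrative assumptions, not the study's bespoke software.

# Hedged sketch (not the study's software): estimating a shoulder elevation angle from
# three tracked joint positions (e.g., Kinect shoulder, elbow, hip), purely illustrative.
import numpy as np

def joint_angle(shoulder, elbow, hip):
    """Angle (degrees) between the upper-arm vector and the trunk vector at the shoulder."""
    arm = np.asarray(elbow) - np.asarray(shoulder)
    trunk = np.asarray(hip) - np.asarray(shoulder)
    cos_a = np.dot(arm, trunk) / (np.linalg.norm(arm) * np.linalg.norm(trunk))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example with made-up coordinates (metres, camera frame):
print(round(joint_angle([0.0, 1.4, 2.0], [0.3, 1.3, 2.0], [0.0, 1.0, 2.0]), 1))  # ~71.6 deg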

Keywords: shoulder, physiotherapy, exergames, gamification

Procedia PDF Downloads 182
643 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction

Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord

Abstract:

Technology that automates manual handling (musculoskeletal disorder, or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on the use of wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist such as that from the SafeWork Australia code of practice, with the expert assessor observing the task and ideally engaging the worker in a discussion about the detail. Automation enables the non-expert to complete assessments and does not always require the assessor to be present. This clearly has cost and time benefits for the practitioner, but is it an improvement on assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and will sometimes lead to the assessment becoming the focus and the end rather than a means to an end; the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision-making regarding risk control. Being machine-based, such assessments are objective and will deliver the same result each time they assess an identical task. However, the WHS professional needs to know whether this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and results or simply distances the professional from the task and the worker. They need clarity as to whether automation of manual task risk analysis and reporting leads to risk control or to a focus on the worker. Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just a bigger collection of assessments. This examination of practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction will address an understanding of emergent risk assessment technology, its use, and things to consider when making decisions about adopting and applying these technologies.

Keywords: automated, manual-handling, risk-assessment, machine-based

Procedia PDF Downloads 111
642 Authentication and Traceability of Meat Products from South Indian Market by Species-Specific Polymerase Chain Reaction

Authors: J. U. Santhosh Kumar, V. Krishna, Sebin Sebastian, G. S. Seethapathy, G. Ravikanth, R. Uma Shaanker

Abstract:

Food is one of the basic needs of human beings; it is required for normal body function and healthy growth. Recently, food adulteration has increased day by day as a way to increase quantity and profit. Animal-source foods can provide a variety of micronutrients that are difficult to obtain in adequate quantities from plant-source foods alone. Particularly in the meat industry, products from animals are susceptible targets for fraudulent labeling due to the economic profit that results from selling cheaper meat as meat from more profitable and desirable species. This work presents an overview of the main PCR-based techniques applied to date to verify the authenticity of beef and beef products. We analyzed 25 market beef samples from South India and examined PCR methods based on the sequence of the cytochrome b gene for source species identification. We found that all samples sold as beef were Bos taurus. However, male meat commands a higher price than female meat, and for this reason most market samples are susceptible to mislabeling. We therefore used a cattle sex-determination gene, TSPY (testis-specific protein, Y-encoded), which is a Y-specific gene. TSPY homologs exist in several mammalian species, including humans, horses, and cattle. As a Y-encoded testis protein gene, it amplifies only in males. Multiple PCR products form species-specific “fingerprints” on gel electrophoresis, which may be useful for meat authentication. Amplicons were obtained only with the cattle-specific PCR. We found that 13 of the market meat samples sold were female beef. These results suggest that the species-specific PCR methods established in this study would be useful for simple and easy detection of adulteration of meat products.

Keywords: authentication, meat products, species-specific, TSPY

Procedia PDF Downloads 364
641 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation

Authors: Kausar Harun, Ahmad Azmin Mohamad

Abstract:

Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved to be more accurate and time-effective. Thus, integration between these two methods is essential to closely reproduce the properties of synthesized ZnO. In this study, experimentally grown ZnO nanoparticles were prepared by a sol-gel storage method with zinc acetate dihydrate and methanol as the precursor and solvent. A 1 M sodium hydroxide (NaOH) solution was used as a stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO with the wurtzite hexagonal structure was produced. Further quantitative analysis was done via the Rietveld refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions were a=b=3.2498 Å and c=5.2068 Å, which were later used as the main input in the first-principles calculations. By applying density functional theory (DFT) as implemented in the CASTEP code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) yielded the structure with the lowest energy and smallest lattice deviations. In this study, emphasis was also given to the modification of the valence electron energy levels to overcome the underestimation inherent in DFT calculations. The U values for Zn and O were fixed at Ud=8.3 eV and Up=7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated based on the GGA-PBE+U functional within the ultrasoft pseudopotential method. In conclusion, the incorporation of Rietveld analysis into first-principles calculations was valid, as the resulting properties were comparable with those reported in the literature. The time taken to evaluate certain properties via physical testing could thus be eliminated, as the simulation could be done computationally.
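
As a sketch of how Rietveld-refined lattice parameters can seed a first-principles calculation, the snippet below builds the wurtzite ZnO cell with the refined values using the ASE library (an assumed convenience, not named in the abstract); the internal oxygen coordinate u = 0.382 is a typical literature value, also an assumption.

# Hedged sketch, not the authors' workflow: build the refined wurtzite ZnO cell
# (a = b = 3.2498 A, c = 5.2068 A, space group P6_3mc, no. 186) as a DFT starting structure.
from ase.spacegroup import crystal

zno = crystal(
    symbols=['Zn', 'O'],
    basis=[(1/3, 2/3, 0.0), (1/3, 2/3, 0.382)],   # u = 0.382 assumed from the literature
    spacegroup=186,
    cellpar=[3.2498, 3.2498, 5.2068, 90, 90, 120],
)
print(zno.get_chemical_formula(), zno.cell.cellpar())  # Zn2O2 per cell, refined lattice parameters
zno.write('ZnO.cell')  # CASTEP .cell seed file; ASE infers the format from the extension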

Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles

Procedia PDF Downloads 301
640 Profit Share in Income: An Analysis of Its Influence on Macroeconomic Performance

Authors: Alain Villemeur

Abstract:

The relationships between the profit share in income, on the one hand, and the growth rates of output and employment, on the other, have been studied for 17 advanced economies since 1961. The vast majority (98%) of annual values for the profit share fall between 20% and 40%, with an average value of 33.9%. For the 17 advanced economies, gross domestic product and productivity growth rates tend to fall as the profit share in income rises. For employment growth rates, the relationships are complex; nevertheless, over long periods (1961-2000), it appears that the most job-creating economies are Australia, Canada, and the United States, which have experienced a profit share close to 1/3. This raises a number of questions, not least the value of 1/3 for the profit share and its role in macroeconomic fundamentals. To explain these facts, an endogenous growth model is developed. This growth and distribution model reconciles the great ideas of Kaldor (economic growth as a chain reaction), Keynes (effective demand and marginal efficiency of capital), and Ricardo (importance of the wage-profit distribution) in an economy facing creative destruction. A production function is obtained, depending mainly on the growth of employment, the rate of net investment, and the profit share in income. In theory, we show the existence of incentives: an incentive for job creation when the profit share is less than 1/3, and an incentive for job destruction in the opposite case. Thus, increasing the profit share can boost the employment growth rate until the share reaches the value of 1/3; beyond that, it lowers the employment growth rate. Three key findings can be drawn from these considerations. The first reveals that the best GDP and productivity growth rates are obtained with a profit share of less than 1/3. The second is that maximum job growth is associated with a 1/3 profit share, given the existence of incentives to create more jobs when the profit share is less than 1/3 or to destroy more jobs otherwise. The third is the decline in performance (GDP growth rate and productivity growth rate) when the profit share increases. In conclusion, increasing the profit share in income weakens GDP growth or productivity growth as a long-term trend, contrary to the trickle-down hypothesis. The employment growth rate is at its maximum for a profit share in income of 1/3. All these lessons suggest macroeconomic policies that take the profit share in income into account.
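
Purely as an illustration of the incentive structure described above (employment growth peaking at a profit share of 1/3 and falling on either side), the toy sketch below uses an invented functional form and invented numbers; it is not the author's production function or estimated model.

# Hedged, purely illustrative toy: employment growth peaks when the profit share equals 1/3.
def employment_growth(profit_share, peak_share=1/3, peak_growth=0.02, slope=0.06):
    gap = abs(profit_share - peak_share)
    return peak_growth - slope * gap       # maximum job growth at a profit share of 1/3

for share in (0.20, 0.25, 1/3, 0.40):
    print(f"profit share {share:.2f} -> employment growth {employment_growth(share):+.3%}")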

Keywords: advanced countries, GDP growth, employment growth, profit share, economic policies

Procedia PDF Downloads 54
639 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists in intensifying the use of an asset, while the latter is based on the use of waste and fatal energy (and utilities) as alternatives to standard inputs. Both modes, in fact, rely on a shift from business-as-usual functioning towards an alternative production system structure, so that from a business point of view the distinction is not clear. In order to investigate how these cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure with regard to their resources and requirements. For infrastructure sharing (following the economic engineering literature), the capacity cost function induces economies of scale, so that demand pooling reduces total expenses. Grassroots investment sizing decisions and ex-post pricing strongly depend on the design optimization phase for capacity sizing, whereas ex-post operational cost sharing to minimize budgets is less dependent upon production rates. Value is then mainly design-driven. For resource substitution, synergy value stems from availability and is at risk with regard to both supplier and user load profiles and the market price of the standard input. Baseline input purchasing cost reduction is thus driven more by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow it to produce non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments.
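
To make the economies-of-scale argument concrete, the toy sketch below assumes a capacity cost function of the form C(Q) = k * Q^alpha with alpha < 1 (an assumed form and assumed numbers, not taken from the paper) and shows why pooling two partners' demand on a shared asset lowers total expenses.

# Hedged sketch, purely illustrative: demand pooling under economies of scale.
def capacity_cost(q, k=100.0, alpha=0.7):
    return k * q ** alpha

q_a, q_b = 40.0, 60.0                       # hypothetical demands of two partners
separate = capacity_cost(q_a) + capacity_cost(q_b)
shared = capacity_cost(q_a + q_b)           # one jointly sized asset
print(f"separate: {separate:,.0f}  shared: {shared:,.0f}  saving: {separate - shared:,.0f}")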

Keywords: business model, capacity, sourcing, synergies

Procedia PDF Downloads 169
638 The Impact of the COVID-19 on the Cybercrimes in Hungary and the Possible Solutions for Prevention

Authors: László Schmidt

Abstract:

Technological and digital innovation is constantly and dynamically evolving, which poses an enormous challenge to both lawmaking and law enforcement. To lawmaking, because artificial intelligence permeates many areas of people’s daily lives that the legislator must regulate: one can see how challenging it is to regulate, for example, self-driving cars, taxis, trucks, etc., not to mention cryptocurrencies and ChatGPT, the use of which also requires legislative intervention. Artificial intelligence also poses an extraordinary challenge to law enforcement. In criminal cases, police and prosecutors can make great use of AI in investigations, e.g., in forensics, DNA samples, reconstruction, identification, etc. It can also be of great help in the detection of crimes committed in cyberspace. Cybercrime can, on the one hand, be viewed as a new type of crime that can only be committed with the help of information systems and that has a specific protected legal object, such as an information system or data. On the other hand, it also includes traditional crimes that are much easier to commit with the help of new tools. According to section 375 (1) of the Hungarian Criminal Code, any person who, for unlawful financial gain, introduces data into an information system, or alters or deletes data processed therein, or renders data inaccessible, or otherwise interferes with the functioning of the information system, and thereby causes damage, is guilty of a felony punishable by imprisonment not exceeding three years. The COVID-19 coronavirus epidemic has had a significant impact on our lives and daily routines. It was no different in the world of crime. With people staying at home for months, schools, restaurants, theatres, and cinemas closed, and no travel, criminals had to change their ways and committed crimes online in even greater numbers than before. These crimes were very diverse, ranging from false fundraising, the collection and misuse of personal data, and extortion to fraud on various online marketplaces. The most vulnerable age groups (minors and the elderly) could be made more aware and prevented from becoming victims of this type of crime through targeted programmes. The aim of the study is to present Hungarian judicial practice in relation to cybercrime and possible preventive solutions.

Keywords: cybercrime, COVID-19, Hungary, criminal law

Procedia PDF Downloads 54
637 The Restrictions of the Householder’s ‘Double Two-Thirds Principles’ in Decision-Making for Elevators Addition to Existing Condominium

Authors: Haifeng Shi, Kun Song, Yili Zhao

Abstract:

In China, against the background of the extensive promotion of the ‘aging in place’ pension policy, most elders will choose to remain in their current homes and communities, finding out of preference or necessity that they will need to remodel their homes to fit their changing needs. This generation of elders, born in the 1960s and 1970s, almost all live in the same form of housing: condominiums built from 1982 to 2012. Based on a survey of existing multi-family housing, especially in Tianjin, it is found that the current ‘double two-thirds principles’ have become the threshold for modification of existing housing, particularly in projects that add elevators to existing condominiums (built from 1982 to 2016 without elevators below six floors, in accordance with the previous building code). Firstly, this article reviews local elevator-addition policies nationwide, most of which have affirmed the importance and necessity of the community-based self-organization principle in carrying out elevator additions. Secondly, by comparing the three existing community management systems (owners' congress, property management system, and community committee) in concrete cases, it finds that the community-based ‘two-thirds’ principle is not conducive to implementing multi-owned property renovation in the community or common accessibility modification in the building. The article then analyses property and other community-management-related laws, pointing out the shortcomings of the existing community-based ‘two-thirds’ decision-making norms. The analysis shows that a unit-based, ‘100% principle’ method is better able to deliver common accessibility in condominiums in China. Differing from existing laws, the unit-based principle makes the decision-making process effective, and the ‘100% principle’ protects the householders whose interests are most closely affected by condominium modification in the multi-owned area. These three aspects of the analysis suggest that establishing a unit-based self-organization mechanism is a preferred and inevitable method to solve the problem of adding elevators to existing condominiums in China.

Keywords: aging in place, condominium, modification, multi own

Procedia PDF Downloads 141
636 Seismic Evaluation of Multi-Plastic Hinge Design Approach on RC Shear Wall-Moment Frame Systems against Near-Field Earthquakes

Authors: Mohsen Tehranizadeh, Mahboobe Forghani

Abstract:

The impact of higher modes on the seismic response of a dual structural system consisting of a concrete moment-resisting frame and RC shear walls is investigated against near-field earthquakes in this paper. A 20-storey reinforced concrete shear wall-special moment frame structure is designed in accordance with ASCE 7 requirements, and the nonlinear model of the structure is built on the OpenSees platform. Nonlinear time-history dynamic analyses with three near-field records are performed on it. In order to further understand structural collapse behavior in the near field, the response of the structure at the moment of collapse, especially the formation of plastic hinges, is explored. The results reveal that, because of the amplification of the moment at the top of the wall due to higher modes, a plastic hinge can form in the upper part of the wall even when the wall is designed and detailed for plastic hinging at the base only (according to the ACI code). On the other hand, shear forces in excess of capacity-design values can develop due to the contribution of the higher modes of vibration to the near-field dynamic response, which can cause brittle shear or sliding failure modes. Past investigations of shear walls clearly show that the dual-hinge design concept is effective at reducing the effects of the second mode of response. An advantage of the concept is that, when combined with capacity design, it can result in relaxation of special reinforcing detailing in large portions of the wall. In this study, to investigate the implications of the multi-hinge design approach, four models with varied arrangements of plastic hinges at the base and over the height of the shear wall are considered. Results based on time-history analysis show that the dual or multi plastic hinge approach can be useful for controlling the high moment and shear demands caused by higher-mode effects.

Keywords: higher mode effect, Near-field earthquake, nonlinear time history analysis, multi plastic hinge design

Procedia PDF Downloads 419
635 Soil Liquefaction Hazard Evaluation for Infrastructure in the New Bejaia Quai, Algeria

Authors: Mohamed Khiatine, Amal Medjnoun, Ramdane Bahar

Abstract:

Northern Algeria is a highly seismic zone, as evidenced by its historical seismicity. During the past two decades, it has experienced several moderate to strong earthquakes. Therefore, geotechnical engineering problems that involve dynamic loading of soils and soil-structure interaction require, in the presence of saturated loose sand formations, liquefaction studies. Bejaia city, located northeast of Algiers, Algeria, is part of an alluvial plain that covers an area of approximately 750 hectares. According to the Algerian seismic code, it is classified as a moderate seismicity zone. In the past, this area did not experience urban development because of the various hazards identified by hydraulic and geotechnical studies conducted in the region. The low bearing capacity of the soil, its high compressibility, and the risks of liquefaction and flooding are among these hazards and are a constraint on urbanization. In this area, several structures founded on shallow foundations have suffered damage. Hence, the soils need treatment to reduce the risk. Many field and laboratory investigations, including core drilling, pressuremeter tests, standard penetration tests (SPT), cone penetration tests (CPT), and geophysical downhole tests, were performed at different locations in the area. The major part of the area consists of silty fine sand, sometimes heterogeneous, that has not yet reached a sufficient degree of consolidation. The groundwater depth varies between 1.5 and 4 m. These investigations show that the liquefaction phenomenon is one of the critical problems for geotechnical engineers and one of the obstacles encountered in the design phase of projects. This paper presents an analysis to evaluate the liquefaction potential using empirical methods based on the standard penetration test (SPT), the cone penetration test (CPT), and shear wave velocity, as well as numerical analysis. These liquefaction assessment procedures indicate that liquefaction can occur to considerable depths in the silty sand of the harbor zone of Bejaia.
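
To illustrate the kind of screening such empirical procedures perform, the sketch below evaluates a simplified-procedure (Seed-Idriss type) factor of safety against liquefaction from a cyclic stress ratio; the peak ground acceleration, stresses, depth, and CRR value are hypothetical placeholders, not data from the Bejaia site.

# Hedged sketch, not the authors' calculation: simplified-procedure liquefaction screening.
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d  (stresses in kPa)."""
    if depth_m <= 9.15:
        r_d = 1.0 - 0.00765 * depth_m          # Liao-Whitman stress-reduction factor
    else:
        r_d = 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

csr = cyclic_stress_ratio(a_max_g=0.25, sigma_v=95.0, sigma_v_eff=60.0, depth_m=5.0)
crr = 0.18                                      # cyclic resistance ratio from an SPT/CPT chart (assumed)
print(f"CSR = {csr:.3f}, FS = {crr / csr:.2f}") # FS < 1 suggests liquefaction is likely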

Keywords: earthquake, modeling, liquefaction potential, laboratory investigations

Procedia PDF Downloads 347
634 The Effect of a Weed-Killer Sulfonylurea on Durum Wheat (Triticum durum Desf.)

Authors: L. Meksem Amara, M. Ferfar, N. Meksem, M. R. Djebar

Abstract:

Wheat is the most consumed cereal in the world. In Algeria, the production of this cereal covers only 20 to 25% of the country's needs, the rest being imported. To improve the efficiency and productivity of durum wheat, farmers turn to the use of pesticides: weed-killers, fungicides, and insecticides. However, this use often entails more or less significant losses of product, contaminating the environment and the whole food chain. Weed-killers are substances developed to control or destroy plants considered unwanted. Whether they are natural or produced by humans (synthetic molecules), the absorption and metabolization of weed-killers by plants cause the death of these plants. In this work, our goal was to evaluate the effect of a sulfonylurea weed-killer, Cossack OD, at various concentrations (0, 2, 4, and 9 µg) on a variety of Triticum durum: Cirta. We evaluated plant growth by measuring leaf and root length compared with the control, as well as the proline content, and analyzed the level of one of the antioxidative enzymes, catalase, after 14 days of treatment. Sulfonylureas are foliar and root weed-killers that inhibit acetolactate synthase (ALS), a plant enzyme essential to development; this inhibition arrests growth and then causes death. The results obtained show a decrease in the average length of leaves and roots, which can be explained by the fact that ALS inhibitors are more active in the young, growing regions of the plant, inhibiting cell division and limiting foliar and root growth. We also recorded a highly significant increase in proline levels and a stimulation of catalase activity. The increase in antioxidative mechanisms in the wheat cultivar Cirta in response to increasing herbicide concentrations suggests that the high sensitivity of Cirta to this sulfonylurea herbicide is related to the enhanced production of reactive oxygen species and the resulting oxidative damage.

Keywords: sulfonylurea, Triticum durum, oxidative stress, toxicity

Procedia PDF Downloads 401
633 Biochemical Characterization and Structure Elucidation of a New Cytochrome P450 Decarboxylase

Authors: Leticia Leandro Rade, Amanda Silva de Sousa, Suman Das, Wesley Generoso, Mayara Chagas Ávila, Plinio Salmazo Vieira, Antonio Bonomi, Gabriela Persinoti, Mario Tyago Murakami, Thomas Michael Makris, Leticia Maria Zanphorlin

Abstract:

Alkenes have economic appeal, especially in the biofuels field, since they are precursors for the production of drop-in biofuels, which have chemical and physical properties similar to conventional fossil fuels and contain no oxygen. After the discovery in 2011 of the first CYP152 P450, OleTJE, reported for its unique property of decarboxylating fatty acids (FA) using hydrogen peroxide as a cofactor and producing 1-alkenes as the main product, scientific and technological interest in this family of enzymes increased vastly. In this context, the present work presents a new decarboxylase (OleTRN) with low similarity to OleTJE (32%), its biochemical characterization, and its structure elucidation. As the main results, OleTRN presented a high yield of expression and purity, optimum reaction conditions at 35 °C and pH 6.5 to 8.0, and higher specificity for oleic acid. In addition, structure-guided mutations were performed, and according to the functional characterizations, some mutants presented different specificity and chemoselectivity when the chain length of the FA substrates was varied from 12 to 20 carbons. These results are extremely interesting from a biotechnological perspective, as those characteristics could diversify the applications and contribute to designing better cytochrome P450 decarboxylases. Considering that these peroxygenases can both decarboxylate and hydroxylate fatty acids, and that elucidating the intriguing mechanism behind the decarboxylation preference of OleTJE is still a challenge, the elucidation of the OleTRN structure and the functional characterization of OleTRN and its mutants contribute new information about the CYP152 family. The work also contributes a new decarboxylase with a selectivity profile different from that of OleTJE, which allows a wide range of applications.

Keywords: P450, decarboxylases, alkenes, biofuels

Procedia PDF Downloads 187
632 Simo-syl: A Meta-Phonological Intervention to Support Italian Pre-Schoolers’ Emergent Literacy Skills

Authors: Tamara Bastianello, Rachele Ferrari, Marinella Majorano

Abstract:

The adoption of the syllabic approach in preschool programmes could support and reinforce meta-phonological awareness and literacy skills in children. The introduction of a meta-phonological intervention in preschool could facilitate the transition to primary school, especially for children with learning fragilities. In the present contribution, we investigate the efficacy of the "Simo-syl" intervention in enhancing emergent literacy skills in children (especially reading). Simo-syl is a 12-week multimedia programme developed for children to improve their language and communication skills and later literacy development in preschool. During the intervention, Simo-syl, an invented character, leads children through a series of meta-phonological games. Forty-six Italian preschool children (the Simo-syl group) participated in the programme; seventeen preschool children (the control group) did not participate in the intervention. Children in the two groups were between 4;10 and 5;9 years old. They were assessed on their vocabulary, morpho-syntactic, meta-phonological, phonological, and phono-articulatory skills twice: 1) at the beginning of the last year of preschool, through standardised paper-based assessment tools, and 2) one week after the intervention. All children in the Simo-syl group took part in the meta-phonological programme based on the syllabic approach. The intervention lasted 12 weeks (three activities per week; week 1: activities focused on syllable blending and spelling and a first approach to the written code; weeks 2-11: activities focused on syllable recognition; week 12: activities focused on vowel recognition). Only a subset of the children (Simo-syl group = 21, control group = 9) was tested again (post-test) one week after the intervention. Before starting the intervention programme, the Simo-syl and control groups had similar meta-phonological, phonological, and lexical skills (all ps > .05). One week after the intervention, a significant difference emerged between the two groups in their meta-phonological skills (syllable blending, p = .029; syllable spelling, p = .032), their vowel recognition ability (p = .032), and their word reading skills (p = .05). An ANOVA confirmed the effect of group membership on developmental growth for the word reading task (F(1,28) = 6.83, p = .014, ηp² = .196). Taking part in the Simo-syl intervention has a positive effect on the ability to read in preschool children.
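
As a sketch of the kind of group-membership test reported above, the snippet below runs a one-way ANOVA on pre-to-post gain scores for two groups with SciPy; the score vectors are made-up placeholders (matching only the group sizes), not the study's data.

# Hedged sketch (not the study's analysis code): one-way ANOVA on gain scores for two groups.
import numpy as np
from scipy import stats

simo_gain = np.array([3, 4, 2, 5, 3, 4, 4, 2, 5, 3, 4, 3, 5, 2, 4, 3, 4, 5, 3, 4, 2])  # n = 21, hypothetical
ctrl_gain = np.array([1, 2, 1, 3, 2, 1, 2, 2, 1])                                      # n = 9, hypothetical

f_stat, p_val = stats.f_oneway(simo_gain, ctrl_gain)   # with two groups this equals a t-test (F = t**2)
print(f"F(1,{len(simo_gain) + len(ctrl_gain) - 2}) = {f_stat:.2f}, p = {p_val:.3f}")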

Keywords: intervention programme, literacy skills, meta-phonological skills, syllabic approach

Procedia PDF Downloads 156
631 Comparison of the Thermal Behavior of Different Crystal Forms of Manganese(II) Oxalate

Authors: B. Donkova, M. Nedyalkova, D. Mehandjiev

Abstract:

Sparingly soluble manganese oxalate is an appropriate precursor for the preparation of nanosized manganese oxides, which have a wide range of technological applications. During the precipitation of manganese oxalate, three crystal forms can be obtained: α-MnC₂O₄.2H₂O (SG C2/c), γ-MnC₂O₄.2H₂O (SG P2₁2₁2₁), and orthorhombic MnC₂O₄.3H₂O (SG Pcca). The thermolysis of α-MnC₂O₄.2H₂O has been extensively studied over the years, while literature data for the other two forms are quite scarce. The aim of the present communication is to highlight the influence of the initial crystal structure on the decomposition mechanism of these three forms, their magnetic properties, the structure of the anhydrous oxalates, and the nature of the obtained oxides. For the characterization of the samples, XRD, SEM, DTA, TG, DSC, nitrogen adsorption, and in situ magnetic measurements were used. The dehydration proceeds in one step for α-MnC₂O₄.2H₂O and γ-MnC₂O₄.2H₂O, and in three steps for MnC₂O₄.3H₂O. The dehydration enthalpies are 97, 149, and 132 kJ/mol, respectively; the last two are reported for the first time, to the best of our knowledge. The magnetic measurements show that at room temperature all samples are antiferromagnetic; however, during dehydration the exchange interaction is preserved for α-MnC₂O₄.2H₂O, changes to ferromagnetic above 35°C for MnC₂O₄.3H₂O, and changes twice, from antiferromagnetic to ferromagnetic, above 70°C for γ-MnC₂O₄.2H₂O. The experimental results for the magnetic properties are in accordance with the computational results obtained with the WIEN2k code. The difference in the initial crystal structure of the forms used leads to different changes in the specific surface area during dehydration and a different extent of Mn(II) oxidation during decomposition in air, both being highest for α-MnC₂O₄.2H₂O. The isothermal decomposition of the different oxalate forms shows that the type and physicochemical properties of the oxides obtained at the same annealing temperature depend on the precursor used. Based on the results of the non-isothermal and isothermal experiments and of the different characterization methods, a comparison of the nature, mechanism, and peculiarities of the thermolysis of the different crystal forms of manganese oxalate was made, which clearly reveals the influence of the initial crystal structure. Acknowledgment: 'Science and Education for Smart Growth', project BG05M2OP001-2.009-0028, COST Action MP1306 'Modern Tools for Spectroscopy on Advanced Materials', and project DCOST-01/18 (Bulgarian Science Fund).

Keywords: crystal structure, magnetic properties, manganese oxalate, thermal behavior

Procedia PDF Downloads 164
630 Single and Combined Effects of Diclofenac and Ibuprofen on Daphnia magna and Some Phytoplankton Species

Authors: Ramatu I. Sha’aba, Mathias A. Chia, Abdullahi B. Alhassan, Yisa A. Gana, Ibrahim M. Gadzama

Abstract:

Globally, diclofenac (DLC) and ibuprofen (IBU) are among the most prescribed drugs due to their antipyretic and analgesic properties. They are, however, highly toxic at elevated doses, with the involvement of an already described oxidative stress pathway. As a result, there is rising concern about the ecological effects of analgesics on non-target organisms such as Daphnia magna and phytoplankton species. Phytoplankton is a crucial component of the aquatic ecosystem that serves as the primary producer at the base of the food chain. However, the increasing presence and levels of micropollutants such as these analgesics can disrupt its community structure, dynamics, and ecosystem functions. This study presents a comprehensive laboratory assessment of the physiology, antioxidant response, immobilization, and ecological risk associated with the effects of diclofenac and ibuprofen on Daphnia magna and the phytoplankton community. The experimental setup used DLC and IBU at 27.16 µg/L and 20.89 µg/L, respectively, for the single exposures and 22.39 µg/L for the combined exposure of DLC and IBU. The antioxidant response increased with increasing levels of stress; the highest stress to the organism occurred at 1000 µg/L of DLC and 10,000 µg/L of IBU. Peroxidase and glutathione-S-transferase activities were higher for diclofenac + ibuprofen. The study showed 60% and 70% immobilization of the organism at 1000 µg/L of DLC and IBU, respectively. The two drugs and their combination adversely impacted phytoplankton biomass with increased exposure time; however, combining the drugs resulted in more significant adverse effects on physiological and pigment content parameters. The risk assessment in this study yielded a risk quotient (RQ) of 8.41 and a toxic unit (TU) of 3.68 for diclofenac, and an RQ of 718.05 and a TU of 487.70 for ibuprofen. Hence, these findings demonstrate that the current exposure concentrations of diclofenac and ibuprofen can immobilize D. magna. This study shows the dangers of multiple drugs in the aquatic environment, because their combinations could have additive effects on the structure and functions of phytoplankton and are capable of immobilizing D. magna.
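
To show the arithmetic behind the reported indices, the sketch below uses the standard definitions of a risk quotient (RQ = measured environmental concentration / predicted no-effect concentration) and a toxic unit (TU = exposure concentration / EC50); the PNEC and EC50 values are hypothetical placeholders, not the study's inputs.

# Hedged sketch, purely illustrative: standard RQ and TU formulas with placeholder values.
def risk_quotient(mec_ug_l, pnec_ug_l):
    return mec_ug_l / pnec_ug_l          # RQ > 1 indicates a potential ecological risk

def toxic_unit(conc_ug_l, ec50_ug_l):
    return conc_ug_l / ec50_ug_l         # TU > 1 means exposure exceeds the median-effect level

print(f"RQ = {risk_quotient(27.16, 10.0):.2f}")   # diclofenac exposure against an assumed PNEC
print(f"TU = {toxic_unit(27.16, 50.0):.2f}")      # same exposure against an assumed EC50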

Keywords: algae, analgesic drug, daphnia magna, toxicity

Procedia PDF Downloads 65
629 Quality Assurance in Higher Education: Doha Institute for Graduate Studies as a Case Study

Authors: Ahmed Makhoukh

Abstract:

Quality assurance (QA) has recently become a common practice, endorsed by most higher education (HE) institutions worldwide under the pressure of internal and external forces. One of the aims of this quality movement is to make the contribution of university education to socio-economic development highly significant. This entails that graduates are currently required to have a high-quality profile, i.e., to be competent and to master the 21st-century skills needed in the labor market. This wave of change, mostly imposed by globalization, means that university education should be learner-centered in order to satisfy the different needs of students and meet the expectations of other stakeholders. Such a shift of focus towards student learning outcomes has led HE institutions to reconsider their strategic planning, their mission, the curriculum, and the pedagogical competence of the academic staff, among other elements. To ensure that overall institutional performance is on the right track, a QA system should be established to regularly check the extent to which the set evaluation standards are strictly respected. This QA operation has the advantage of proving the accountability of the institution, gaining the trust of the public through transparency, and enjoying international recognition. This is the case of the Doha Institute (DI) for Graduate Studies, in Qatar, the object of the present study. The significance of this contribution is to show that the conception of quality has changed in this digital age, and that there is a need to integrate a department responsible for QA in every HE institution in order to ensure educational quality, support learners, and achieve academic leadership. Thus, to address the issue of QA in the DI for Graduate Studies, an elite university (in the academic sense) that focuses on a small and selected number of students, a qualitative method will be adopted in the description and analysis of the data (document analysis). In an attempt to investigate the extent to which QA is achieved in the Doha Institute for Graduate Studies, three broad indicators will be evaluated (input, process, and learning outcomes). This investigation will be carried out in line with the UK Quality Code for Higher Education represented by the Quality Assurance Agency (QAA).

Keywords: accreditation, higher education, quality, quality assurance, standards

Procedia PDF Downloads 142
628 In vivo Activity of Pathogenic Bacteria on Natural Polyphenolic Compounds

Authors: Lubna Azmi, Ila Shukla, Shyam Sundar Gupta, Padam Kant, Ch. V. Rao

Abstract:

Gastric ulcer is a major global health threat and a leading contributor to stomach cancer deaths worldwide. The Helicobacter pylori bacterium is the most important etiologic factor for gastric ulcer. This infection is highly pervasive in South Asian developing countries, especially India, Nepal, and Sri Lanka, owing to the geographic diversity of the region. The pathophysiology of gastric mucosal damage associated with this non-invasive bacterium has not been described in detail, but it leads to changes in the histopathology and immunochemistry of the gastric and duodenal regions of the host. The mechanisms responsible for bacterial tissue tropism and mucosal damage in the stomach during the disease are not clearly described or scientifically understood for the treatment and control of pathogenic organisms. Polyphenols are secondary metabolites of plants and are generally involved in defense against aggression by pathogens. 2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxychromen-4-one and 1-hydroxy-5,7-dimethoxy-2-naphthalene-carboxaldehyde are polyphenolic compounds obtained from the popular Indian medicinal plants ghavpatta (Argyreia speciosa Linn. f.) and bael (Aegle marmelos), which have long been used in traditional Ayurvedic Indian medicine for various diseases. They have promising effects on ulcers, as detailed investigations in our laboratory have shown. Therefore, the aim of the present study is to explore the membrane-dependent morphogenesis of H. pylori and the associated apoptosis-mediated cell death. Based on this, we analyzed immune gene expression in the stomach of experimental animals infected with H. pylori, using quantitative reverse transcription polymerase chain reaction (qRT-PCR). This revealed rapid induction of prostaglandin, interferon I (IFN-I), interferon II (IFN-II), and IFN-I-associated genes in the infected animals. Ultrastructural changes associated with H. pylori will be examined in further studies. This investigation shows that these compounds can help eradicate H. pylori-induced gastric ulcer, which is a major risk factor for gastric cancer.

Keywords: gastric ulcer, Helicobacter pylori, immunochemistry, polyphenols

Procedia PDF Downloads 363
627 Sources and Potential Ecological Risks of Heavy Metals in the Sediment Samples From Coastal Area in Ondo, Southwest Nigeria

Authors: Ogundele Lasun Tunde, Ayeku Oluwagbemiga Patrick

Abstract:

Heavy metals are released into aquatic sediments from both natural and anthropogenic sources and are considered a worldwide issue because of their deleterious ecological effects and disruption of the food chain. In this study, sediment samples were collected at three major sites (Awoye, Abereke and Ayetoro) along the Ondo coastal area using a Van Veen grab sampler. The concentrations of As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, V and Zn were determined by Atomic Absorption Spectroscopy (AAS). The combined concentration data were subjected to the Positive Matrix Factorization (PMF) receptor approach for source identification and apportionment. The probable risks posed by heavy metals in the sediment were estimated using potential and integrated ecological risk indices. Among the measured heavy metals, Fe had average concentrations of 20.38 ± 2.86, 23.56 ± 4.16 and 25.32 ± 4.83 µg/g at the Abereke, Awoye and Ayetoro sites, respectively. The PMF analysis identified four sources of heavy metals in the sediments. The resolved sources and their percentage contributions were oil exploration (39%), industrial waste/sludge (35%), detrital processes (18%) and Mn sources (8%). Oil exploration activities and industrial wastes are thus the major contributors of heavy metals to the coastal sediments. The major pollutants posing ecological risks to the local aquatic ecosystem are As, Pb, Cr and Cd (40 ≤ Ei ≤ 80), classifying the sites as moderate risk. The integrated risk values for Awoye, Abereke and Ayetoro are 231.2, 234.0 and 236.4, respectively, suggesting that the study areas carry a moderate ecological risk. The study demonstrated the suitability of the PMF receptor model for source identification of heavy metals in sediments. It also showed that intensive anthropogenic activities and natural sources can discharge large amounts of heavy metals into the study area, which may increase the heavy metal content of the sediments and further contribute to the associated ecological risk, thus affecting the local aquatic ecosystem.
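
For illustration, the potential and integrated ecological risk indices cited above follow the Hakanson formulation Ei = Tr × Ci/Cn and RI = ΣEi; the minimal sketch below computes them with assumed toxic-response factors and hypothetical concentrations and background values, which may differ from those used in the study.

    # Hedged sketch: Hakanson-style potential ecological risk index. The
    # toxic-response factors, measured concentrations, and background values
    # below are illustrative assumptions, not the study's actual data.

    TOXIC_RESPONSE = {"As": 10, "Cd": 30, "Cr": 2, "Cu": 5, "Pb": 5, "Ni": 5, "Zn": 1}

    def potential_risk(measured, background):
        """Return per-metal risk factors Ei = Tr * (Ci / Cn) and their sum RI."""
        ei = {m: TOXIC_RESPONSE[m] * measured[m] / background[m]
              for m in measured if m in TOXIC_RESPONSE}
        return ei, sum(ei.values())

    # Hypothetical concentrations and background values in ug/g
    measured = {"As": 8.2, "Cd": 0.9, "Cr": 45.0, "Pb": 22.0}
    background = {"As": 1.5, "Cd": 0.3, "Cr": 60.0, "Pb": 12.5}

    ei, ri = potential_risk(measured, background)
    for metal, value in ei.items():
        grade = "low" if value < 40 else ("moderate" if value <= 80 else "elevated")
        print(f"{metal}: Ei = {value:.1f} ({grade})")
    print(f"Integrated risk RI = {ri:.1f}")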

Keywords: positive matrix factorization, sediments, heavy metals, sources, ecological risks

Procedia PDF Downloads 13
626 Navigating Rough Seas: A Qualitative Exploration of National Sociotechnical Imaginaries of Myanmar’s Future Marine Fisheries

Authors: Hannes Groeneweg

Abstract:

Myanmar is considered one of the largest fishing nations in the world. The country's rapid economic and political reform process since 2011 entails both challenges and opportunities for its marine fishing sector, and the sector's development pathway remains unclear. Which future eventually materializes is shaped and determined by the various visions and actions of the stakeholders engaged in political debate and decision-making. These visions can be conceptualized through the Science and Technology Studies (STS) concept of sociotechnical imaginaries. The research in this article is guided by the questions of which imaginaries are currently relevant, who is propagating them, and how they are produced and contested. Using qualitative documentary analysis of policy documents, reports, and media articles, as well as in-depth interviews with key stakeholders, three archetypical national sociotechnical imaginaries of Myanmar's future marine fisheries were identified. The industrial-scale extractivism imaginary views the marine fishing sector as a driver of national economic growth and focuses on the industrial and technological development of the production chain, increasing yield and exports. The sustainable fisheries management imaginary acknowledges the vulnerability of marine ecosystems and envisions integrating more efficient sustainability governance, planning, and management into existing fishing practices. In the traditional sufficiency fishing imaginary, small-scale fishing practices are viewed as an important livelihood for millions of coastal dwellers, and the need to preserve them by strengthening the self-reliance, autonomy, and resilience of these communities is stressed. The first two imaginaries currently dominate national debates. The imaginaries, as well as their contestations, are also linked to other critical political issues. The paper suggests that participatory decision-making processes are needed to create an inclusive imaginary of the future marine fishing sector.

Keywords: science and technology studies, sociotechnical imaginaries, marine fishing, knowledge coproduction, Myanmar

Procedia PDF Downloads 170
625 RPM-Synchronous Non-Circular Grinding: An Approach to Enhance Efficiency in Grinding of Non-Circular Workpieces

Authors: Matthias Steffan, Franz Haas

Abstract:

Grinding is one of the last steps in a value-added manufacturing chain; within this step, workpiece geometry and surface roughness are determined. Up to this process stage, considerable cost and energy have already been spent on the components, so, according to the current state of the art, large safety reserves are calculated in order to guarantee process capability. For non-circular grinding in particular, this leads to considerable losses in process efficiency. With present technology, the various non-circular geometries on a workpiece must be ground sequentially in an oscillating process in which the machine's X- and Q-axes are coupled. With the approach of RPM-Synchronous Non-Circular Grinding, such workpieces can be machined in an ordinary plunge grinding process, in which the rotational speeds of the workpiece and the grinding wheel are held in a fixed ratio. A non-circular grinding wheel is used to transfer its geometry onto the workpiece. The authors use a worldwide unique machine tool that was designed especially for this technology. Very high workpiece spindle speeds (up to 4500 rpm) are mandatory for the success of this grinding process. The grinding is performed in a two-step process. For roughing, a highly porous, vitrified-bonded grinding wheel with a medium grain size is used; it ensures high specific material removal rates for efficiently producing the non-circular geometry on the workpiece. This process step is governed by a force control algorithm that uses data acquired from a three-component force sensor located in the dead centre of the tailstock. For finishing, a grinding wheel with a fine grain size is used. Roughing and finishing are performed consecutively within the same clamping of the workpiece, using two locally separated grinding spindles. The approach of RPM-Synchronous Non-Circular Grinding yields a substantial efficiency enhancement in non-circular grinding. For the first time, three-dimensional non-circular shapes can be ground, which opens up various fields of application. The automotive industry in particular shows great interest in this emerging finishing technology.
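
For illustration, the sketch below captures the two ideas of a fixed workpiece-to-wheel speed ratio and a force-controlled roughing infeed; the speed ratio, controller gain, limits, and target force are assumptions made for the example, not the authors' implementation.

    # Hedged sketch: fixed speed-ratio coupling plus a simple proportional force
    # controller for the roughing infeed. All numeric values are illustrative.

    def wheel_speed(workpiece_rpm, ratio=3):
        """Grinding-wheel speed locked to a fixed ratio of the workpiece speed."""
        return ratio * workpiece_rpm

    def infeed_rate(normal_force, target_force=80.0, kp=0.002,
                    min_rate=0.0, max_rate=0.05):
        """Proportional adjustment of the infeed rate (mm/s) from the measured
        normal grinding force (N), so the roughing step tracks a target force."""
        rate = kp * (target_force - normal_force)
        return max(min_rate, min(max_rate, rate))

    print(wheel_speed(4500))             # wheel rpm at maximum workpiece speed
    print(infeed_rate(normal_force=95))  # infeed backs off when force is too high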

Keywords: efficiency enhancement, finishing machining, non-circular grinding, rpm-synchronous grinding

Procedia PDF Downloads 273
624 Study of the Transport of ²²⁶Ra Colloidal in Mining Context Using a Multi-Disciplinary Approach

Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier

Abstract:

²²⁶Ra is one of the radionuclides resulting from the decay of ²³⁸U. Owing to its half-life (1600 y) and high specific activity (3.7 × 10¹⁰ Bq/g), ²²⁶Ra is found at ultra-trace levels in the natural environment (usually below 1 Bq/L, i.e., about 10⁻¹³ mol/L). Because it decays into ²²²Rn, a shorter-lived radioactive gas (half-life 3.8 days) that is difficult to control and dangerous for humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal or particulate forms. Because of the size of colloids, generally between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction can be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and by the innovative Diffusive Gradients in Thin films (DGT) technique, which measures the dissolved fraction (< 10 nm); the colloidal fraction can then be estimated from the difference. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy and scanning electron microscopy. All of these data, acquired at a rehabilitated former uranium mine, made it possible to build a geochemical model with the geochemical calculation code PhreeqC that describes, as accurately as possible, the colloidal transport of ²²⁶Ra. At some of the sampling points, colloidal transport was found to account for up to 95% of the total ²²⁶Ra measured in the water. Mineralogical characterization and the associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra within its structure. Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.
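
For illustration, the colloidal fraction can be estimated as the difference between the 0.2 µm filtered concentration and the DGT-measured dissolved concentration; the minimal sketch below performs this bookkeeping with hypothetical sample labels and activities, not the study's measurements.

    # Hedged sketch: estimating the colloidal 226Ra fraction from paired
    # measurements, assuming the 0.2 um filtrate holds dissolved + colloidal
    # species while DGT (< 10 nm) captures only the truly dissolved pool.

    def colloidal_fraction(filtered_02um_bq_l, dgt_dissolved_bq_l):
        """Return the colloidal 226Ra activity (Bq/L) and its share of the filtrate."""
        colloidal = max(filtered_02um_bq_l - dgt_dissolved_bq_l, 0.0)
        share = colloidal / filtered_02um_bq_l if filtered_02um_bq_l > 0 else 0.0
        return colloidal, share

    samples = {"P1": (0.80, 0.04), "P2": (0.35, 0.20)}  # (0.2 um, DGT) in Bq/L
    for name, (total, dissolved) in samples.items():
        colloidal, share = colloidal_fraction(total, dissolved)
        print(f"{name}: colloidal = {colloidal:.2f} Bq/L ({share:.0%} of filtered)")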

Keywords: colloids, mining context, radium, transport

Procedia PDF Downloads 147
623 Three Issues for Integrating Artificial Intelligence into Legal Reasoning

Authors: Fausto Morais

Abstract:

Artificial intelligence has been widely used in law. Programs are able to classify suits, identify decision-making patterns, predict outcomes, and formalize legal arguments. In Brazil, the artificial intelligence system Victor has been classifying cases according to the Supreme Court's standards. When such programs perform these tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the legal anthropocentrism argument, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court's use of the Victor program. The program has brought efficiency and consistency; on the other hand, there is a real risk of over-standardizing factual and normative legal features. Legal clerks and programmers should therefore work together to develop an adequate way of modelling legal language in computational code. If this is possible, intelligent programs may enact legal decisions in easy cases automatically, and at this point the legal anthropocentrism argument comes into play. This argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and unity of self. In spite of that, it is possible to counter the anthropocentrism argument and to show how intelligent programs may overcome human shortcomings such as biased cognition, emotion, and imperfect memory. In this way, intelligent machines could pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificial intelligence programs can be helpful beyond easy cases. In hard cases, they are able to identify legal standards and legal arguments by using machine learning. For that, a dataset of legal decisions on a particular matter must be available, which is already a reality in the Brazilian judiciary. With such a procedure, artificial intelligence programs can support human decisions in hard cases by providing legal standards and arguments based on empirical evidence. These legal features carry argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or depart from a legal standard.
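
For illustration, the kind of case classification mentioned above can be sketched as supervised text classification; the example below uses TF-IDF features and logistic regression with hypothetical case excerpts and labels, and is not a description of the Victor system itself.

    # Hedged sketch: routing case texts to precedent categories with a standard
    # TF-IDF + logistic-regression pipeline. Training texts and labels are
    # hypothetical placeholders, not real court data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "appeal on pension adjustment under general repercussion theme",
        "consumer dispute over airline delay and moral damages",
        "tax dispute on calculation base for electricity levies",
        "pension revision request citing constitutional precedent",
    ]
    labels = ["pension", "consumer", "tax", "pension"]

    classifier = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    classifier.fit(texts, labels)

    print(classifier.predict(["new appeal about pension revision and precedent"]))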

Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning

Procedia PDF Downloads 137
622 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out through its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of the structural and dynamical properties of PEI against experimental data. In the second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion of bead-based probability distributions obtained from the AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In the third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects crucial for the realistic modeling of DNA-PEI polyplexes, such as the treatment of electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation as a function of PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles rather than on changes in DNA-strand curvature. The insights gained are expected to be of significant help in designing effective gene-delivery applications.
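
For illustration, the Boltzmann inversion step mentioned above maps a bead-bead distribution onto a tabulated potential via U(r) = -kB*T*ln g(r); the minimal sketch below applies this to a synthetic radial distribution function standing in for an AA-derived one.

    # Hedged sketch: direct Boltzmann inversion of a bead-bead radial distribution
    # function into a tabulated CG potential. The input RDF is a synthetic
    # placeholder; in practice it would come from the all-atom simulations.

    import numpy as np

    KB = 0.0083145  # Boltzmann constant in kJ/(mol K)

    def boltzmann_invert(g_r, temperature=300.0):
        """Return U(r) = -kB*T*ln g(r) in kJ/mol; NaN where g(r) == 0."""
        g_safe = np.where(g_r > 0, g_r, np.nan)
        return -KB * temperature * np.log(g_safe)

    r = np.linspace(0.3, 1.5, 121)                       # bead-bead distance, nm
    g_r = 1.0 + 0.6 * np.exp(-((r - 0.55) / 0.08) ** 2)  # synthetic RDF with one peak
    u_r = boltzmann_invert(g_r)

    print(f"potential minimum ~ {np.nanmin(u_r):.2f} kJ/mol "
          f"at r = {r[np.nanargmin(u_r)]:.2f} nm")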

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 111