Search results for: model based clustering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38056

5416 Application of the Metric Dimension of Graphs in Unraveling the Complexity of Hyperacusis

Authors: Hassan Ibrahim

Abstract:

The prevalence of hyperacusis, an auditory condition characterized by heightened sensitivity to sounds, continues to rise, posing challenges for effective diagnosis and intervention. This work deepens the understanding of hyperacusis etiology by employing graph theory as a novel analytical framework. We constructed a comprehensive graph wherein nodes represent various factors associated with hyperacusis, including aging, head or neck trauma, infection/virus, depression, migraines, ear infection, anxiety, and other potential contributors. Relationships between factors are modeled as edges, allowing us to visualize and quantify the interactions within the etiological landscape of hyperacusis. We employ the concept of the metric dimension of a connected graph to identify key nodes (landmarks) that serve as critical influencers in the interconnected web of hyperacusis causes. This approach offers a unique perspective on the relative importance and centrality of different factors, shedding light on the complex interplay between physiological, psychological, and environmental determinants. Visualization techniques were also employed to enhance interpretation and facilitate the identification of the central nodes. This research contributes to the growing body of knowledge surrounding hyperacusis by offering a network-centric perspective on its multifaceted causes. The outcomes hold the potential to inform clinical practices, guiding healthcare professionals in prioritizing interventions and personalized treatment plans based on the identified landmarks within the etiological network. Through the integration of graph theory into hyperacusis research, the complexity of this auditory condition is unraveled, paving the way for more effective approaches to its management.
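To make the landmark idea concrete, the following is a minimal brute-force sketch of computing the metric dimension of a small connected graph: the smallest set of landmark nodes such that every node is uniquely identified by its vector of distances to the landmarks. The toy graph and node names are illustrative only, not the authors' actual etiological network.

```python
from itertools import combinations
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path distances from src to every node (unweighted graph)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """Smallest resolving set of a connected graph, by exhaustive search."""
    nodes = list(adj)
    d = {u: bfs_distances(adj, u) for u in nodes}
    for k in range(1, len(nodes) + 1):
        for landmarks in combinations(nodes, k):
            # every node must have a unique distance vector to the landmarks
            sigs = {tuple(d[l][v] for l in landmarks) for v in nodes}
            if len(sigs) == len(nodes):
                return k, landmarks
    return len(nodes), tuple(nodes)

# Toy example: a path a-b-c-d has metric dimension 1 (one endpoint suffices)
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
k, landmarks = metric_dimension(adj)  # k == 1
```

Exhaustive search is exponential in the number of nodes, so it only suits small factor networks like the one described; larger graphs need heuristics.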

Keywords: auditory condition, connected graph, hyperacusis, metric dimension

Procedia PDF Downloads 39
5415 Effect of Black Locust Trees on the Nitrogen Dynamics of Black Pine Trees in Shonai Coastal Forest, Japan

Authors: Kazushi Murata, Fabian Watermann, O. B. Herve Gonroudobou, Le Thuy Hang, Toshiro Yamanaka, M. Larry Lopez C.

Abstract:

Aims: Black pine coastal forests play an important role as windbreaks and as a natural barrier to sand and salt spray inland in Japan. The recent invasion of N₂-fixing black locust (Robinia pseudoacacia) trees in these forests is expected to make a nutritional contribution to black pine tree growth. Thus, the effect of this new source of N on the N assimilation of black pine trees needs to be assessed. Methods: In order to evaluate this contribution, the tree-ring isotopic composition (δ¹⁵N) and nitrogen content (%N) of black pine (Pinus thunbergii) trees in a pure stand (BPP) and a mixed stand (BPM) with black locust (BL) trees were measured for the period 2000–2019 for BPP and BL and 1990–2019 for BPM. The same measurements were conducted on plant tissues and soil samples. Results: The tree-ring δ¹⁵N values showed that over the last 30 years, BPM trees gradually switched from BPP to BL-derived soil N starting in the 1990s, which became the dominant N source from 2000, as no significant difference was found between BPM and BL tree-ring δ¹⁵N values from 2000 to 2019. No difference between BPM and BL root and sapwood δ¹⁵N values was found, but BPM foliage (−2.1‰) differed from BPP (−4.4‰) and BL (−0.3‰), which is related to the different N assimilation pathways of BP and BL. Conclusions: Based on the results of this study, the assimilation of BL-derived N inferred from the δ¹⁵N values of BPM tissues is the result of an increase in soil bioavailable N with a higher δ¹⁵N value.

Keywords: nitrogen-15, N₂-fixing species, mixed stand, soil, tree rings

Procedia PDF Downloads 65
5414 Microgravity, Hydrological and Meteorological Monitoring of Shallow Groundwater Aquifer in Al-Ain, UAE

Authors: Serin Darwish, Hakim Saibi, Amir Gabr

Abstract:

The United Arab Emirates (UAE) is situated within an arid zone where recharge of the groundwater is very low. Groundwater is the primary source of water in the UAE. However, rapid expansion, population growth, agriculture, and industrial activities have negatively affected these limited water resources. The shortage of water resources has become a serious concern due to the over-pumping of groundwater to meet demand. In addition to the groundwater deficit, the UAE has one of the highest per capita water consumption rates in the world. In this study, a combination of time-lapse measurements of microgravity and of depth to groundwater level in selected wells in Al-Ain city was used to estimate the variations in groundwater storage. Al-Ain is the second largest city in the Emirate of Abu Dhabi and the third largest city in the UAE. The groundwater in this region has been overexploited. Relative gravity measurements were acquired using the Scintrex CG-6 Autograv. This latest-generation gravimeter from Scintrex Ltd provides fast, precise gravity measurements with automated corrections for temperature, tide, and instrument tilt, and rejection of noisy data. The CG-6 gravimeter has a resolution of 0.1 μGal. The purpose of this study is to measure groundwater storage changes in the shallow aquifers through application of the microgravity method. The gravity method is a nondestructive technique that allows collection of data at almost any location over the aquifer. Preliminary results indicate a possible relationship between microgravity and water levels, but more work needs to be done to confirm this. The results will help to relate monthly microgravity changes to hydrological and hydrogeological changes in the shallow phreatic aquifer. The study will be useful for water management and for future investigations.

Keywords: Al-Ain, arid region, groundwater, microgravity

Procedia PDF Downloads 153
5413 Tectono-Thermal Evolution of Ningwu-Jingle Basin in North China Craton: Constraints from Apatite (U–Th-Sm)/He and Fission Track Thermochronology

Authors: Zhibin Lei, Minghui Yang

Abstract:

Ningwu-Jingle basin is a structural syncline that has undergone a complex tectono-thermal history since the Cretaceous. It stretches along the strike of the northern Lvliang Mountains, the most important mountains in the middle and west of the North China Craton. The Mesozoic units make up the core of Ningwu-Jingle Basin, with pre-Mesozoic units making up its flanks. The available low-temperature thermochronology implies that Ningwu-Jingle Basin has experienced two stages of uplift: 94±7 Ma to 111±8 Ma (Albian to Cenomanian) and 62±4 to 75±5 Ma (Danian to Maastrichtian). In order to constrain its tectono-thermal history in the Cenozoic, both apatite (U–Th–Sm)/He and fission track dating analyses were applied to three Middle Jurassic and three Upper Triassic sandstone samples. The central fission track ages range from 74.4±8.8 Ma to 66.0±8.0 Ma (Campanian to Maastrichtian), which matches well with previous data. The central He ages range from 20.1±1.2 Ma to 49.1±3.0 Ma (Ypresian to Burdigalian). Inverse thermal modeling was performed based on both the apatite fission track and (U–Th–Sm)/He data. The thermal history obtained reveals that all six sandstone samples crossed the high-temperature limit of the fission track partial annealing zone by the uppermost Cretaceous, and that of the He partial retention zone by the uppermost Eocene to early Oligocene. The result indicates that the middle and west of the North China Craton were not stable in the Cenozoic.

Keywords: apatite fission track thermochronology, apatite (U–Th)/He thermochronology, Ningwu-Jingle basin, North China craton, tectono-thermal history

Procedia PDF Downloads 262
5412 Ferrites of the MeFe₂O₄ System (Me = Zn, Cu, Cd) and Their Two Faces

Authors: B. S. Boyanov, A. B. Peltekov, K. I. Ivanov

Abstract:

The ferrites of Zn, Cd, and Cu, and mixed ferrites with NiO, MnO, MgO, CoO, ZnO, and BaO, combine the properties of dielectrics, semiconductors, ferromagnets, catalysts, etc. The ferrites are used in an impressive range of applications due to their remarkable properties. A specific disadvantage of ferrites is that they also form as undesirable by-products in many processes connected with metal production. They are very stable and poorly soluble compounds. The ZnFe₂O₄ obtained in zinc production, binding about 15% of the total zinc, remains practically insoluble in dilute solutions of sulfuric acid. This decreases the degree of zinc recovery and necessitates further processing of the zinc-containing cake. In this context, the ferrites ZnFe₂O₄, CdFe₂O₄, and CuFe₂O₄ were synthesized under laboratory conditions using ceramic technology. Their homogeneity and structure were proven by X-ray diffraction analysis and Mössbauer spectroscopy. The synthesized ferrites were subjected to strong acid, high-temperature leaching with solutions of H₂SO₄, HCl, and HNO₃ (7, 10, and 15%). The results indicate that the highest degree of leaching of Zn, Cd, and Cu from the ferrites is achieved with HCl. The resulting values for the degree of leaching of metals using H₂SO₄ are lower, but still remain significantly higher under all of the experimental conditions compared to the values obtained using HNO₃. Five zinc sulfide concentrates were characterized for iron content by chemical analysis and a web-based information system, and for iron phases by Mössbauer spectroscopy. The charging was optimized using the criterion of the minimal amount of zinc ferrite produced when roasting the concentrates in a fluidized bed. The results obtained are interpreted in terms of hydrometallurgical zinc production and maximum recovery of zinc, copper, and cadmium from the initial zinc sulfide concentrates after roasting.

Keywords: hydrometallurgy, inorganic acids, solubility, zinc ferrite

Procedia PDF Downloads 436
5411 Protection of Steel Bars in Reinforced Concrete with Zinc-Based Coatings

Authors: Hamed Rajabzadeh Gatabi, Soroush Dastgheibifard, Mahsa Asnafi

Abstract:

There is no doubt that reinforced concrete has been known for many years as one of the most significant materials used in the construction industry. However, some natural elements in the environment can contribute to its corrosion or failure, one of which is bar, or so-called reinforcement, failure. To combat this problem, one of the corrosion prevention methods investigated was the barrier protection method, implemented through the application of an organic coating, specifically fusion-bonded epoxy. In this study, a comparative method was applied to two different kinds of coated bars (zinc-rich epoxy and polyamide epoxy coated bars) and also to uncoated bars. With the aim of evaluating these reinforced concretes, the adhesion, toughness, thickness, and corrosion performance of the coatings were compared using tools such as Cu/CuSO₄ electrodes, EIS, etc. The different types of concrete were exposed to a salty environment (NaCl 3.5%) and their durability was measured. According to the experiments, the thick epoxy coatings have acceptable adhesion and strength. The adhesion of the polyamide epoxy coatings to the bars was slightly better than that of the zinc-rich epoxy coatings; nonetheless, they were stiffer than the zinc-rich epoxy coatings. Conversely, bars coated with zinc-rich epoxy showed more negative corrosion potentials, which indicates sacrificial protection of the bars by zinc particles. On the whole, zinc-rich epoxy coatings are more corrosion-proof than polyamide epoxy coatings due to the sacrificial consumption of zinc particles and some other parameters; additionally, if epoxy coatings without surface defects are carefully applied to the rebar surface, the life of steel structures can be expected to increase dramatically.

Keywords: surface coating, epoxy polyamide, reinforced concrete bars, salty environment

Procedia PDF Downloads 289
5410 Improving Sample Analysis and Interpretation Using QIAGEN's Latest Investigator STR Multiplex PCR Assays with a Novel Quality Sensor

Authors: Daniel Mueller, Melanie Breitbach, Stefan Cornelius, Sarah Pakulla-Dickel, Margaretha Koenig, Anke Prochnow, Mario Scherer

Abstract:

The European STR standard set (ESS) of loci, as well as the new expanded CODIS core loci set recommended by the CODIS Core Loci Working Group, has led to higher standardization and harmonization in STR analysis across borders. Various multiplex PCR assays have since been developed for the analysis of these 17 ESS or 23 CODIS expansion STR markers that all meet high technical demands. However, forensic analysts are often faced with difficult STR results and the questions they raise: What is the reason that no peaks are visible in the electropherogram? Did the PCR fail? Was the DNA concentration too low? QIAGEN's newest Investigator STR kits contain a novel Quality Sensor (QS) that acts as an internal performance control and gives useful information for evaluating the amplification efficiency of the PCR. QS indicates whether the reaction has worked in general and furthermore allows discrimination between the presence of inhibitors and DNA degradation as the cause of the typical ski-slope effect observed in STR profiles of such challenging samples. This information can be used to choose the most appropriate rework strategy. Based on the latest PCR chemistry, called FRM 2.0, QIAGEN now provides the next technological generation for STR analysis, the Investigator ESSplex SE QS and Investigator 24plex QS Kits. The new PCR chemistry ensures robust and fast PCR amplification with improved inhibitor resistance and easy handling for a manual or automated setup. The short cycling time of 60 min reduces the duration of the total PCR analysis, making completion of the whole workflow in one day more likely. To facilitate the interpretation of STR results, a smart primer design was applied for the best possible marker distribution, highest concordance rates, and robust gender typing.

Keywords: PCR, QIAGEN, quality sensor, STR

Procedia PDF Downloads 496
5409 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse

Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei

Abstract:

In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is key to ensuring optimal laser–target interaction. One of the main difficulties in controlling the temporal shape is achieving control accuracy in the foot part of a high-contrast pulse. Based on an analysis of pulse perturbation during amplification and frequency conversion in high-power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high-contrast pulses. In this approach, the foot and peak parts of the high-contrast pulse are controlled independently: they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high-contrast pulse, the beam area ratio of the two parts is optimized so that the beam fluence and intensity of the foot part are increased, which greatly simplifies pulse control. Meanwhile, the near-field distribution of the two parts is also carefully designed to make sure their F-numbers are the same, which is another important parameter for laser–target interaction. The integrated calculation results show that for a pulse with a contrast of up to 500, the deviation of the foot part can be improved from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, which is almost the same deviation as that of the peak part. These results are expected to bring a breakthrough in the power balance of high-power laser facilities.
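One way to read the quoted numbers is as follows; this is a sketch under our own interpretation of "beam area ratio" (foot energy confined to 1/20 of the aperture), not the authors' exact definition:

```python
# Illustrative arithmetic only: how concentrating the foot part of a
# high-contrast pulse into a smaller aperture raises its fluence.
# The reading of "beam area ratio" here is an assumption for illustration.

contrast = 500.0       # peak-to-foot intensity contrast of the target pulse
area_ratio = 1 / 20    # fraction of the aperture assigned to the foot part

# If the foot energy is confined to 1/20 of the aperture, its near-field
# fluence rises by the inverse of the area ratio:
fluence_gain = 1 / area_ratio               # 20x higher foot fluence
# The intensity gap the control system must span shrinks accordingly:
effective_contrast = contrast * area_ratio  # 25, far easier than 500
```

Under this reading, the foot part operates at fluences only ~25x below the peak rather than 500x, which is consistent with the reported improvement in foot-part control accuracy.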

Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance

Procedia PDF Downloads 147
5408 A Review of Kinematics and Joint Load Forces in Total Knee Replacements Influencing Surgical Outcomes

Authors: Samira K. Al-Nasser, Siamak Noroozi, Roya Haratian, Adrian Harvey

Abstract:

A total knee replacement (TKR) is a surgical procedure necessary when there is severe pain and/or loss of function in the knee. Surgeons balance the load in the knee and the surrounding soft tissue by feeling the tension at different ranges of motion. This method can be unreliable and lead to early failure of the joint. The ideal kinematics and load distribution have been debated significantly based on previous biomechanical studies surrounding both TKRs and normal knees. Intraoperative sensors like VERASENSE and eLibra have provided a method for the quantification of the load indicating a balanced knee. A review of the literature written about intraoperative sensors and tension/stability of the knee was done. Studies currently debate the quantification of the load in medial and lateral compartments specifically. However, most research reported that following a TKR the medial compartment was loaded more heavily than the lateral compartment. In several cases, these results were shown to increase the success of the surgery because they mimic the normal kinematics of the knee. In conclusion, most research agrees that an intercompartmental load differential of between 10 and 20 pounds, where the medial load was higher than the lateral, and an absolute load of less than 70 pounds was ideal. However, further intraoperative sensor development could help improve the accuracy and understanding of the load distribution on the surgical outcomes in a TKR. A reduction in early revision surgeries for TKRs would provide an improved quality of life for patients and reduce the economic burden placed on both the National Health Service (NHS) and the patient.

Keywords: intraoperative sensors, joint load forces, kinematics, load balancing, total knee replacement

Procedia PDF Downloads 136
5407 Assessing the Feasibility of Commercial Meat Rabbit Production in the Kumasi Metropolis of Ghana

Authors: Nana Segu Acquaah-Harrison, James Osei Mensah, Richard Aidoo, David Amponsah, Amy Buah, Gilbert Aboagye

Abstract:

The study aimed at assessing the feasibility of commercial meat rabbit production in the Kumasi Metropolis of Ghana. Structured and unstructured questionnaires were utilized in obtaining information from two hundred meat consumers and 15 meat rabbit farmers. Data were analyzed using the Net Present Value (NPV), Internal Rate of Return (IRR), and Benefit-Cost Ratio (BCR)/Profitability Index (PI) techniques, percentages, and chi-square contingency tests. The study found that the current demand for rabbit meat is low (36%). The desirable nutritional attributes of rabbit meat and other socio-economic factors of meat consumers make the potential demand for rabbit meat high (69%). It was estimated that GH¢5,292 (approximately $2,672) was needed as start-up capital for a 40-doe unit meat rabbit farm in the Kumasi Metropolis. The cost of breeding animals, housing, and equipment formed 12.47%, 53.97%, and 24.87%, respectively, of the initial estimated capital. A Net Present Value of GH¢5,910.75 (approximately $2,984) was obtained at the end of the fifth year, with an internal rate of return and profitability index of 70% and 1.12, respectively. The major constraints identified in meat rabbit production were the low price of rabbit meat, shortage of fodder, pests and diseases, high cost of capital, high cost of operating materials, and veterinary care. Based on the analysis, it was concluded that meat rabbit production is feasible in the Kumasi Metropolis of Ghana. The study recommends mass advertisement, farmer associations, and the adoption of new technologies in the production process to enhance productivity.
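The appraisal criteria named above (NPV, IRR, PI) can be sketched in a few lines. The yearly cash flows below are invented for demonstration; only the initial capital of GH¢5,292 comes from the abstract, and the PI computation reflects our reading that the reported 1.12 is NPV divided by initial capital (5,910.75 / 5,292 ≈ 1.12).

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the (negative) initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

initial_capital = 5292.0   # start-up capital reported in the study (GH¢)
reported_npv = 5910.75     # NPV at the end of year five, as reported

# Profitability index, read here as NPV over initial capital:
pi = reported_npv / initial_capital  # ~1.12, matching the abstract
```

For example, `irr([-100.0, 110.0])` returns 0.10, since investing 100 and receiving 110 one year later yields a 10% internal rate of return.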

Keywords: feasibility, commercial meat rabbit, production, Kumasi, Ghana

Procedia PDF Downloads 133
5406 CO₂/CH₄ Exchange Studies on Shales to Assess the Potential for CO₂ Storage and Enhanced Shale Gas Recovery

Authors: Mateusz Kudasik, Katarzyna Kozieł

Abstract:

The work included detailed studies of CO₂/CH₄ exchange on a shale core from the Lewino-1G2 well (Poland) from a depth of 3408 m. The sample permeability coefficients were determined under confining pressures from 5 MPa to 35 MPa. These studies showed that at a confining pressure of 35 MPa, corresponding to a depth of about 1000 m, the shale core was impermeable in the direction perpendicular to the bedding, and in the direction parallel to the bedding the sample had very low permeability (k∞ = 0.001 mD). The sorption tests performed showed low sorption capacities, which amounted to a maximum of 1.28 cm³/g for CO₂ and 0.87 cm³/g for CH₄ at a pressure of 1.4 MPa. The most important studies used to assess the possibilities of CO₂ storage and gas recovery from shale rocks were the CO₂/CH₄ exchange experiments, which were carried out under confining pressure conditions of 5 MPa and 30 MPa. These experiments were carried out on a unique apparatus, which makes it possible to apply a confining pressure corresponding to in situ conditions. The results obtained made it possible to carry out a comprehensive balance of gas exchange during the injection of CO₂ into the shale sample, with simultaneous recovery of CH₄. Based on the sorption and gas exchange studies conducted on the core sample under confining pressure, it was found that under in situ conditions, at the 3000–4000 m depths of shale gas occurrence in Poland, where the confining pressure can be about 100 MPa, (i) a poorly developed pore structure, (ii) very low permeability, and (iii) low sorption properties make shale rocks poorly predisposed to the application of CO₂ storage technology with simultaneous recovery of CH₄. Without stimulation of CO₂/CH₄ exchange rates through fracturing, the effectiveness of CO₂-ESGR technology on shale rock is very low. The research presented in this work is important for a precise assessment of the potential of CO₂-ESGR technology.

Keywords: shale gas, shale rocks, CO₂/CH₄ exchange, permeability, sorption, CO₂, CH₄

Procedia PDF Downloads 11
5405 Psychological Nano-Therapy: A New Method in Family Therapy

Authors: Siamak Samani, Nadereh Sohrabi

Abstract:

Psychological nano-therapy is a new method based on systems theory. According to the theory, systems with severe dysfunctions are resistant to change, and psychological nano-therapy helps therapists break this ice. Two key concepts in psychological nano-therapy are nano-functions and nano-behaviors. The most important step in applying psychological nano-therapy to family therapy is selecting the most effective nano-function and nano-behavior. The aim of this study was to check the effectiveness of psychological nano-therapy for family therapy. A one-group pre-test–post-test design (quasi-experimental) was applied. The sample consisted of ten families with severe marital conflict. An important characteristic of these families was their resistance to participating in family therapy. In this study, sending respectful (nano-function) text messages (nano-behavior) by cell phone was applied as the treatment. The cohesion/respect subscale from the self-report family processes scale and the family readiness for therapy scale were used to assess all family members at pre-test and post-test. One family member was asked to send a respectful text message to the other family members every day for a week. The content of the text messages was selected and checked by the therapist. To compare pre-test and post-test scores, a paired-sample t-test was used. The results showed significant differences in both the cohesion/respect score and family readiness for therapy between pre-test and post-test. The results revealed that these families found a better atmosphere for participation in a complete family therapy program. Indeed, this study showed that psychological nano-therapy is an effective method for making families ready for therapy.

Keywords: family therapy, family conflicts, nano-therapy, family readiness

Procedia PDF Downloads 659
5404 Effects of Intracerebroventricular Injection of Spexin and Its Interaction with Nitric Oxide, Serotonin, and Corticotropin Receptors on Central Food Intake Regulation in Chicken

Authors: Mohaya Farzin, Shahin Hassanpour, Morteza Zendehdel, Bita Vazir, Ahmad Asghari

Abstract:

Aim: There are several differences between birds and mammals in terms of food intake regulation. Therefore, this study aimed to investigate the effects of intracerebroventricular (ICV) injection of spexin and its interaction with nitric oxide, serotonin, and corticotropin receptors on central food intake regulation in broiler chickens. Materials and Methods: In experiment 1, chickens received an ICV injection of saline, PCPA (p-chlorophenylalanine, 1.25 µg), spexin, or PCPA+spexin. In experiments 2-7, 8-OH-DPAT (5-HT1A agonist, 15.25 nmol), SB-242084 (5-HT2C receptor antagonist, 1.5 µg), L-arginine (precursor of nitric oxide, 200 nmol), L-NAME (nitric oxide synthetase inhibitor, 100 nmol), Astressin-B (CRF1/CRF2 receptor antagonist, 30 µg), and Astressin2-B (CRF2 receptor antagonist, 30 µg) were injected into chickens instead of PCPA. Food intake was then measured for 120 minutes after the injection. Results: Spexin significantly decreased food consumption (P<0.05). Concomitant injection of SB-242084+spexin attenuated spexin-induced hypophagia (P<0.05). Co-injection of L-arginine+spexin enhanced spexin-induced hypophagia, and this effect was reversed by L-NAME (P<0.05). Also, concomitant injection of Astressin-B+spexin or Astressin2-B+spexin enhanced spexin-induced hypophagia (P<0.05). Conclusions: Based on these observations, spexin-induced hypophagia may be mediated by nitric oxide and 5-HT2C, CRF1, and CRF2 receptors in neonatal broiler chickens.

Keywords: spexin, serotonin, corticotropin, nitric oxide, food intake, chicken

Procedia PDF Downloads 74
5403 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the abnormal fluctuation scaling known as Taylor's law. This method is extended to handle sales data that are incomplete because of stock-outs by introducing maximum likelihood estimation for censored data. A way to determine optimal stock while pricing the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss realizes halved disposal at a proportionality constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially with large sales numbers.
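The role of the proportionality constant can be illustrated with a toy stock rule. The fluctuation model below (a Poisson term plus a Taylor's-law systematic term) and the one-sigma safety buffer are assumptions for illustration, not the authors' actual particle-filter method; only the constant 0.12, quoted in the abstract for processed food items, comes from the source.

```python
import math

def taylor_sigma(mu, c=0.12):
    """Sales fluctuation: Poisson term plus Taylor's-law systematic term.

    sigma^2 = mu + (c * mu)^2, so for large mean sales mu the systematic
    term c*mu dominates and relative fluctuation approaches the constant c.
    """
    return math.sqrt(mu + (c * mu) ** 2)

def stock_level(mu, z=1.0, c=0.12):
    """Toy stock rule: expected sales plus z standard deviations of buffer."""
    return mu + z * taylor_sigma(mu, c)

# Mean daily sales forecast of 400 units with a one-sigma buffer:
stock = stock_level(400.0)  # 400 + sqrt(400 + 48^2) = 452 units
```

A small `c` means the buffer (and hence over-stock waste) stays small relative to mean sales, which is consistent with the abstract's observation that waste reduction is cheapest when the Taylor's-law constant is small.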

Keywords: food waste reduction, particle filter, point-of-sales, sustainable development goals, Taylor's law, time series analysis

Procedia PDF Downloads 131
5402 The Colorectal Cancer in Patients of Eastern Algeria

Authors: S. Tebibel, C. Mechati, S. Messaoudi

Abstract:

Algeria is currently experiencing the same rate of cancer progression as that registered in recent years in Western countries. Colorectal cancer, increasingly a major public health problem, is the most common form of cancer after breast and cervical cancer in women and prostate cancer in men. Our work is based on a retrospective study to determine the cases of colorectal cancer across eastern Algeria. Our goal is to carry out an epidemiological, histological, and immunohistochemical study to investigate different techniques for the diagnosis of colorectal cancer and their value and specificity in detecting the disease. The study includes 110 patients (aged between 20 and 87 years) with colorectal cancer, for whom inclusion and exclusion criteria were established. In our study, colorectal cancer shows a male predominance, with a sex ratio of 1.99, and the most affected age group is between 50 and 59 years. We noted that the colon cancer rate is higher than the rectal cancer rate, with frequencies of 60.91% and 39.09%, respectively. In the series of colon cancers, Lieberkühn adenocarcinoma (ADK) is histologically the most common type, at 85.07% of all cases. In contrast, the proportion of mucinous (colloid) ADK is only 1.49%. Well-differentiated ADKs are very prevalent in our series, representing 83.58% of cases, while moderately and poorly differentiated adenocarcinomas account for 2.99% and 0.05%, respectively. Among the histological varieties of rectal ADK, Lieberkühn adenocarcinoma represents the most common histological form in our cohort, at 76.74%, while the mucinous colloid form accounts for 13.95%. Research on the mutation of the gene encoding K-ras, a major step in the targeted therapy of colorectal cancers, is underway in our study. Colorectal cancer is the subject of much promising research, including the evaluation of new therapies (antiangiogenic monoclonal antibodies), the search for predictors of sensitivity to chemotherapy, and new prognostic markers using techniques of molecular biology and proteomics.

Keywords: adenocarcinoma, age, colorectal cancer, epidemiology, histological section, sex

Procedia PDF Downloads 344
5401 Women Academics' Insecure Identity at Work: A Millennials Phenomenon

Authors: Emmanouil Papavasileiou, Nikos Bozionelos, Liza Howe-Walsh, Sarah Turnbull

Abstract:

Purpose: The research focuses on women academics' insecure identity at work and examines its link with generational identity. The aim is to enrich understanding of identities at work as a crucial attribute of managing academics in the context of the proliferation of managerialist controls of audit, accountability, monitoring, and performativity. Methodology: A positivist quantitative methodology was utilized. Data were collected from the Scientific Women's Academic Network (SWAN) Charter. Responses from 155 women academics based in the British higher education system were analysed. Findings: The analysis showed a high prevalence of strong imposter feelings among participants, suggesting high insecurity at work among women academics in the United Kingdom. Generational identity was related to imposter feelings; in particular, Millennials scored significantly higher than the other generational groups. Research implications: The study shows that imposter feelings are variously manifested among the prevalent generations of women academics, and that generational identity is a significant antecedent of such feelings. Research limitations: Caution should be exercised in generalizing the findings to national cultural contexts beyond the United Kingdom. Practical and social implications: Contrary to popular depictions of Millennials as self-centered, narcissistic, materialistic, and demanding, women academics who are members of this generational group appear significantly more insecure than those of the preceding generations. Value: The study provides insight into women academics' identity at work as a function of generational identity and offers a fruitful avenue for further research within and beyond this gender group and profession.

Keywords: academics, generational diversity, imposter feelings, United Kingdom, women, work identity

Procedia PDF Downloads 146
5400 Planning Fore Stress II: Study on Resiliency of New Architectural Patterns in Urban Scale

Authors: Amir Shouri, Fereshteh Tabe

Abstract:

Master planning and urban infrastructure’s thoughtful and sequential design strategies will play a major role in reducing the damage that natural disasters, war and social or population-related conflicts cause to cities. Defensive strategies have been revised throughout the history of mankind after damage from natural disasters, war experiences and terrorist attacks on cities; lessons have been learnt from earthquakes, from the casualties of the two world wars of the 20th century and from terrorist activities of all times. Particularly after the September 11th attack on New York’s World Trade Centre (WTC) and Hurricane Sandy, which struck New York in 2012, there have been a series of serious collaborations between law-making authorities, urban planners and architects, and defence-related organizations to, firstly, prepare for and/or prevent such events and, secondly, reduce the human loss and economic damage to a minimum. This study will work on developing a model of planning for New York City in which its citizens suffer minimum impact in times of threat, with minimum economic damage to the city after the stress has passed. The main discussion in this proposal will focus on pre-hazard, hazard-time and post-hazard transformative policies and strategies that will reduce “life casualties” and ease “economic recovery” in post-hazard conditions. This proposal is going to scrutinize the idea that one of the key solutions in this path might be focusing on all overlaying possibilities on the architectural platforms of three fundamental infrastructures, namely transportation, power-related sources and defensive abilities, within a dynamic-transformative framework that will provide maximum safety, a high level of flexibility and the fastest action-reaction opportunities in stressful periods of time. “Planning Fore Stress” is going to be done in an analytical, qualitative and quantitative framework, where it will study cases from all over the world.
Technology, organic design, materiality, urban forms, city politics and sustainability will be discussed through different cases at an international scale. From the modern strategies of Copenhagen for living friendly with nature to the traditional approaches of old Indonesian urban planning patterns, from the “Iron Dome” of Israel to the “Tunnels” in Gaza, from the “ultra-high-performance quartz-infused concrete” of Iran to the peaceful and nature-friendly strategies of Switzerland, and from “urban geopolitics” in cities, war and terrorism to the “design of sustainable cities” in the world, all will be studied with references and a detailed analysis of each case in order to propose the most resourceful, practical and realistic solutions to questions on “new city divisions”, “new city planning and social activities” and “new strategic architecture for safe cities”. This study is a developed version of a proposal that was announced as a winner at MoMA in 2013 in a call for ideas for the Rockaways after Hurricane Sandy.

Keywords: urban scale, city safety, natural disaster, war and terrorism, city divisions, architecture for safe cities

Procedia PDF Downloads 484
5399 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration into the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland by using online text extraction and the subsequent mining of this data. Web scraping of the livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found in both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it has produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but the requirement for human validation of said data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
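The bi-gram counting step described above can be sketched with the standard library alone. The forum posts below are invented stand-ins for the scraped data, not excerpts from the actual corpus:

```python
import re
from collections import Counter

def bigrams(text):
    """Extract lowercase word bi-grams from one forum post."""
    words = re.findall(r"[a-z']+", text.lower())
    return list(zip(words, words[1:]))

# Hypothetical posts standing in for the scraped livestock fora:
posts = [
    "What feeding schedule works for pigs? My feeding schedule is twice daily.",
    "Advice on breeding stock and biosecurity before slaughter, please.",
]
counts = Counter(bg for post in posts for bg in bigrams(post))
top = counts.most_common(1)  # frequent word pairs hint at recurring themes
```

Recurring bi-grams such as ("feeding", "schedule") surface the husbandry themes that the topic model then groups into clusters.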

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP

Procedia PDF Downloads 99
5398 Thermal Transport Properties of Common Transition Single Metal Atom Catalysts

Authors: Yuxi Zhu, Zhenqian Chen

Abstract:

It is of great interest to investigate the thermal properties of non-precious metal catalysts for proton exchange membrane fuel cells (PEMFCs) in view of their thermal management requirements. Due to the low symmetry of these materials, accurately obtaining their thermal conductivity requires the second- and third-order force constants, which are obtained by combining density functional theory with a machine-learning interatomic potential. To be specific, the interatomic force constants are obtained from a moment tensor potential (MTP), which is trained on the computational trajectories of ab initio molecular dynamics (AIMD) at 50, 300, 600, and 900 K for 1 ps each, with a time step of 1 fs in the AIMD computation. The thermal conductivity can then be obtained by solving the Boltzmann transport equation. In this paper, the thermal transport properties of single metal atom catalysts are studied, to the best of our knowledge for the first time, using a machine-learning interatomic potential (MLIP). Results show that the single metal atom catalysts exhibit anisotropic thermal conductivities, and some exhibit good thermal conductivity. The average lattice thermal conductivities of G-FeN₄, G-CoN₄ and G-NiN₄ at 300 K are 88.61 W/mK, 205.32 W/mK and 210.57 W/mK, respectively, while the other single metal atom catalysts show low thermal conductivity due to their short phonon lifetimes. The results also show that low-frequency phonons (0-10 THz) dominate the thermal transport properties. The results provide theoretical insights into the application of single metal atom catalysts in thermal management.
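The final step of the workflow, solving the Boltzmann transport equation, reduces in the single-mode relaxation time approximation to a sum over phonon modes. A minimal sketch with invented mode data (not values from this study, which may also use the full iterative solution):

```python
def lattice_thermal_conductivity(modes, volume):
    """kappa_xx (W/m.K) in the single-mode relaxation time approximation:
    kappa = (1/V) * sum over modes of C * v_x**2 * tau,
    where C is the mode heat capacity (J/K), v_x the group velocity
    component (m/s), tau the phonon lifetime (s), V the cell volume (m^3)."""
    return sum(c * v**2 * tau for c, v, tau in modes) / volume

# Invented mode data: long-lived acoustic-like modes dominate, while
# short phonon lifetimes suppress the optical-like contribution.
modes = [
    (1.2e-23, 4000.0, 8e-12),   # acoustic-like: fast and long-lived
    (1.2e-23, 1500.0, 2e-12),   # optical-like: slower, short-lived
]
kappa = lattice_thermal_conductivity(modes, volume=1.0e-28)
```

The lifetime factor tau is what makes short phonon lifetimes translate directly into low lattice thermal conductivity.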

Keywords: proton exchange membrane fuel cell, single metal atom catalysts, density functional theory, thermal conductivity, machine-learning interatomic potential

Procedia PDF Downloads 24
5397 Assessing Brain Targeting Efficiency of Ionisable Lipid Nanoparticles Encapsulating Cas9 mRNA/gGFP Following Different Routes of Administration in Mice

Authors: Meiling Yu, Nadia Rouatbi, Khuloud T. Al-Jamal

Abstract:

Background: Treatment of neurological disorders with modern medical and surgical approaches remains difficult. Gene therapy, allowing the delivery of genetic materials that encode potential therapeutic molecules, represents an attractive option. The treatment of brain diseases with gene therapy requires the gene-editing tool to be delivered efficiently to the central nervous system. In this study, we explored the efficiency of different delivery routes, namely intravenous (i.v.), intra-cranial (i.c.), and intra-nasal (i.n.), to deliver stable nucleic acid-lipid particles (SNALPs) containing gene-editing tools, namely Cas9 mRNA and an sgRNA targeting GFP as a reporter protein. We hypothesise that SNALPs can reach the brain and perform gene-editing to different extents depending on the administration route. Intranasal administration (i.n.) offers an attractive and non-invasive way to access the brain, circumventing the blood–brain barrier. Successful delivery of gene-editing tools to the brain offers a great opportunity for therapeutic target validation and nucleic acid therapeutics delivery to improve treatment options for a range of neurodegenerative diseases. In this study, we utilised Rosa26-Cas9 knock-in mice, expressing GFP, to study the brain distribution and gene-editing efficiency of SNALPs after i.v., i.c. and i.n. routes of administration. Methods: A single guide RNA (sgRNA) against GFP was designed and validated by an in vitro nuclease assay. SNALPs were formulated and characterised using dynamic light scattering. The encapsulation efficiency of nucleic acids (NA) was measured by the RiboGreen™ assay. SNALPs were incubated in serum to assess their ability to protect NA from degradation. Rosa26-Cas9 knock-in mice were administered SNALPs i.v., i.n., or i.c. to test in vivo gene-editing (GFP knockout) efficiency. SNALPs were given as three doses of 0.64 mg/kg sgGFP following i.v. and i.n. administration, or a single dose of 0.25 mg/kg sgGFP following i.c. administration.
Knockout efficiency was assessed after seven days using Sanger sequencing and Inference of CRISPR Edits (ICE) analysis. The in vivo biodistribution of DiR-labelled SNALPs (SNALPs-DiR) was assessed at 24 h post-administration using an IVIS Lumina Series III. Results: The serum-stable SNALPs produced were 130-140 nm in diameter with ~90% nucleic acid loading efficiency. SNALPs could reach and stay in the brain for up to 24 h following i.v., i.n. and i.c. administration. Decreasing GFP expression (around 50% after i.v. and i.c. and 20% following i.n.) was confirmed by optical imaging. Despite the small number of mice used, ICE analysis confirmed GFP knockout in the mice brains. Additional studies are currently taking place to increase mouse numbers. Conclusion: The results confirmed efficient gene knockout achieved by SNALPs in Rosa26-Cas9 knock-in mice expressing GFP following the different routes of administration, in the order i.v. = i.c. > i.n. Each of the administration routes has its pros and cons. The next stages of the project involve assessing gene-editing efficiency in wild-type mice and replacing GFP as a model target with therapeutic target genes implicated in Motor Neuron Disease pathology.
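The RiboGreen™ encapsulation measurement mentioned in the methods is conventionally computed from two fluorescence readings; a minimal sketch with hypothetical readings (not the study's data):

```python
def encapsulation_efficiency(fluor_total, fluor_free):
    """Percent of nucleic acid encapsulated, from RiboGreen fluorescence:
    fluor_total - signal after disrupting the particles (total NA),
    fluor_free  - signal from the intact-particle sample (free NA only)."""
    return 100.0 * (fluor_total - fluor_free) / fluor_total

# Hypothetical fluorescence readings (arbitrary units):
ee = encapsulation_efficiency(1000.0, 100.0)
```

A reading of ~90% here corresponds to the "~90% nucleic acid loading efficiency" figure reported for the SNALPs.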

Keywords: CRISPR, nanoparticles, brain diseases, administration routes

Procedia PDF Downloads 102
5396 Quality of Bali Beef and Broiler after Immersion in Liquid Smoke on Different Concentrations and Storage Times

Authors: E. Abustam, M. Yusuf, H. M. Ali, M. I. Said, F. N. Yuliati

Abstract:

The aim of this study was to improve the durability and quality of Bali beef (M. Longissimus dorsi) and broiler carcass through the addition of liquid smoke as a natural preservative. This study used Longissimus dorsi muscle from male Bali beef aged 3 years, and broiler breast and thigh aged 40 days. The three types of meat were marinated in liquid smoke at concentrations of 0, 5, and 10% for 30 minutes at the level of 20% of the sample weight (w/w). The samples were stored at 2-5°C for 1 month. This study was designed as a 3 x 3 x 4 factorial experiment based on a completely randomized design with 5 replications; the first factor was meat type (beef, chicken breast and chicken thigh), the second factor was liquid smoke concentration (0, 5, and 10%), and the third factor was storage duration (1, 2, 3, and 4 weeks). The parameters measured were TBA value, total bacterial colonies, water holding capacity (WHC), shear force value both before and after cooking (80°C for 15 min.), and cooking loss. The results showed that WHC, shear force value, cooking loss and TBA differed between the three types of meat. The higher the concentration of liquid smoke, the lower the WHC, shear force value, TBA, and total bacterial colonies; at a liquid smoke concentration of 10%, the total bacterial colonies decreased by 57.3% compared with the samples untreated with liquid smoke. The longer the storage, the higher the total bacterial colonies and WHC, while the shear force value and cooking loss decreased. It can be concluded that a 10% concentration of liquid smoke was able to restrain fat oxidation and bacterial growth in Bali beef and chicken breast and thigh.

Keywords: Bali beef, chicken meat, liquid smoke, meat quality

Procedia PDF Downloads 392
5395 The Intonation of Romanian Greetings: A Sociolinguistics Approach

Authors: Anca-Diana Bibiri, Mihaela Mocanu, Adrian Turculeț

Abstract:

In a language, the inventory of greetings is dynamic, with frequent input and output, although this is hardly noticed by speakers. In this register, there are a number of constant, conservative elements that survive different language models (among them, the classic formulae: bună ziua! (good afternoon!), bună seara! (good evening!), noapte bună! (good night!), la revedere! (goodbye!)) and a number of items that fail to pass the test of time, according to language use at a given moment (ciao!, pa!, bai!). The source of innovation depends both on internal factors (contraction, conversion, combination of classic greeting formulae) and on external ones (borrowings and calques). The adoption of some formulae immediately raises their frequency, to the point of eliminating the use of others. This paper presents a sociolinguistic approach to contemporary Romanian greetings, based on prosodic surveys in two research projects: AMPRom and SoRoEs. The Romanian language presents a rich inventory of questions (especially partial interrogative questions/WH-Q) which are used as greetings, alone or, more commonly, accompanying a proper greeting. The most representative of the typical formulae is Ce mai faci? (How are you?), which, unlike its English counterpart How do you do?, has not become a stereotype, but retains an obvious emotional impact while serving as a mark of a sociolinguistic group. The analyzed corpus consists of structures containing greetings recorded in the main Romanian cultural (urban) centers. From the methodological point of view, the acoustic analysis of the recorded data is performed using software tools (GoldWave, Praat), identifying intonation patterns related to three sociolinguistic variables: age, sex and level of education. The intonation patterns of the analyzed statements are at the interface between partial questions and typical greetings.

Keywords: acoustic analysis, greetings, Romanian language, sociolinguistics

Procedia PDF Downloads 337
5394 A Comparative Study: Influences of Polymerization Temperature on Phosphoric Acid Doped Polybenzimidazole Membranes

Authors: Cagla Gul Guldiken, Levent Akyalcin, Hasan Ferdi Gercel

Abstract:

Fuel cells are electrochemical devices which convert the chemical energy of hydrogen into electricity. Among the types of fuel cells, polymer electrolyte membrane fuel cells (PEMFCs) are attracting considerable attention as non-polluting power generators with high energy conversion efficiencies in mobile applications. The polymer electrolyte membrane (PEM) is one of the essential components of PEMFCs. Perfluorosulfonic acid based membranes, known as Nafion®, are widely used as PEMs. Nafion® membranes exhibit water-dependent proton conductivity, which limits the operating temperature to below 100ᵒC. At higher temperatures, the proton conductivity and mechanical stability of these membranes decrease because of dehydration. Polybenzimidazole (PBI), which has good anhydrous proton conductivity after being doped with acids, as well as excellent thermal stability, shows great potential for application in high temperature PEMFCs. In the present study, PBI polymers were synthesized by solution polycondensation at 190 and 210ᵒC. The synthesized polymers were characterized by FTIR, 1H NMR, and TGA. Phosphoric acid doped PBI membranes were prepared and tested in a PEMFC. The influence of reaction temperature on the structural properties of the synthesized polymers was investigated. The mechanical properties, acid-doping level, proton conductivity, and fuel cell performance of the prepared phosphoric acid doped PBI membranes were evaluated. The maximum power density was found to be 32.5 mW/cm² at 120ᵒC.

Keywords: fuel cell, high temperature polymer electrolyte membrane, polybenzimidazole, proton exchange membrane fuel cell

Procedia PDF Downloads 185
5393 Evaluation of Ensemble Classifiers for Intrusion Detection

Authors: M. Govindarajan

Abstract:

One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard datasets of intrusion detection. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to that of other standard homogeneous and heterogeneous ensemble methods: the standard homogeneous ensemble methods include error-correcting output codes (ECOC) and Dagging, while the standard heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models on standard intrusion detection datasets.
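The opening claim, that combining many moderately accurate component classifiers yields a highly accurate ensemble, can be illustrated with the binomial majority-vote argument. This assumes independent, equally accurate voters, which real component classifiers only approximate:

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent classifiers is correct,
    each being correct with probability p (n odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

single = 0.7                                   # moderately accurate base classifier
ensemble = majority_vote_accuracy(single, 11)  # 11-member voting ensemble
```

Under these idealised assumptions, eleven 70%-accurate voters reach roughly 92% majority-vote accuracy, which is the intuition behind bagging and voting schemes like those evaluated here.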

Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy

Procedia PDF Downloads 248
5392 The Relationship between Level of Anxiety and the Development of Children with Growth Hormone Deficiency

Authors: Ewa Mojs, Katarzyna Wiechec, Maia Kubiak, Wlodzimierz Samborski

Abstract:

Interactions between a mother’s psychological condition and her child’s health status are complex and derive from the nature of the mother-child relationship. The aim of the study was to analyze the issue of anxiety amongst mothers of short children in the aspect of growth hormone therapy. The study was based on a group of 101 mothers of originally short-statured children – 70 with growth hormone deficiency (GHD) treated with recombinant human growth hormone (rhGH) and 31 undergoing the diagnostic process, without any treatment. Collected medical data included the child's gender, height and weight, chronological age, bone age delay, and rhGH therapy duration. For all children, the height SDS and BMI SDS were calculated. To evaluate anxiety in mothers, the Spielberger State-Trait Anxiety Inventory (STAI) was used. The obtained results revealed low trait anxiety levels, with no statistically significant differences between the groups. State anxiety levels were average when mothers of all children were analyzed together, but when divided into groups, statistical differences appeared. Mothers of children without diagnosis and treatment had significantly higher levels of state anxiety than mothers of children with GHD receiving appropriate therapy. These results show that the occurrence of growth failure in children is not related to high maternal trait anxiety, but the lack of diagnosis and of appropriate treatment generates higher levels of maternal state anxiety than the process of rhGH therapy in the offspring. Commencement of growth hormone therapy induces a substantial reduction of state anxiety in mothers, and the duration of treatment causes its further decrease.

Keywords: anxiety, development, growth hormone deficiency, motherhood

Procedia PDF Downloads 281
5391 Performance Augmentation of a Combined Cycle Power Plant with Waste Heat Recovery and Solar Energy

Authors: Mohammed A. Elhaj, Jamal S. Yassin

Abstract:

At the present time, energy crises are considered a severe problem across the world. For the protection of the global environment and the maintenance of ecological balance, energy saving is considered one of the most vital issues from the viewpoint of fuel consumption. As industrial sectors everywhere continue efforts to improve their energy efficiency, recovering waste heat losses provides an attractive opportunity for an emission-free and less costly energy resource. On the other hand, the use of solar energy has become more pressing, particularly after the sharp rise in prices and the depletion of conventional energy sources. Therefore, it is essential that we endeavor to recover waste heat as well as to use solar energy by making significant and concrete efforts. For these reasons, this investigation is carried out to study and analyze the performance of a power plant working on a combined cycle in which the Heat Recovery Steam Generator (HRSG) gets its energy from the waste heat of a gas turbine unit. Evaluation of the performance of the plant is based on the thermal efficiencies of the main components, in addition to a second-law analysis considering the exergy destruction of all components. The contribution factors, including solar as well as waste energy, are considered in the calculations. The final results show that there is significant exergy destruction in the solar concentrator and the combustion chamber of the gas turbine unit, while other components, such as the compressor, gas turbine, steam turbine and heat exchangers, have insignificant exergy destruction. Also, solar energy can contribute about 27% of the input energy to the plant, while the energy lost with the exhaust gases can contribute about 64% in the maximum cases.
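The second-law bookkeeping behind statements like "significant exergy destruction in the combustion chamber" is a per-component exergy balance. The figures below are illustrative only, not values from the plant studied:

```python
def exergy_destruction(x_in, x_out, work_out=0.0):
    """Steady-state exergy balance of one component (all in kW):
    destruction = exergy entering - exergy leaving in streams - work delivered."""
    return x_in - x_out - work_out

def second_law_efficiency(x_in, x_out, work_out=0.0):
    """Fraction of incoming exergy recovered as useful output."""
    return (x_out + work_out) / x_in

# Illustrative component figures (kW):
x_dest_cc = exergy_destruction(5000.0, 3200.0)            # combustion chamber
eta_ii_gt = second_law_efficiency(3200.0, 2100.0, 900.0)  # gas turbine
```

Components with a high second-law efficiency (here the gas turbine) are the ones the analysis classes as having insignificant exergy destruction.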

Keywords: solar energy, environment, efficiency, waste heat, steam generator, performance, exergy destruction

Procedia PDF Downloads 298
5390 Low Temperature Biological Treatment of Chemical Oxygen Demand for Agricultural Water Reuse Application Using Robust Biocatalysts

Authors: Vedansh Gupta, Allyson Lutz, Ameen Razavi, Fatemeh Shirazi

Abstract:

The agriculture industry is especially vulnerable to forecasted water shortages. In the fresh and fresh-cut produce sector, conventional flume-based washing with recirculation exhibits high water demand. This leads to a large water footprint and possible cross-contamination of pathogens. These can be alleviated through advanced water reuse processes, such as membrane technologies including reverse osmosis (RO). Water reuse technologies effectively remove dissolved constituents but can easily foul without pre-treatment. Biological treatment is effective for the removal of organic compounds responsible for fouling, but not at the low temperatures encountered at most produce processing facilities. This study showed that the Microvi MicroNiche Engineering (MNE) technology effectively removes organic compounds (> 80%) at low temperatures (6-8 °C) from wash water. The MNE technology uses synthetic microorganism-material composites with negligible solids production, making it advantageously situated as an effective bio-pretreatment for RO. A preliminary technoeconomic analysis showed 60-80% savings in operation and maintenance costs (OPEX) when using the Microvi MNE technology for organics removal. This study and the accompanying economic analysis indicated that the proposed technology process will substantially reduce the cost barrier for adopting water reuse practices, thereby contributing to increased food safety and furthering sustainable water reuse processes across the agricultural industry.

Keywords: biological pre-treatment, innovative technology, vegetable processing, water reuse, agriculture, reverse osmosis, MNE biocatalysts

Procedia PDF Downloads 129
5389 Political Regimes, Political Stability and Debt Dependence in African Countries of Franc Zone: A Logistic Modeling

Authors: Nounamo Nguedie Yann Harold

Abstract:

The factors behind debt have been the subject of several studies in the literature. Pioneering studies based on the 'double deficit' approach linked indebtedness to the imbalance between savings and investment, the budget deficit and the current account deficit. Most studies on identifying factors that may stimulate or reduce the level of external public debt agree that the following are important explanatory variables: the budget deficit, trade openness, the current account and exchange rate, imports, exports, the interest rate, terms-of-trade variation, the economic growth rate, debt service, capital flight, and over-indebtedness. Few studies have addressed the impact of political factors on the level of external debt. In general, however, the IMF's stabilization programs in developing countries following the debt crisis resulted in economic recession and the advent of political crises that brought changes in governments. In this sense, political institutions are recognised as factors in the accumulation of external debt in most developing countries. This paper assesses the role of political factors in the external debt level of African countries in the Franc Zone over the period 1985-2016. The data used come from the World Bank and the ICRG. Using a panel logit model, the results show that the more politically stable a country is, the lower its external debt relative to gross domestic product. Political stability multiplies the chances of being in the sustainable debt zone by 1.18. For example, countries with good political institutions experience less severe external debt burdens than countries with bad political institutions.
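A figure such as "political stability multiplies the chances of sustainable debt by 1.18" is the odds-ratio reading of a logit coefficient. In the sketch below the coefficient is back-solved for illustration and is not the paper's actual estimate:

```python
from math import exp

def odds_ratio(beta):
    """In a logit model, a one-unit increase in a regressor multiplies the
    odds of the outcome (here: being in the sustainable-debt zone) by exp(beta)."""
    return exp(beta)

def predicted_probability(intercept, beta, x):
    """Logistic response: P(sustainable debt | x)."""
    return 1.0 / (1.0 + exp(-(intercept + beta * x)))

beta_stability = 0.1655  # hypothetical coefficient; exp(0.1655) is about 1.18
or_stability = odds_ratio(beta_stability)
p_baseline = predicted_probability(0.0, beta_stability, 0.0)
```

The odds ratio, not the raw coefficient, is what makes panel logit results directly comparable across studies.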

Keywords: African countries, external debt, Franc Zone, political factors

Procedia PDF Downloads 219
5388 Isolation and Elimination of Latent and Productive Herpes Simplex Virus from the Sacral and Trigeminal Ganglions

Authors: Bernard L. Middleton, Susan P. Cosgrove

Abstract:

There is an immediate need for alternative anti-herpetic treatment options effective against both primary infections and recurring reactivations of herpes simplex virus types 1 (HSV-1) and 2 (HSV-2). The alternatives currently approved for clinical administration include antivirals and a reduced set of nucleoside analogues. The present article tests a treatment based on a systemic understanding of how the herpes virus affects cell inhibition and breakdown, targeting different phases of the viral cycle, including the entry stage, reproductive cross mutation, and cell-to-cell infection. The treatment consisted of five immunotherapeutic core compounds (5CC), which were hypothesized to be capable of neutralizing human monoclonal antibodies. The tested 5CC were found to be functional in eliminating herpes viral DNA synthesis and in supporting the interferon (IFN)-induced cellular antiviral response. They were here found to neutralize viral reproduction by blocking cell-to-cell infection. The activity of the 5CC was tested on RC-37 cells in vitro using a plaque reduction assay and in vivo against HSV-1 and HSV-2. The 50% inhibitory concentration (IC50) of the 5CC was 0.0009% for HSV-1 plaque formation and 0.0008% for HSV-2 plaque formation. Further tests were performed to evaluate the susceptibility of HSV-1 and HSV-2 to anti-herpetic drugs in Vero cells after virus entry. There were high-level markers of the 5CC's virucidal activity in the viral suspensions of HSV-1 and HSV-2. These concentrations of the 5CC are nontoxic and reduced plaque formation by 98.2% for HSV-1 and 93.0% for HSV-2. HSV-1 and HSV-2 virus titers were reduced significantly by the 5CC, to the point of being negative in 72% of cases, ranging from 0.01 to 0.09. The results support the 5CC as an effective treatment option for the herpes simplex virus.
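A 50% inhibitory concentration of the kind reported here is conventionally read off a plaque-reduction dose-response series. The sketch below interpolates it linearly from hypothetical plaque counts, not from the study's raw data:

```python
def percent_inhibition(plaques_treated, plaques_control):
    """Plaque reduction relative to an untreated control, in percent."""
    return 100.0 * (1.0 - plaques_treated / plaques_control)

def ic50_interpolated(concentrations, inhibitions):
    """Concentration giving 50% inhibition, linearly interpolated between
    the two bracketing points of an ascending dose-response series."""
    pairs = list(zip(concentrations, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) / (i2 - i1) * (c2 - c1)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical dose-response points (concentration in %, counts per well):
concs = [0.0004, 0.0008, 0.0016]
inhib = [percent_inhibition(80, 100),   # 20% inhibition
         percent_inhibition(55, 100),   # 45% inhibition
         percent_inhibition(30, 100)]   # 70% inhibition
ic50 = ic50_interpolated(concs, inhib)
```

With richer data a sigmoidal (four-parameter logistic) fit is the usual refinement over this linear bracket.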

Keywords: synergy pharmaceuticals, herpes treatment, herpes cure, synergy pharmaceuticals treatment

Procedia PDF Downloads 241
5387 Contractors Perspective on Causes of Delays in Power Transmission Projects

Authors: Goutom K. Pall

Abstract:

At the very heart of the power system, power transmission (PT) acts as an essential link between power generation and distribution. Timely completion of PT infrastructures is therefore crucial to support the development of the power system as a whole. Yet despite this importance, studies on PT infrastructure development projects are embryonic, and PT projects undergo widespread delays worldwide. These delay factors are idiosyncratic, and identifying the critical delay factors is essential if PT industry professionals are to complete their projects efficiently and within the expected timeframes. This study identifies and categorizes 46 causes of PT project delay under ten major groups, drawing on the recommendations of six sector experts gathered through a preliminary questionnaire survey. Based on the experts' strong recommendations, two new groups are introduced in the final questionnaire survey: sector-specific factors (SSF) and general factors (GF). SSF pertain to delay factors applicable only to PT projects, while GF represent less biased factors whose responsibility is shared by all parties involved in a project. The study then uses 112 data samples from contractors to rank the delay factors using the relative importance index (RII). The results reveal that SSF, GF and external factors are the most critical groups, while the highest ranked delay factors include right of way (RoW) problems of transmission lines (TL), delay in payments, frequent changes in TL routes, poor communication and coordination among the project parties, and accessibility to TL tower locations. Finally, recommendations are made to minimize the identified delays. The findings are expected to be of substantial benefit to professionals in minimizing time overruns in PT project implementation, as well as in power generation, power distribution, and non-power linear construction projects worldwide.
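The relative importance index used for the ranking is conventionally computed as RII = ΣW / (A × N). A minimal sketch with invented ratings (the study's actual 112 contractor responses are not reproduced here):

```python
def relative_importance_index(ratings, scale_top):
    """RII = sum(W) / (A * N): W are the respondents' ratings for one delay
    factor, A the top of the rating scale, N the number of respondents."""
    return sum(ratings) / (scale_top * len(ratings))

# Hypothetical ratings from 8 contractors on a 1-5 scale for one factor,
# e.g. right-of-way (RoW) problems of transmission lines:
ratings = [5, 4, 5, 5, 3, 4, 5, 4]
rii = relative_importance_index(ratings, 5)
```

Computing the RII for every one of the 46 factors and sorting descending yields the ranking reported in the results.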

Keywords: delay, project delay, power transmission projects, time-overruns

Procedia PDF Downloads 178