Saturday, May 2, 2015

REESTRUTURAÇÃO DAS ATIVIDADES LOGÍSTICAS DE UM DEPÓSITO POR MEIO DA IMPLANTAÇÃO DE FERRAMENTAS DE TI

"In manufacturing companies, given the current scenario of global competition, there is commonly a need to adapt production processes and operations in order to increase efficiency, reduce costs, and improve decision making.

In this context, information technology (IT) emerges as an essential mechanism for the development of companies."


In a paper published in 2013, I tried to propose a flexible company. Flexibility is the answer in any biological system that has gone through evolution. There is a dilemma known in neuroscience as the "stability-plasticity dilemma": a system must keep learning, but not forget what it already knows. Companies in general tend to be stable, but not flexible. IT can be compared to the human body's endocrine system, in which messages are sent to and from the whole body; diseases such as obesity and diabetes arise from the loss of this communication. This article is of great value, and it would be interesting to combine it with the article from the same issue: FÁTIMA, A.; NASTASI JUNIOR, E.; LIMA JUNIOR, F. UMA FERRAMENTA PARA AVALIAÇÃO DO NÍVEL DE MATURIDADE DA GESTÃO DO CONHECIMENTO ORGANIZACIONAL. Gestão e Saúde, 1, fev. 2015. Disponível em: <http://gestaoesaude.bce.unb.br/index.php/gestaoesaude/article/view/1265>. Acesso em: 01 mai. 2015.


MATIAS, L.; LOURENCINI, R.; LIMA JUNIOR, F. REESTRUTURAÇÃO DAS ATIVIDADES LOGÍSTICAS DE UM DEPÓSITO POR MEIO DA IMPLANTAÇÃO DE FERRAMENTAS DE TI. Gestão e Saúde, 1, fev. 2015. Disponível em: <http://gestaoesaude.bce.unb.br/index.php/gestaoesaude/article/view/1263>. Acesso em: 02 mai. 2015.

Friday, May 1, 2015

UMA FERRAMENTA PARA AVALIAÇÃO DO NÍVEL DE MATURIDADE DA GESTÃO DO CONHECIMENTO ORGANIZACIONAL

The paper essentially proposes a methodology for knowledge management, together with a case study. In general, knowledge is not managed because it is considered easy to access, especially in the internet generation. However, companies that manage to learn from their own practice certainly have a high chance of standing out. Knowledge is power, and power must be managed. In an issue of Scientific American, some countries, such as Brazil and Mexico, were singled out as problematic regarding university-industry collaboration; knowledge management, if taken seriously, is a good starting point. The article is of great value and should be considered in research in general, not only in companies.

FÁTIMA, A.; NASTASI JUNIOR, E.; LIMA JUNIOR, F. UMA FERRAMENTA PARA AVALIAÇÃO DO NÍVEL DE MATURIDADE DA GESTÃO DO CONHECIMENTO ORGANIZACIONAL. Gestão e Saúde, 1, fev. 2015. Disponível em: <http://gestaoesaude.bce.unb.br/index.php/gestaoesaude/article/view/1265>. Acesso em: 01 mai. 2015.



Thursday, April 16, 2015

Set points, Settling points, and Bodyweight Regulation

Reviewed page: http://www.bodyrecomposition.com/fat-loss/set-points-settling-points-and-bodyweight-regulation-part-1.html/


Obesity is becoming a global problem; whether it is really new, or an old problem that was simply not taken seriously before, is open to debate. Below one can see a map of obesity. Some claim that it is a problem of "developed" countries; however, some countries rich in natural resources will soon face it as well, such as Brazil, predicted to reach the 5th position by 2025.

Obesity map; the intensity goes from light yellow to red. Source: Wikipedia, 2015.
It is not uncommon for people to spend their lives on diets, and in some cases exercise programs, trying to lose weight. It is also not uncommon for them to fail. Scientifically, it is a challenge. Two theories mainly try to give a glimpse of the "whys": the set point theory and the settling point theory.

The set point theory has been around longer. Basically, it says that the body tries to counteract dietary changes by eating more or less, minimizing the difference between the current body weight and a predefined value, which some assume can vary, with more of a tendency to increase than to decrease.

One way to study hypotheses is through models, e.g., computational-mathematical simulations. Below is a computer simulation after implementing the set-point hypothesis. Note that food intake increases with less caloric food and diminishes with highly caloric food. The same can be done for the settling point hypothesis. Mainly, it states that our body is a dynamical system: body weight is the byproduct of the body and of eating and living habits (the environment). Both theories have defenders and deniers.

Food intake in a set point hypothesis, by mathematical modelling.
Source: own codes and simulations

Body weight in a set point hypothesis, by mathematical modelling.
Source: own codes and simulations

Body weight in a settling point hypothesis, by mathematical modelling.
Source: own codes and simulations
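The set-point feedback described above can be sketched in a few lines. This is an illustrative reconstruction, not the original code behind the figures; the parameters (7700 kcal per kg of tissue, 30 kcal/kg/day maintenance, and the correction gain) are assumptions.

```python
# Set-point sketch (hypothetical parameters): daily weight change follows the
# surplus or deficit of calories, and intake is corrected toward a fixed
# set-point weight, so deviations are gradually pulled back.

def simulate_set_point(w0, w_set, days, rho=7700.0, maint_per_kg=30.0, gain=50.0):
    """Return the daily body-weight trajectory (kg) under a set-point rule."""
    w = w0
    weights = [w]
    for _ in range(days):
        maintenance = maint_per_kg * w             # kcal/day to hold weight
        intake = maintenance + gain * (w_set - w)  # eat more when below set point
        w += (intake - maintenance) / rho          # ~7700 kcal per kg of tissue
        weights.append(w)
    return weights

# Starting 10 kg above an assumed 70 kg set point, weight drifts back down.
traj = simulate_set_point(w0=80.0, w_set=70.0, days=1000)
```

The loop is a discrete proportional controller: the further the weight is from the set point, the larger the intake correction, which is what produces the recovery seen in the plots.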





Sunday, April 12, 2015

Mathematical Modeling of Leptin Dynamics and Some Insights into Biomathematics (paper review)

Source: Vander et al.: Human Physiology: The Mechanisms of Body Function, Eighth Edition.





The paper develops a physiologically based mathematical model for leptin. Leptin is a master hormone that controls hunger and feelings of satiety; it is also a key hormone controlling metabolism through energy expenditure. Leptin is secreted mainly by white adipose (fat) tissue, so the more overweight a person is, typically, the higher his or her leptin levels. Leptin does not change only with fat mass; other factors can influence it as well.

In the following scheme we can see the interplay between adipocytes, the brain (mainly the hypothalamus), and the central nervous system. In essence, the larger the fat tissue mass, the more leptin is produced and the more the central nervous system is activated, signaling that we have energy on board. This loop might seem unstable, but it can reach equilibrium from two perspectives: when eating balances energy expenditure, or when other hormones, such as insulin or ghrelin, interfere.

The hormonal control system created by leptin.
Source: Youtube, 2014. 


Energy homeostasis, or energy balance, is an aspect of bioenergetics concerning the energy flow through living systems. In human terms, metabolic regulation covers the means by which we take in nutrients in discrete meals and deliver energy as required, varying from moment to moment and from tissue to tissue, in a pattern that may bear no relationship at all to the pattern of intake (Frayn, 2010). Energy homeostasis involves the human body using chemical and neural signals to adjust energy flows, and to regulate caloric intake by signaling the brain to control the sensation of hunger.
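The intake/expenditure balance just described can be sketched as a minimal settling-point-style model. The numbers (30 kcal/kg/day maintenance, 7700 kcal per kg of tissue) are illustrative assumptions, not values from the paper: weight simply drifts until expenditure matches intake, with no explicit set point.

```python
# Settling-point sketch: weight settles where daily intake equals daily
# expenditure (assumed proportional to weight). All numbers are illustrative.

def settle_weight(intake_kcal, maint_per_kg=30.0, w0=70.0, days=3000, rho=7700.0):
    """Iterate daily energy balance; returns the final body weight in kg."""
    w = w0
    for _ in range(days):
        w += (intake_kcal - maint_per_kg * w) / rho  # surplus/deficit -> tissue
    return w

# A fixed 2400 kcal/day intake drifts weight toward 2400 / 30 = 80 kg.
w_final = settle_weight(2400.0)
```

Note that the equilibrium weight is determined entirely by the environment (the intake) and the body's expenditure rate, which is the core of the settling-point view.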


"This controversy underscores the fact that, despite the impressive progress made over the past few decades in unraveling many of the molecular pathways involved in energy regulation, we still have a rather murky understanding of how all the pieces fit together to function as an integrated system. Most previous mathematical models of metabolic energy regulation have not explicitly modeled the neuroendocrine feedback system that maintains energy homeostasis. In order to address this deficiency, we have developed a mathematical model that simulates the physiological system that regulates energy metabolism."





Simulation of a leptin-deficient mouse against a normal one.
Source: own work.
Simulation of food control.
Source: own work.

Paper reviewed:

Tam, J.; Fukumura, D.;  Jain, R. K.; A mathematical model of murine metabolic regulation by leptin: energy balance and defense of a stable body weight. Cell Metab. 2009 January 7; 9(1): 52–63. doi:10.1016/j.cmet.2008.11.005.

Further


Uluseker, C.; Mathematical Model for Leptin Dynamics. Master of Science Thesis, MathMods Erasmus Mundus M.Sc. Programme, Mathematical Models in Life and Social Sciences. Department of Information Engineering, Computer Science and Mathematics. University of L'Aquila: Italy: 2014. 
Carla Eduarda Machado Romero; Angelina Zanesco. O papel dos hormônios leptina e grelina na gênese da obesidade. Rev. Nutr. vol.19 no.1 Campinas Jan./Feb. 2006. 

References mentioned

Keith N. Frayn. Metabolic Regulation: A Human Perspective. Third Edition. Wiley-Blackwell, 2010.

Saturday, November 15, 2014

A first course in computational physics

==========
Devries, Paul L. ; Hasbun, Javier E. A first course in computational physics. Second edition. Jones and Bartlett Publishers: 2011.
============

Computers have changed the way physics is done, but those changes are only slowly making their way into the typical curriculum. This book is designed to help speed that transition. Computational physics is now widely accepted as a third, equally valid complement to traditional experimental and theoretical physics.
The simple truth is that the computer now permeates our society and has changed the way we think about many things, including science in general and physics in particular. It used to be that there was theoretical physics, which dealt with developing and applying theories, often with an emphasis on mathematics and "rigor." There was experimental physics, which was also concerned with theories and testing their validity in the laboratory, but was primarily concerned with making observations and quantitative measurements. Now there is computational physics, in which numerical experiments are performed in the computer laboratory: an interesting marriage of the traditional approaches to physics. Canned programs rarely exist for novel, interesting physics, and so we have to write them ourselves. The capability of visualizing a numerical solution as it is being generated is a tremendous tool for understanding both the solution and the numerical methods. The integration of such graphics capabilities into the computing environment is one of the many strengths of Matlab.
Matlab®
There is a large variety of computer languages out there that could be (and are) used in computational physics: BASIC, C, FORTRAN, and Java; each has its ardent supporters. These languages have supporters because they all have particular merits, which should not be lightly dismissed. What sets Matlab apart is its widespread use and acceptance in the mathematics and particularly the engineering curriculum. As far as computer programming is concerned, this text is primarily concerned with how to perform scientific calculations, not with how to code a particular statement.
Functions and Roots
A natural place for us to begin our discussion of computational physics is with a discussion of functions. After all, the formal theory of functions underlies virtually all scientific theory, and their use is fundamental to any practical method of solving problems. We will discuss some general properties, but always with an eye toward what is computationally applicable. In particular, we will discuss the problem of finding the roots of a function in one dimension. It is worth mentioning that Matlab has several built-in functions for determining the roots of functions.
Finding the roots of a function
Closed-form solutions for the roots exist for quadratic, cubic, and quartic equations as well, although they become rather cumbersome to use. But no general solution exists for polynomials of fifth order and higher, and many equations have no analytical solution at all. So what we are really seeking is a method for finding the root of a nonlinear equation: a reliable numerical method that provides accurate results with a minimum of human intervention.
The bisection method is the simplest root-finding scheme. You just must:
  1. Find an interval: the root is expected to lie inside it;
  2. Halve the interval: the interval is divided into two parts;
  3. Choose the good half: one of the halves must contain the root;
  4. Reset the extremes: if the root is in the right half, the middle becomes the new left extreme; otherwise the middle becomes the new right extreme;
  5. Start again: repeat the process from step 2 with the new interval.
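The five steps above can be sketched in a few lines (Python here rather than the book's Matlab; the test function x*x - 2 is just an illustration):

```python
# Bisection sketch following the steps above: bracket, halve, keep the half
# that still contains the root, repeat until the interval is tiny.

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        m = 0.5 * (a + b)            # step 2: halve the interval
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0:              # steps 3-4: root is in the left half
            b, fb = m, fm
        else:                        # otherwise it is in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)   # approximates sqrt(2)
```

Each iteration halves the bracket, so the error shrinks by a guaranteed factor of two per step regardless of how badly behaved the function is inside the interval.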
It is extremely important to use functions to break the code into manageable pieces. This modularity aids immensely in the clarity and readability of the code - the main program simply becomes a list of calls to the various functions, so that the overall logic of the program is nearly transparent.
Recursive algorithm
Matlab allows a function to call other functions. This permits us to break a large programming task into a sequence of smaller and smaller ones, until we reach a point where each is easy to solve. Furthermore, a function can call itself. There are problems that naturally lend themselves to a recursive solution, and root finding is one of them.
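The same root-finding idea expressed recursively, as the text suggests (again a Python sketch, not the book's code):

```python
# Recursive bisection sketch: the function calls itself on whichever half
# of the interval still brackets the root.

def bisect_rec(f, a, b, tol=1e-10):
    m = 0.5 * (a + b)
    if (b - a) < tol:
        return m
    if f(a) * f(m) <= 0:             # root is in the left half
        return bisect_rec(f, a, m, tol)
    return bisect_rec(f, m, b, tol)  # root is in the right half

root = bisect_rec(lambda x: x**3 - x - 2.0, 1.0, 2.0)
```

The recursion makes the structure of the algorithm almost a direct transcription of its definition, at the cost of one stack frame per halving.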
Taylor’s Series
In 1715 Brook Taylor, secretary of the Royal Society, published Methodus incrementorum directa et inversa, in which appears one of the most useful expressions in much of mathematics, and certainly in numerical analysis. Although previously known to the Scottish mathematician James Gregory, and probably to Jean Bernoulli as well, we know it as the Taylor series.
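As a small numerical illustration of the series, here is a sketch that sums the Taylor expansion of e^x term by term (the choice of 20 terms is arbitrary; each term is built from the previous one rather than recomputing factorials):

```python
# Taylor-series sketch: partial sums of e^x = sum over n of x^n / n!.

def exp_taylor(x, n_terms=20):
    """Approximate e^x by the first n_terms of its Taylor series about 0."""
    total, term = 0.0, 1.0           # term starts as x^0 / 0! = 1
    for n in range(n_terms):
        total += term
        term *= x / (n + 1)          # x^(n+1)/(n+1)! from x^n/n!
    return total

approx = exp_taylor(1.0)             # should be very close to e
```

For x = 1 the truncation error after 20 terms is on the order of 1/20!, far below double-precision round-off, which is why truncated Taylor series are a workhorse of numerical analysis.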
Interpolation and approximation
In some cases of root finding, an approximation to a function is useful for finding roots, even when we have the exact function at our disposal. A more common circumstance is that we do not know the exact function, but build our knowledge of it as we acquire more information about it. Several ways to approximate a function and its derivatives exist. With interpolation, an approximating polynomial is found that exactly matches the function being approximated at a set of specified points. Curve fitting approximates a function in a general sense, without being constrained to agree with the function at every point.
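The distinction can be made concrete with a small sketch (pure Python): a Lagrange interpolating polynomial agrees with the sampled function exactly at the chosen nodes, which is precisely what curve fitting does not promise.

```python
# Interpolation sketch: evaluate the Lagrange polynomial through the points
# (xs[i], ys[i]) at an arbitrary x. It matches ys exactly at every node.

def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        weight = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                weight *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += yi * weight
    return total

xs = [0.0, 1.0, 2.0, 3.0]
ys = [v**2 for v in xs]              # sample f(x) = x^2 at four nodes
mid = lagrange(xs, ys, 1.5)          # exact for polynomials of degree <= 3
```

Because four nodes determine a cubic, and x^2 is itself a polynomial of degree below four, the interpolant reproduces the function everywhere, not only at the nodes.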



Clinical pharmacokinetics and pharmacodynamics: concepts and applications


=====
Malcolm Rowland, Thomas N Tozer, Clinical pharmacokinetics and pharmacodynamics: concepts and applications. Fourth Edition. Lippincott and Wilkins: 2011.
=====

Pharmacokinetics and pharmacodynamics are cornerstones in the industrial design, selection, and development of new drugs.
Understanding why individuals vary in their response to drugs is central to personalizing drug therapy.

Basic considerations

Therapeutic Relevance

Those patients who suffer from chronic ailments such as diabetes or epilepsy may have to take drugs every day for the rest of their lives. At the other extreme are those who take a single dose of a drug to relieve an occasional headache. The duration of drug therapy is usually between these extremes. The manner in which a drug is taken is called a dosage regimen.
What determines the therapeutic dose of a drug and its manner and frequency of administration, as well as events experienced over time by patients on taking the recommended dosage regimens, constitutes the body of the textbook.
Input-response phases   

Progress has only been forthcoming by realizing that concentrations at active sites, rather than the dose administered, drive responses, and that to achieve and maintain a response it is necessary to generate the appropriate exposure-time profile of the drug within the body, which in turn requires an understanding of the factors controlling this exposure profile.

The issue of time delays between drug administration and response is not confined to pharmacokinetics but extends to pharmacodynamics too. Part of this delay is a result of the time required for the drug to distribute to the target site, which is often in a cell within an organ or tissue, such as the brain. Part is also a result of delays within the affected system within the body.
  
Side-effects. A common and clinically significant toxicity of many anticancer drugs is leukopenia, an abnormal fall in the number of leukocytes in blood.

The lesson is clear: understanding the specific concentration-response-time relationships helps in the management and optimal use of drugs.
   
Variability in drug response

If we were all alike, there would only be one dose strength and regimen of a drug needed for the entire patient population.

 Fundamental concepts and terminology
This chapter introduces input-exposure (pharmacokinetic) and exposure-response (pharmacodynamic) relationships and defines the terms commonly used in these areas.
Applications of pharmacokinetics and pharmacodynamics in drug therapy:
  1. To relate temporal patterns of response to drug administration following acute and chronic dosing;
  2. To help provide a rational basis for drug design, drug selection, and dosage regimen design;
  3. To provide a means for rationally initiating and individualizing drug administration in patients.
A distinction must be made between those drugs that act locally and those that act systemically. Locally acting drugs are administered at the local site where they are needed, such as eye drops, nasal sprays, intravaginal creams, and topical preparations for treating skin diseases. Emphasis is given to those drugs that act within the blood or that must be delivered to the site of action by the circulatory system; we say such drugs act systemically.

Kinetics following an intravenous bolus dose

Administering a drug intravenously ensures that the entire dose enters the systemic circulation. By rapid injection, elevated concentrations of drug can be promptly achieved; by continuous infusion at a controlled rate, a constant concentration, and often response, can be maintained. With no other route of administration can plasma concentration be as promptly and efficiently controlled. Of the two intravascular routes, the i.v. one is the most frequently employed. Intra-arterial administration, which has greater inherent manipulative dangers, is reserved for situations in which drug localization in a specific organ or tissue is desired. It is achieved by inputting drug into the artery directly supplying the target tissue.
Appreciation of kinetics concepts  
Why do we get a linear decline when plotting the data on a semilogarithmic scale, and what determines the large difference seen in the profiles for the various drugs?
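A short sketch of why the semilog decline is linear: after an i.v. bolus, C(t) = (dose/V) * exp(-k*t) with k = CL/V, so ln C falls by the same amount in every equal time step. The dose, volume of distribution V, and clearance CL below are illustrative values, not numbers from the book.

```python
# One-compartment i.v. bolus sketch: concentration decays exponentially,
# so its logarithm declines linearly in time (straight semilog plot).

import math

def concentration(t, dose=100.0, v=50.0, cl=5.0):
    """Plasma concentration at time t (h) for an i.v. bolus (illustrative)."""
    k = cl / v                        # elimination rate constant, here 0.1 /h
    return (dose / v) * math.exp(-k * t)

# Equal time steps give equal drops in ln C: the slope is -k everywhere.
logs = [math.log(concentration(t)) for t in (0.0, 1.0, 2.0, 3.0)]
slopes = [b - a for a, b in zip(logs, logs[1:])]
```

The large differences between drugs seen in such profiles come down to how their individual CL and V combine into k: a high-clearance, low-volume drug falls steeply, while a low-clearance, high-volume drug declines slowly.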
Volume of distribution and clearance 


Schematic diagram of a perfused organ system. Drug is placed into a well-stirred reservoir of volume V, from which fluid perfuses an extractor at flow rate Q. The rate of extraction can be expressed as a fraction E of the rate of presentation, Q·C. The rate at which escaping drug returns to the reservoir is Q·Cout. For modeling purposes, the amount of drug in the extractor is negligible compared to the amount contained in the reservoir.

Clearance is the volume of fluid perfusing the eliminating organ (the extractor) that is effectively, completely cleared of drug per unit of time.
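The reservoir/extractor relationships above reduce to two lines of arithmetic: clearance is CL = Q·E, and the elimination rate is CL·C (equivalently E·Q·C). The flow, extraction ratio, and concentration below are illustrative numbers only.

```python
# Clearance sketch for the perfused-organ diagram: of the flow Q presenting
# drug at concentration C, a fraction E is removed by the extractor.

def clearance(q, e):
    """Volume of perfusing fluid completely cleared of drug per unit time."""
    return q * e

def elimination_rate(q, e, c):
    """Amount of drug removed per unit time: CL * C, i.e. E * Q * C."""
    return clearance(q, e) * c

cl = clearance(q=1.5, e=0.4)              # e.g. L/min completely cleared
rate = elimination_rate(1.5, 0.4, c=2.0)  # e.g. mg/min removed
```

Note that clearance depends only on flow and extraction, not on concentration; concentration enters only when converting clearance into an elimination rate.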

Membranes and distribution
So far, emphasis has been placed on general input-exposure relationships after a single intravenous (i.v.) bolus. Now we focus on the role and function of membranes, primarily in the context of the determinants of drug distribution; the principles apply as well to drug elimination and absorption. Drugs must also pass through membranes to reach the site of action. This chapter also explores the process of distribution itself and its role in clinical pharmacokinetics from a physiologic point of view.
Membranes
Movement through membranes is known as drug transport.

Elimination

This chapter is concerned with the elimination processes, and particularly with the concept of clearance. In the chapter on kinetics following an intravenous bolus dose, the method of quantifying clearance following a single i.v. bolus dose was presented. Here its physiologic meaning is given.


sábado, 8 de novembro de 2014

Dimerization and receptor-receptor interactions (papers)


Dimerization and ligand binding

Fanelli et al (2011).

Allosteric proteins
In biochemistry, allosteric regulation is the regulation of an enzyme or other protein by binding an effector molecule at the protein's allosteric site (i.e., a site other than the protein's active site).


Effectors that enhance the protein's activity are referred to as allosteric activators, whereas those that decrease the protein's activity are called allosteric inhibitors.

Source: Google Images, 2014.


G protein-coupled receptors (GPCRs), also known as seven-transmembrane domain receptors constitute a large protein family of receptors that sense molecules outside the cell and activate inside signal transduction pathways and, ultimately, cellular responses.

In humans, there are four types of adenosine receptors. Each is encoded by a separate gene and has different functions, although with some overlap. Both A1 receptors and A2A play roles in the heart, regulating myocardial oxygen consumption and coronary blood flow, while the A2A receptor also has broader anti-inflammatory effects throughout the body.

A2AR dimerization affects the communication networks intrinsic to the receptor fold in a way dependent on the dimer architecture.

G-protein-coupled receptors (GPCRs) are allosteric proteins whose functioning is founded on communication between the two poles of the helix bundle.

The representation of GPCR structures as Networks of Interacting Amino Acids (NIAA) can be a meaningful way to decipher the impact of ligand binding and of dimerization/oligomerization on the molecular communication intrinsic to the protein fold.

Dimerization and ligand binding affect the structure network of A2A adenosine receptor

It is known that dimerization is a byproduct of evolution rather than an instantaneous phenomenon. Ligand binding, on the other hand, despite being well shaped by evolution itself, is a phenomenon that happens in the moment: it does not exist unless the ligand is within reach. In essence, we have two phenomena on different timescales: seconds and decades.

Bioinformatics and mathematical modelling

Guidolin et al (2011).

"A fundamental consequence of the view describing GPCRs as interacting structures, with the likely formation at the plasma membrane of receptor aggregates of multiple receptors (Receptor Mosaics) is that it is no longer possible to describe signal transduction simply as the result of the binding of the chemical signal to its receptor, but rather as the result of a filtering/integration of chemical signals by the Receptor Mosaics (RMs) and membrane-associated proteins.“

"....integrative functions emerging from the complex behaviour of these RMs"

"The concept of intra-membrane receptor–receptor interactions (RRIs) between different types of GPCRs and evidence for their existence was introduced by Agnati and Fuxe in 1980/81 through analysis of the effects of neuropeptides on the binding characteristics of monoamine receptors in membrane preparations from discrete brain regions"

"the hypothesis of high-order GPCR oligomers implies the existence of specific interaction interfaces allowing the assembly of macromolecular complexes."

"Experimental research, significant efforts were spent in bioinformatics to provide suggestions on the protein regions potentially playing a role in dimerization/oligomerization".

A first important consequence of such an arrangement is that the decoding process becomes a branched process (bifurcations) already at the receptor level in the plasma membrane allowing the different activation of some of the possible intracellular molecular pathways.

A second theoretical consequence is that some engrams can be stored.

Allosteric perturbations involve a shift of a population of pre-existing conformations.

Allostery can occur without a change in shape but purely in dynamics.



All of these analyses only provide suggestions that should be confirmed by experimental data since these studies, in general, do not consider several variables such as the micro-environment where the GPCRs are localized.

G-protein-coupled receptor dynamics: dimerization and activation models compared with experiment

 Taddese et al (2012).

 GPCRs (G-protein-coupled receptors) are dynamic structures, as shown by their ability to dimerize, domain swap, oligomerize, activate G-proteins, and signal via arrestin.

Molecular dynamics is well suited to studying protein dynamics.

This provides evidence that bivalent ligands may indeed interact with two binding sites in two receptors.

The receptor–dimer cooperativity index

 
Casadó et al (2007).

Almost all existing models that explain heptahelical G-protein-coupled receptor (GPCR) operation are based on the occurrence of monomeric receptor species. However, an increasing number of studies show that many G-protein-coupled heptahelical membrane receptors (HMR) are expressed in the plasma membrane as dimers.

"HMR are a superfamily of receptors with enormous current and future therapeutic potential."

The main aim of any of those models was to explain the behaviour of G-protein-coupled receptors (GPCR).

The majority of the models devised for HMR come from the noncooperative three-state mechanism proposed 50 years ago to explain the behaviour of nicotinic acetylcholine receptors, which are ligand-gated ion channels, not HMR.

Some models for non-interactive receptors


The model considers an orthosteric center where the agonist binds, subsequently displacing the equilibrium towards the active state.

They may modify the value of the dissociation constant (KD) of the agonist but not the total amount of receptors (RTotal). 

The G protein, acting as an allosteric modulator, modifies the agonist binding and/or affects the equilibrium between R and R*.
Since the allosteric modulator (in this case the G protein) does not compete with orthosteric compounds, maximum binding is not affected but KD is.


None of these models, however, is able to satisfactorily explain the binding characteristics of receptors displaying biphasic binding isotherms (e.g., nonlinear Scatchard plots), such as profiles of agonist binding to the orthosteric center with Hill coefficients different from 1.
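A quick sketch of what a Hill coefficient different from 1 means for binding curves (textbook Hill equation; the KD and ligand concentrations are illustrative, not from the paper): fractional occupancy is y = [L]^n / (KD^n + [L]^n), where n = 1 gives the simple hyperbolic, linear-Scatchard case that the monomeric models above assume, and n != 1 signals the cooperativity they cannot reproduce.

```python
# Hill-equation sketch: fractional receptor occupancy as a function of
# ligand concentration, dissociation constant kd, and Hill coefficient n.

def occupancy(ligand, kd=1.0, n=1.0):
    """Fractional occupancy y = L^n / (kd^n + L^n)."""
    return ligand**n / (kd**n + ligand**n)

half = occupancy(1.0)                  # at [L] = KD, occupancy is 0.5 for any n
coop = occupancy(0.5, kd=1.0, n=2.0)   # n > 1: lower occupancy below KD
```

Below KD, a cooperative curve (n > 1) sits under the non-cooperative one and then rises more steeply through the midpoint, which is exactly the biphasic, nonlinear-Scatchard behavior the text refers to.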

Single-molecule imaging revealed dynamic GPCR dimerization

Kasai and Kusumi (2014).

"G-protein-coupled receptors (GPCRs) undergo dynamic equilibrium between monomers and dimers"

"Within one second, GPCRs typically undergo several cycles of monomer and homo-dimer formation with different partners."

"Many GPCR dimers reported in the literature might actually be artifacts due to overexpression, particularly in the case of the class-A GPCRs"


Papers

Francesca Fanelli, Angelo Felline, Dimerization and ligand binding affect the structure network of A2A adenosine receptor, Biochimica et Biophysica Acta 1808 (2011) 1256–1266.  
Diego Guidolin, Francisco Ciruela, Susanna Genedani, Michele Guescini, Cinzia Tortorella, Giovanna Albertin, Kjell Fuxe, Luigi Francesco Agnati. Bioinformatics and mathematical modelling in the study of receptor–receptor interactions and receptor oligomerization. Biochimica et Biophysica Acta 1808 (2011) 1267–1283.
Bruck Taddese, Lisa M. Simpson, Ian D. Wall, Frank E. Blaney, Nathan J. Kidley, Henry S.X. Clark, Richard E. Smith, Graham J.G. Upton, Paul R. Gouldson, George Psaroudakis, Robert P. Bywater and Christopher A. Reynolds, G-protein-coupled receptor dynamics: dimerization and activation models compared with experiment. Biochemical Society Transactions (2012) Volume 40, part 2.
Vincent Casadó, Antoni Cortés, Francisco Ciruela, Josefa Mallol, Sergi Ferré, Carmen Lluis, Enric I. Canela, Rafael Franco. Old and new ways to calculate the affinity of agonists and antagonists interacting with G-protein-coupled monomeric and dimeric receptors: The receptor–dimer cooperativity index. Pharmacology & Therapeutics 116 (2007) 343–354.
Rinshi S Kasai and Akihiro Kusumi. Single-molecule imaging revealed dynamic GPCR dimerization. Current Opinion in Cell Biology 2014, 27:78–86.